Author here. I have a JavaScript port of my automated test suite (https://github.com/a-e-k/canvas_ity/blob/main/test/test.html) that I used to compare my library against browser <canvas> implementations. I was surprised by all of the browser quirks that I found!
But compiling to WASM and running side-by-side on that page is definitely something that I've thought about to make the comparison easier. (For now, I just have my test suite write out PNGs and compare them in an image viewer split-screen with the browser.)
Author here. There's no AI-generated code in this. But yes, security hardening this has not been a priority of mine (though I do have some ideas about fuzz testing it), so for now - like with many small libraries of this nature - it's convenient but best used only with trusted inputs if that's a concern.
Author here. No vibe-coding, all human-written. Are you thinking of my use of GitHub emoji on the section headings in the README? I just found they helped my eye pick out the headings a little more easily and I'd seen some other nice READMEs at the time do that sort of thing when I went looking for examples to pattern it off of. I swear I'd had no idea it would become an LLM thing!
I'd originally started with a different version control system and was still getting used to Git and GitHub at the time that I'd released this. (I was a latecomer to Git just because I hated the CLI so much.) It was easiest for me just to drop the whole thing as a snapshot in a single commit.
But my private repo for it actually started in May 2017, and it had 320 commits leading up to its release, all human-written.
For the v2.0 that I have in mind, I'm thinking of force-pushing to migrate the full development history to the public repo.
And finally I'll add that I'm a graphics engineer by education and career. Where would the fun be in vibe-coding this? :-) Oh, and this compiles down to just ~36KiB of object code on x86-64 last I checked. Good luck vibe-coding that constraint.
2. Why a single header with `#define CANVAS_ITY_IMPLEMENTATION`?
I was inspired by the STB header libraries (https://github.com/nothings/stb) and by libraries inspired by those, all of which I've found very convenient. In particular, I like their convenience for small utilities written in a single .cpp file where I can just `g++ -O3 -o prog prog.cpp` or such to compile without even bothering with a makefile or CMake.
Since the implementation here is all within a single #ifdef block, I had figured that anyone who truly preferred separate .cpp and .h files could easily split it themselves in just a few minutes.
But anyway, I thought this would be a fun way of "giving back" to the STB header ecosystem and filling what looked to me like an obvious gap among the available header libraries. It started as something that I'd wished I'd had before, for doing some lightweight drawing on top of images, and it just kind of grew from there. (Yes, there was Skia and Cairo, but both seemed way heavier weight than they ought to be, and even just building Skia was an annoying chore.)
----
Since I mentioned a v2.0, I do have a roadmap in mind with a few things for it: besides the small upgrades mentioned in the GitHub issues to support parts of newer <canvas> API specs (alternate fill rules, conic gradients, elliptical arcs, round rectangles) and text kerning, I'm thinking about porting it to a newer C++ standard such as C++20 (I intentionally limited v1.0 to C++03 so that it could be used in as many places as possible), possibly including a small optional library on top of it to parse and rasterize a subset of SVG, and an optional Python binding.
Not to mention that if somebody needs to come over, the proper thing to do is signal first. Then I'm happy to politely ease off a bit and open more space for them to come over safely.
It's the people who aggressively slide right over just a few feet in front of me (cutting off nearly all of my safety buffer) without so much as a signal that really drive me nuts.
In Austin too, and probably just caused a driver to think the same thing. They were in the left lane on a frontage road which was suddenly turning left, even though there was an entire lane opposite the intersection blocked off by those plastic things that seem popular to randomly place in the road these days. I saw them hesitate and figured they wanted to merge right, so I decelerated a bit to add another car length or so, at maybe 10-15mph. They had plenty of space, flipped on their blinker, and instead of just merging started slowing down, to which I decided I wasn't going to brake more to allow them to block myself and everyone else from rolling through the intersection. They basically stopped in their lane, and beeped as I rolled by, to which someone behind them beeped at them for blocking the lane.
In Austin if you want to merge, decide if you can, blink and then merge.
Don't expect people to stomp on their brakes and stop to let you in, especially if you're already traveling slower than the lane you are trying to get into and decide to slow yourself further.
And if you can't merge, deal with it, exit, or miss your exit and go around. Next time you will be more prepared or you will learn how to properly merge.
My experience driving in MA and NY was similar, but so often it was because a rusted out shitbox was trying to merge in that would slow down traffic significantly, and not only put me at risk of rear ending them, but being rear ended myself.
When flows merge, there's turbulence. There's less turbulence if the flows are more closely matched, including speed.
In the UK we are taught that you should not signal until you are ready to manoeuvre. If you follow the rules exactly, this can put you in the unfortunate position of being penned in behind slower traffic.
Unless I'm the last car in a line and there's plenty of open space behind me. Then you should just wait until after I've passed before merging, because otherwise you create a little ripple in the flow. A few ripples and you got a wave, and that's how you get traffic.
So for the love of gods, if you're merging, even if you signal, match speeds for merging. If you're too slow to match speed, then suck it up buttercup, and hang out in the right lane until there's an opening.
One thing to consider is that this version is a new architecture, so it’ll take time for Llama CPP to get updated. Similar to how it was with Qwen Next.
I was on the RenderMan team at the time and remember thinking it really neat that our system could stand up to that.
I remember finding it mind blowing when I learned that in Brave, the artists weren't just using a texture/displacement mapped surface for the clothing and armor. They were using tori primitives for the chain mail, and curve primitives for the clothing. (I.e., the clothing was actually woven out of curve primitives for the threads.)
> I was on the RenderMan team at the time and remember thinking it really neat that our system could stand up to that.
> I remember finding it mind blowing when I learned that in Brave, the artists weren't just using a texture/displacement mapped surface for the clothing and armor. They were using tori primitives for the chain mail, and curve primitives for the clothing. (I.e., the clothing was actually woven out of curve primitives for the threads.)
> They were using tori primitives for the chain mail, and curve primitives for the clothing. (I.e., the clothing was actually woven out of curve primitives for the threads.)
That sounds mind-blowing. Is this documented anywhere?
I got your autograph in my notebook at EGSR (I think?) 2019, still a little sad I didn't have my PBRT full of autographs at that particular dinner! Next time ;)
A nice example of this is shown in Figure 2 of the paper "Illustrative Rendering in Team Fortress 2" [1] from Valve. It shows how they tried to make the silhouettes of each character class distinct and readable. (And the paper also discusses the choices that went into the color palette.)
In the case of games, that's pretty much optional. Many games (e.g. Battlefield) take the opposite approach, where spotting the enemy in the chaos is intentionally hard and a skill to master. I'm sure there are also intentionally less readable movies, or at least scenes, although no immediate example comes to mind.