Thanks so much! I've learned way more than I anticipated (or even wanted, lol) about shaders and resource management. Now that the pattern is established, I'm going to build a barebones version with templates so folks can fork it and make a much smaller package with their custom resources.
Have to agree with _pdp_ on this one. I just don't see the need for an LLM agent to do a recursive grep for API keys in public repos.
Not saying people shouldn't build these tools, but the use case is lost on me.
It feels like the industry is in this weird phase of trying to replace 30-year-old, perfectly optimized shell utilities with multi-shot agent workflows that literally cost money to run. A basic Python script with a regex matcher and the GitHub API will find these keys faster, cheaper, and more reliably.
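The kind of basic script described above might look something like this. The patterns and the idea of feeding it fetched file contents are illustrative, not a real tool, though the AWS `AKIA...` and GitHub `ghp_...` formats are well-known credential shapes:

```python
import re

# Illustrative patterns for two well-known credential formats.
KEY_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),       # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),    # GitHub personal access token
]

def find_key_candidates(text):
    """Return all substrings that look like leaked credentials."""
    hits = []
    for pattern in KEY_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

# Feeding this file contents pulled from the GitHub search API (or a
# local clone) is a plain loop -- no agent required.
print(find_key_candidates("aws_key = 'AKIAABCDEFGHIJKLMNOP'"))
```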
I tried going back to Thunderbird for RSS recently just to get away from bloated web readers. The fact that you can use standard email rules to filter out high-volume noise is actually amazing. You can just auto-archive posts based on regex or keywords.
But the lack of simple cross-device sync killed the experiment for me. If you read a few articles on your phone while commuting, your desktop client has no idea when you get home. It is a great setup if you only ever consume news at one desk, but I ended up just sticking with Miniflux so my unread counts stay sane.
> But the lack of simple cross-device sync killed the experiment for me. If you read a few articles on your phone while commuting, your desktop client has no idea when you get home.
There is an open source service named gpodder.net (web app, not the client app) that does this for podcasts. It doesn't just sync post read status, it can also sync added podcast feeds across supported clients on all devices.
Since podcasts are based on RSS feeds, this shows that what you seek is possible with regular feeds too. I don't know yet how gpodder does it, but that should be easy enough to find out because the web app seems to have good documentation in addition to being open source.
However, looking at the RSS and Atom feed formats, they seem to include a unique ID per story (`<guid>` in RSS, `<id>` in Atom). This works like a Message-ID in email and should be useful for cross-device read-status sync. This could be what gpodder uses for sync. A similar service for regular feeds would be easy enough to build, but it would need support across feed readers too, the way several podcast clients support gpodder.
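As a sketch of that idea: a sync service would only need to store the set of per-item IDs each user has read. The sample feed and function here are illustrative, using the `<guid>` element RSS 2.0 defines per item:

```python
import xml.etree.ElementTree as ET

SAMPLE_RSS = """<rss version="2.0"><channel>
  <item><title>Post A</title><guid>tag:example.com,2024:a</guid></item>
  <item><title>Post B</title><guid>tag:example.com,2024:b</guid></item>
</channel></rss>"""

def item_ids(feed_xml):
    """Extract the per-item <guid> values from an RSS feed.

    These behave like email Message-IDs: stable keys a sync server can
    use to track which stories a user has already read."""
    root = ET.fromstring(feed_xml)
    return [g.text for g in root.iter("guid")]

read_on_phone = set(item_ids(SAMPLE_RSS))  # state pushed to the sync server
# A desktop client would later mark anything in this set as read.
```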
> It is a great setup if you only ever consume news at one desk, but I ended up just sticking with Miniflux so my unread counts stay sane.
I'm considering deploying an aggregator too, so I'm curious: what made you settle on Miniflux?
Not quite sure what's going on with that project, but when I looked, gpodder.net was a subscription service and the FOSS project was somewhat hidden and renamed to mygpo. Felt a bit suss and abandoned, although I guess an RSS sync server could just be "done".
There is also opodsync that seems to be a bit more alive and popular and says it's gpodder compatible. Not tried it though.
My enthusiasm for self-hosting my podcast/RSS sync was killed dead when I gave Nextcloud another go, since it can apparently do this. Every few years I set it up, having forgotten that the last time I did, I swore never to touch it again. I can't believe it's still such a bad experience.
I sync up my newsboat feed reader with syncthing so I am up to date on multiple devices. I wonder if Thunderbird could be made to work in the same way...
How do you handle sync conflicts when the same expense gets edited offline on two devices before syncing? Last-write-wins, or something more sophisticated?
I maintain a few small open source projects and the PR quality went off a cliff once AI coding tools got popular. People generate something that compiles and looks reasonable but is wrong in ways that take me longer to explain than to just fix myself.
The weird part is I use AI heavily for my own code and it's great. But I know the codebase since I wrote it from scratch. Someone who doesn't know it feeding prompts to Copilot produces code that passes lint and still misses the point entirely.
Hashimoto locking contributions to vouched users makes total sense. The old assumption was that effort implied understanding. That's just not true anymore.
Have you tried, as Peter Steinberg suggested, asking for specifications to be submitted? ("I ask for Specifications to be submitted, I can then generate the code based on that spec in a minute.") That's exactly what we've found useful: if a contributor wants to submit a PR, they also submit a specification from a template we provide. We have automated pipelines that verify the quality of that specification with GitHub Actions. That way you can see the reasoning, the deep thinking, the insights, the use cases the contributor was trying to solve, and the environment in which this was done.
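A minimal sketch of that kind of pipeline check (the section names and script are hypothetical, not the actual Actions workflow): a CI step can simply fail if the spec is missing required headings.

```python
import re
import sys

# Hypothetical section headings a spec template might require; the real
# template and checks are whatever your project defines.
REQUIRED_SECTIONS = ["Motivation", "Use Cases", "Proposed Design", "Environment"]

def check_spec(text):
    """Return the list of required sections missing from a spec document."""
    return [s for s in REQUIRED_SECTIONS
            if not re.search(rf"^#+\s*{re.escape(s)}\b", text, re.MULTILINE)]

if __name__ == "__main__" and len(sys.argv) > 1:
    missing = check_spec(open(sys.argv[1]).read())
    if missing:
        print("Spec incomplete, missing sections:", ", ".join(missing))
        sys.exit(1)  # non-zero exit fails the GitHub Actions job
```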
This has been my experience too. AI is like handing someone a motorcycle. Sure they're going to move faster but those without a map won’t necessarily be heading in the right direction.
I spent a while trying to sync animation to music and the biggest surprise was how much rhythm alone does. Change BPM, same notes, same key, and the emotional feel completely flips. 120 BPM is neutral. 60 is contemplative. 160 is anxious. Nothing else changed.
If you strip a song down to just its rhythm, no pitch, no timbre, people can still tell you what emotion it's going for. Add the pitch back and it barely moves the needle. Timing between events does most of the heavy lifting. The actual "sound" part is almost window dressing on a temporal skeleton.
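The tempo-to-feel mapping described above can be sketched as a lookup; the cutoff thresholds here are my illustrative guesses, since the original observation only anchors 60, 120, and 160 BPM:

```python
def tempo_feel(bpm):
    """Map tempo to the rough emotional register described above.

    Cutoffs are illustrative guesses; only 60 (contemplative),
    120 (neutral), and 160 (anxious) BPM come from the observation."""
    if bpm <= 80:
        return "contemplative"
    if bpm <= 135:
        return "neutral"
    return "anxious"

for bpm in (60, 120, 160):
    print(bpm, tempo_feel(bpm))
```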
Started this about 6 months ago. Wanted an animated character that could react to conversation, not just sit there as a static avatar.
First version was Canvas 2D: shape morphing and particles. Worked fine for basic emotions but felt limited, so I added a WebGL 3D renderer with custom shaders, bloom, AO, and a post-processing chain.
About 6 weeks ago I started the elemental system. Fire first. FBM noise, decoupled color from alpha so additive stacking looks natural instead of washing out. Then water (screen-space refraction, spray particles). By the third element (ice — Snell's law, Voronoi crack lines, chromatic dispersion), the architecture was clearly repeating: factory + instanced GPU material + overlay shader + gesture configs + registration hook. Same five pieces every time.
So I kept going. Electricity, earth, nature, light, void. Eight total in about 6 weeks. Wanted to see if the pattern held or if I'd been fooling myself.
161 gestures later it held. Each gesture is about 20 lines of config composing from archetypes. The factory does the rest.
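A hypothetical reconstruction of that composition pattern, in Python for brevity (the actual project is WebGL/JS, and all names here are illustrative): each gesture is a small override dict layered on top of shared archetypes, and a factory consumes the result.

```python
# Illustrative archetypes a gesture config might compose from.
ARCHETYPES = {
    "flicker": {"noise": "fbm", "speed": 1.0, "additive": True},
    "burst":   {"particles": 200, "lifetime": 0.5},
}

def make_gesture(name, archetypes, **overrides):
    """Compose a gesture config from archetypes plus a few overrides.

    A factory would then build the instanced material, overlay shader,
    and registration hooks from this dict -- the 'same five pieces'."""
    config = {"name": name}
    for a in archetypes:
        config.update(ARCHETYPES[a])
    config.update(overrides)
    return config

# ~20 lines of per-gesture config in the real thing; one line here.
fire_lick = make_gesture("fire_lick", ["flicker", "burst"], speed=2.5)
```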
Shuffle button in the demo shows all 161. Press G for the GPU monitor. `window.profiler` is exposed if you want to poke around.
Happy to talk about any of the shaders, the rendering pipeline, or things I got wrong along the way.