The “fix_ci_complete…” script was written (by me) to patch some CI integration issues—if the style looks generic, it’s probably because it’s a standard shell script pattern.
I haven’t used LLMs to write or patch any code in minikv; any fix or automation was written and debugged manually.
If there’s something specific in the script that seems suspect, I’m happy to explain or walk through it line by line.
Again, all implementation code in minikv is mine, and I’m always open to reviewing anything that looks unclear—transparency is important to me.
This script was actually written manually to automate some repeated local fixes—mainly to speed up my workflow and make sure patches were applied consistently (and safely, with backups).
The colorful output and detailed logging are just for clarity and UX; I tend to over-comment my scripts out of habit—no AI tools were involved here (nor elsewhere in the code).
But I get why it might look generic—happy to explain any section line by line if you want!
Those whiteboarding sessions and discussions used to be valuable opportunities for context building. Where will that context get built in the cycle now? During a production incident?
Google's AI Overviews regularly hallucinate or get the answer wrong. That's no surprise: if you're going to run inference on every search from billions of users, it has to be a very cheap model.
You can already do what you're looking for by reading the browser cache as new data is cached. That lets you see the site as it was originally loaded, rather than fetching an updated view from the URL. The on-disk cache layouts for both Firefox and Chrome are documented online.
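A minimal sketch of that idea in Python, assuming Firefox's cache2 layout, where each cached response lives as a single file under `<profile>/cache2/entries`. The directory path, polling approach, and function name here are illustrative assumptions, not any browser's API:

```python
# Hypothetical helper: poll a browser cache directory and report entries
# as they appear, so you can inspect pages as they were originally stored.
from pathlib import Path

def new_cache_entries(cache_dir, seen):
    """Return cache entry files not yet in `seen`, oldest first by mtime.

    `seen` is a set of filenames the caller keeps between polls, so each
    call reports only entries cached since the previous call.
    """
    fresh = [p for p in Path(cache_dir).iterdir()
             if p.is_file() and p.name not in seen]
    seen.update(p.name for p in fresh)
    return sorted(fresh, key=lambda p: p.stat().st_mtime)
```

Calling this in a loop against the live cache directory would surface each new entry as it lands; parsing the entry contents themselves (stored headers and body) is a separate step, using the published format for whichever browser you target.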
The unfortunate reality is that, depending on your taste in games, "most modern games" really do require such a ring 0 anti-cheat: practically any game with a competitive matchmaking mode ships a rootkit.
As an aside, I recently found Riot Games' Vanguard installed on my Linux ESP partition... after installing the game on my Windows partition. It rooted every OS it could find mounted. Incredible.
I've found the latency of /compact makes it unusable, though perhaps that's just because I wait until I have 0% context remaining.
Fun fact: a large chunk of context is reserved for compaction. When you're shown "0% context remaining," you actually have roughly 30% left, set aside for compaction.
And yet, about half the time, compaction seems to fail because it runs out of context or hits (non-rate) API limits.
Weirdly, I've found that when that happens I can close Claude and then run `claude --continue`, and suddenly it has room to compact. Makes no sense.
But I have no idea what state it will be in after compacting, so it's better to ask it to write a complete and thorough report, including which source files to read. It's a lot more work, but better than going off the rails.