It’s funny. The whole "review intent", "learning from past mistakes", etc., is exactly what my current setup does too. For free. Using the .md files said agents generate as they go.
It's been a while since I've Swifted, but the worst offenders were mostly combinations of nested generics, closures, and literals. Annotating the type of the variable those were assigned to could improve type-checking performance a lot.
You don't need to add an annotation on `let foo = bar()` where `bar` is a function that returns `String` or whatever.
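To make that concrete, here's a toy sketch (hypothetical names; the real offenders are usually much bigger expressions):

```swift
// An inference-heavy expression: literals, closures, and an overloaded
// operator make the type checker explore many possible combinations.
let slow = [1, 2, 3].map { $0 * 2 }.reduce(0) { $0 + $1 }

// Annotating the result and the closure parameters prunes the search space,
// so the type checker has far less to solve.
let fast: Int = [1, 2, 3]
    .map { (x: Int) -> Int in x * 2 }
    .reduce(0) { (acc: Int, x: Int) -> Int in acc + x }
```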
Trying to make a .xcframework from SPM is "fun". And getting hold of the Objective-C headers that are generated but treated as intermediate artifacts and discarded is a bit of a pain.
But I guess the issue here is that .xcframework is an Apple thing, not a Swift thing.
The whole "won’t build if the product / target has the same name as one of its classes" is absolutely ridiculous on the other hand.
> Yeah, the "dark factory" thing is basically unproven right now.
Isn’t that as close as "it" gets for now?
> but then I started trusting the model more and more. These days I don’t read much code anymore. I watch the stream and sometimes look at key parts, but I gotta be honest - most code I don’t read. I do know where which components are and how things are structured and how the overall system is designed, and that’s usually all that’s needed.
Even when I was a kid, people were saying all software is either a prototype or obsolete.
The difference is that the cycle has been compressed: it used to be that half of what we know becomes obsolete every 18 months (we just don't know which half); now it's every 18 weeks.
Have you tried the compound engineering plugin? [^1]
My workflow with it is usually brainstorm -> lfg (planning) -> clear context -> lfg (giving it the produced plan to work on) -> compound if it didn’t on its own.
I had some success refactoring one instance of a pattern in our codebase, along with all of the class's call sites, then having Codex identify all the other instances of said pattern and refactor them in parallel, following my initial refactor.
Similarly, I had it successfully migrate a third (so far) of our tests from an old testing framework to a new one, one test suite at a time.
We also had a race condition; given both the unsymbolicated trace and the build's symbols, Claude Code successfully symbolicated the trace and identified the cause. When prompted, it also identified most of the similar instances of the responsible pattern in our codebase (the one it missed was an indirect one).
I didn't care much about the suggested fixes on that last one, but I consider it a success too, especially since I could just keep working on other stuff while it chugged along.
The problem with all of these, even the most recent one, is that they have the "AI look". People have tired of this look already, even for short adverts; if they don't want five minutes of it, they really won't like two hours of it. There is no doubt the quality has vastly improved over time, but I see no sign of progress in removing the "AI look" from these things.
My feeling is the definition of the "AI look" has evolved as these models progressed.
It used to mean psychedelic weird things worthy of the strangest dreams or an acid trip.
Then it meant strangely blurry, with warped alien script and fifteen fingers, including one coming out of another's second phalanx.
Now it means something odd, off, somehow both hard to place and obvious, like the CGI "transparent" car (is it that the 3D model is too simple, looks like a bad glass sculpture, and refracts light in squares?) and the ice cliffs (I think the lighting is completely off, and the colours are wrong) in Die Another Day.
And if that's the case, then these models have covered far more ground in far less time than it took computer graphics and CGI.