We're living in such interesting times - you can talk to a computer and it works, in many cases at an extraordinary level - yet you still see intellectually constipated opinions arguing against basic facts established years ago. Incredible.
It has been an interesting experience - like trolling, except you actually believe what you're saying. I wonder how you arrived at it: is it fear, insecurity, ignorance, a feeling of injustice, or maybe something else? I wonder what bothers you about LLMs?
Judging by the site, they don't have insightful answers to these questions. It's broken with weird artifacts, errors, and amateurish console printing in PROD.
I definitely don't have insightful answers to these questions, just the ones I gave in the sibling comment an hour before yours. How could someone who uses LLMs be expected to know anything, or even be human?
Alas, I did not realize I was being held to the standard of having no bugs under any circumstance, and printing nothing to the console.
I have removed the amateurish log entries, I am pitiably sorry for any offense they may have caused. I will be sure to artisanally hand-write all my code from now on, to atone for the enormity of my sin.
Yeah, all of the above was a single bug in the plot allocation code: the exception handler for the transaction rollback had the wrong name. It's working again.
> how did you make sure that each new prompt didn't break some previous functionality?
For the backend, I reviewed the code and steered it to better solutions a few times (fewer than I thought I'd need to!). For the frontend, I only tested and steered, because I don't know much about React at all.
This was impossible with previous models, I was really surprised that Codex didn't seem to completely break down after a few iterations!
> did you have a precise vision
I had a fairly precise vision, but the LLM made some good contributions. The UI aesthetic is mostly the LLM, as I'm not very good at that. The UX and functionality are almost entirely me.
> did you not run into this problem described by ilya below
I used to run into a related issue, where fixing a bug would add more bugs, to the point where the model couldn't progress past a given codebase complexity. However, Codex is much better at not doing that. There were some cases where the model kept going back and forth between two bugs, but I discovered that was because I had misunderstood the constraints and was telling the model to do something impossible.
> how did you discover that, and why did it slip out?
Sentry alerted me but I thought it was an edge case, and I didn't pay attention until hours later.
I use a spiral allocation algorithm to assign plots, so new users are clustered around the center. Plots are sometimes emptied (when a user goes inactive), which leaves gaps in the spiral that the algorithm tries to fill, and it's meant to move on to the next plot if the current one can't be assigned.
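Roughly, the loop looks like this (a minimal sketch of the idea, not the actual code; all names are hypothetical):

```python
def spiral_coords():
    """Yield grid coordinates walking outward from the center in a square spiral."""
    x = y = 0
    dx, dy = 0, -1
    while True:
        yield x, y
        # Turn at the spiral's corners (classic square-spiral walk).
        if x == y or (x < 0 and x == -y) or (x > 0 and x == 1 - y):
            dx, dy = -dy, dx
        x, y = x + dx, y + dy

def allocate_plot(user, is_taken, try_assign):
    """Walk the spiral, filling gaps left by emptied plots; if a plot
    can't be assigned, move on to the next one."""
    for coord in spiral_coords():
        if is_taken(coord):
            continue
        if try_assign(user, coord):  # may fail, e.g. racing another signup
            return coord
```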
For one specific plot, however, conditions were such that the database was giving an integrity error. The exception handling code that was supposed to handle that didn't take into account that it needed to roll back before resuming, so the entire request failed, instead of resuming gracefully. Just adding an atomic() context manager fixed it.
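Concretely, this is the standard Django pattern of wrapping the failing query in its own atomic() block, so an IntegrityError only rolls back a savepoint instead of poisoning the whole transaction. A sketch, with a hypothetical Plot model and signature, not the actual code:

```python
from django.db import IntegrityError, transaction

def assign_first_free_plot(user, candidates):
    # Plot is a hypothetical model with a uniqueness constraint on (x, y).
    for x, y in candidates:
        try:
            # atomic() opens a savepoint here; if the INSERT raises
            # IntegrityError, only the savepoint is rolled back and the
            # surrounding transaction stays usable.
            with transaction.atomic():
                return Plot.objects.create(owner=user, x=x, y=y)
        except IntegrityError:
            # This plot conflicted; resume gracefully with the next one.
            continue
    return None
```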
> looks like site wasn't working at all when you posted that comment?
It was working for a few hundred (thousand?) visitors; then the allocation code hit the plot that triggered the bug, and signup couldn't proceed after that.
> Just adding an atomic() context manager fixed it.
OK, looks like you're intimately familiar with the code being produced and are using AI as a code generator rather than doing pure vibe coding.
That makes sense to me.
Btw, did the AI add that line when you explained what the error was, or did you add it manually?
No, I paste the traceback, ask it to explain the error, judge whether that makes sense, and either ask it to fix it or say "that makes no sense, please look again/change the fix/etc".
I know several friends and family members who cycled through there (non-tech). They pay relatively higher salaries for the Chicago suburbs area, so a lot of people take their chances despite the bad reputation around here.
Assuming you mean "context engineering" as in "engineering the context for LLM prompts" - 0.
I take particular issue with the use of the word "engineering" in this context, as in my practical experience what I witnessed was more akin to "try random things until it somewhat works" than anything I would associate with "engineering". But hey, it's a free country and people can use words however they want. They just shouldn't be confused if no one keeps listening ;)
He compared it to students who win math competitions but can't do anything practical.