I wonder about that. A general experience in software engineering is that abstractions are always leaky and that details always end up mattering, or at least that it’s very hard to predict which details will end up mattering. So there may not be a threshold below which cognitive debt isn’t an issue.
> So there may not be a threshold below which cognitive debt isn’t an issue.
That's my hunch too.
The problem isn't "I don't understand how the code works", it's "I don't understand what my product does deeply enough to make good decisions about it".
No amount of AI assistance is going to fill that hole. You gotta pay down your cognitive debt and build a robust enough mental model that you can reason about your product.
I wouldn’t use the term “product” here. Apart from most software being projects, not products, what I was getting at is that details and design decisions matter at all levels of software. You might have a robust mental model of your product as a product, and about what it does, but that doesn’t mean that you have a good mental model of what’s going on in some sub-sub-sub-module deep within its bowels. Software design has a fractal quality to it, and cognitive debt can accumulate at the ostensibly mundane implementation-detail level as well as at the domain-conceptual level. If you replace “product” by “module”, I would agree.
I think of that as the law of leaky abstractions - https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a... - where the more abstractions sit between you and how things actually work, the more chance there is that something will go wrong at a layer you're not familiar with.
I think of cognitive debt as more of a product design challenge - but yeah, it certainly overlaps with abstraction debt.
Of course! The original attempt wasn’t really AI doing everything. I was writing much of the code but letting AI drive general patterns since I was unfamiliar with web dev. Now, it’s also not entirely without AI, but I am very much steering the ship and my usage of AI is more “low context chat” than “agentic”. IMO it’s a more functional way to interface with AI for anyone with solid engineering skills.
I think the sweet spot is to make the initial stuff yourself and then extend or modify it somewhat with LLMs.

It also acts as a guide for the LLM, so it doesn't have to come up with everything on its own in terms of style or design choices, which helps with consistency I'd say.
For more complex projects I find this pattern very helpful. The last two gens of SOTA models have become rather good at following existing code patterns.
If you have a solid architecture they can be almost prescient in their ability to modify things. However, they're a bit like Taylor series expansions: they're only accurate out to a certain distance from the known basis. Hmm, or like control theory, where you have stable and unstable regimes.
I haven't tried this yet, but I wanted to say I think there's a lot of potential in this space. There's so much friction with the current popular solutions...and yet it's so hard to justify trying some of the newer and less popular ones.
I wish you luck because there are a lot of good ideas in here. Running locally and remote debugger are the most exciting to me.
I'd argue that Tenerife was due to taking off (in bad weather), not landing. But of course, a bunch of planes landing at the same airport without ATC sounds quite dangerous.
There were a lot of contributing causes, but it wouldn't have happened if not for the fact that Tenerife North airport was massively overcrowded due to Gran Canaria airport being suddenly closed (for unrelated reasons) and flights forced to divert.
The issue wasn't with landing specifically; I'm just using it as a general example of the problems caused by chaotic situations in aviation.
This sounds like a very nice compromise actually. I'm surprised it helped with abuse though, since there's a lot of email providers that are easier to create an account with than gmail.
A big part of handling abuse is recognizing that you cannot win - all you can do is do better. And a big part of that is just raising the bar of sophistication required to abuse you. We went from "any random script kiddie with a gmail account gets infinite accounts easily" to "now someone has to use a custom email domain" (which is easy for us to counter by banhammering the domain). That requires both sophistication and money, and it makes the banhammer swing more on par with the amount of effort they have to put in to evade it - banning the domain means go find another domain and pay another registrar fee.
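A minimal sketch of the domain-level banhammer described above. The domain names and the blocklist-in-a-set representation are hypothetical illustrations, not anyone's actual implementation:

```python
# Hypothetical blocklist of banhammered signup domains.
BANNED_DOMAINS = {"cheap-burner-mail.example", "abuse-domain.example"}

def signup_allowed(email: str) -> bool:
    """Reject signups whose email domain has been banned.

    Banning at the domain level means an abuser must acquire a whole
    new domain (sophistication + registrar fee) to get back in.
    """
    domain = email.rsplit("@", 1)[-1].lower()
    return domain not in BANNED_DOMAINS
```

In practice the blocklist would live in a database rather than a hardcoded set, but the check itself stays this cheap.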
Well, you have to go out of your way to prevent it. The sub-addressing complexity is on the email provider side; ticketmaster doesn't have to do anything for it to work except not reject valid email addresses.
In my experience, most but not all sites will accept "+" email addresses.
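For context, sub-addressing means the provider routes everything after the "+" in the local part to the same inbox, so one mailbox yields unlimited distinct-looking addresses. A small sketch of how a site could canonicalize them (only safe for providers known to support "+" sub-addressing, e.g. Gmail):

```python
def canonical_address(email: str) -> str:
    """Collapse a '+' sub-address to its base inbox:
    'alice+tickets@gmail.com' -> 'alice@gmail.com'.

    Note: '+' is a valid character in local parts per RFC 5321, so
    stripping it is only correct for providers that treat it as a
    sub-address delimiter.
    """
    local, _, domain = email.partition("@")
    base = local.split("+", 1)[0]
    return f"{base}@{domain}".lower()
```

Sites that reject "+" addresses outright are being stricter than the email spec allows; sites that silently canonicalize them are making a provider-specific assumption.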
I think that's exactly correct. You either do split queries (with more latency) or you do a join (and risk Cartesian explosion). Most ORMs should do this for you.
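The trade-off is easy to see with two one-to-many relations hanging off the same parent. A self-contained sketch using stdlib sqlite3 (the schema and data are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author(id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book(id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    CREATE TABLE award(id INTEGER PRIMARY KEY, author_id INTEGER, prize TEXT);
    INSERT INTO author VALUES (1, 'Ann');
    INSERT INTO book VALUES (1, 1, 'B1'), (2, 1, 'B2'), (3, 1, 'B3');
    INSERT INTO award VALUES (1, 1, 'P1'), (2, 1, 'P2');
""")

# One round trip, but joining two one-to-many relations multiplies
# rows: 3 books x 2 awards = 6 rows (the Cartesian explosion).
joined = conn.execute("""
    SELECT book.title, award.prize
    FROM author
    JOIN book  ON book.author_id  = author.id
    JOIN award ON award.author_id = author.id
""").fetchall()
assert len(joined) == 6

# Split queries: an extra round trip (more latency), but only
# 3 + 2 rows come back in total.
books = conn.execute("SELECT title FROM book WHERE author_id = 1").fetchall()
awards = conn.execute("SELECT prize FROM award WHERE author_id = 1").fetchall()
assert len(books) == 3 and len(awards) == 2
```

With realistic cardinalities (hundreds of children per relation) the joined result grows multiplicatively while the split queries grow additively, which is why ORMs expose both loading strategies.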
They still cost the same to make regardless, and they’d still need some form of distribution that would take a cut.
PC games aren’t any cheaper at launch than console games. Console games don’t become any cheaper when the cost of manufacturing goes down over the course of a generation.
I think you misunderstand how the cut works and why they’re subsidized.
Take a look at PC games. Steam also takes a 30% cut, which is the same as what the console makers take. Steam doesn’t have to pay anyone back. The 30% is a pretty long standing cut and is about the lowest most incumbents have gone despite reductions in other costs.
The subsidizing of the console is pre-factored into that 30% cost. Historically console manufacturers have also reduced the amount of subsidization over the course of a generation without affecting the price of the games. The subsidizing is to get it into as many homes as possible, but even a few games purchased over the lifetime of a console would negate any subsidy.
There’s no precedent or indication, other than wishful thinking, that removing console subsidies would reduce that cut.
They might mean that rather than using a mock, you use a real typed object/instance of a real thing and inject it into the unit that you’re testing. Admittedly, that might meet the definition of a fake/mock once you get down to the level of testing something that needs db access. Another way of interpreting it is that you can use in-memory versions of your deps, which mirror the interface of your dependency without needing to repeatedly, and possibly haphazardly, mock individual functions of that dependency.
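A sketch of that second interpretation: an in-memory fake that implements the same interface as a real repository, injected into the unit under test instead of stubbing individual methods. All names here (`InMemoryUserRepo`, `greet`) are hypothetical:

```python
class InMemoryUserRepo:
    """In-memory fake mirroring a hypothetical user-repository
    interface, so tests need no database and no per-method mocks."""
    def __init__(self):
        self._users = {}

    def save(self, user_id, name):
        self._users[user_id] = name

    def get(self, user_id):
        return self._users.get(user_id)

def greet(repo, user_id):
    # Unit under test: depends only on the repository interface,
    # so the fake is a drop-in replacement for the real thing.
    name = repo.get(user_id)
    return f"Hello, {name}!" if name else "Hello, stranger!"

repo = InMemoryUserRepo()
repo.save(1, "Ada")
assert greet(repo, 1) == "Hello, Ada!"
assert greet(repo, 2) == "Hello, stranger!"
```

Unlike a mock configured per test, the fake carries real (if simplified) behavior, so it can't silently drift out of sync with how the interface is actually used.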