In the latest interview with Claude Code's author: https://podcasts.apple.com/us/podcast/lennys-podcast-product..., Boris said that writing code is a solved problem. This brings me to a hypothetical question: if engineers stopped contributing to open source, would AI still be powerful enough to learn the knowledge of software development in the future? Or has the field of computer science plateaued to the point that most of what we do is a linear combination of well-established patterns?
> Boris said that writing code is a solved problem
That's just so dumb to say. I don't think we can trust anything that comes out of the mouths of the authors of these tools. They are conflicted. Conflict of interest, in society today, is such a huge problem.
There are bloggers that can't even acknowledge that they're only invited out to big tech events because they'll glaze them up to high heavens.
Reminds me of that famous exchange, by noted friend of Jeffrey Epstein, Noam Chomsky: "I’m not saying you’re self-censoring. I’m sure you believe everything you say. But what I’m saying is if you believed something different you wouldn’t be sitting where you’re sitting."
He is likely working on a very clean codebase where all the context is already reachable or indexed. There are probably strong feedback loops via tests. Some areas I contribute to have these characteristics, and the experience is very similar to his. But in areas where they don’t exist, writing code isn’t a solved problem until you can restructure the codebase to be more friendly to agents.
Even with full context, writing CSS in a project where vanilla CSS is scattered around and wasn’t well thought out originally is challenging. Coding agents struggle there too, just not as much as humans, even with feedback loops through browser automation.
It's funny that "restructure the codebase to be more friendly to agents" aligns really well with what we were "supposed" to have been doing already, but many teams slack on: quality tests that are easy to run, and great documentation. Context and verifiability.
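As a concrete illustration of "quality tests that are easy to run" (a minimal sketch using JUnit 5; the slug helper is hypothetical, not something from this thread):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class SlugTest {
    // A hypothetical helper, inlined so the example is self-contained.
    static String slug(String s) {
        return s.trim().toLowerCase().replaceAll("\\s+", "-");
    }

    // Fast, deterministic, no fixtures or network: a human or an agent can run it
    // after every change and get an unambiguous pass/fail signal.
    @Test
    void lowercasesAndHyphenates() {
        assertEquals("hello-world", slug("  Hello   World "));
    }
}
```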
The easier your codebase is to hack on for a human, the easier it is for an LLM generally.
It’s really interesting. It suggests that intelligence is intelligence, and the electronic kind also needs the same kinds of organization that humans do to quickly make sense of code and modify it without breaking something else.
I had this epiphany a few weeks ago, and I'm glad to see others agreeing. Eventually most models will handle context windows large enough that this will sadly not matter as much, but it would be nice for the industry to keep doing everything it can to produce better-looking code that humans can see and appreciate.
Having picked up a few long neglected projects in the past year, AI has been tremendous in rapidly shipping quality of dev life stuff like much improved test suites, documenting the existing behavior, handling upgrades to newer framework versions, etc.
I've really found it's a flywheel once you get going.
Truth. I've had a much easier time grappling with codebases I keep clean and compartmentalized when using AI; over-stuffing the context is one of the main killers of its quality.
I think you mean software engineering, not computer science. And no, I don’t think there is reason for software engineering (and certainly not for computer science) to be plateauing. Unless we let it plateau, which I don’t think we will. Also, writing code isn’t a solved problem, whatever that’s supposed to mean. Furthermore, since the patterns we use often aren’t orthogonal, it’s certainly not a linear combination.
I assume that new business scenarios will drive new workflows, which will require new software engineering work. In the meantime, I assume that computer science will drive paradigm shifts, which will drive truly different software engineering practices. If we don't have advances in algorithms, systems, and so on, I'd assume that people will slowly abstract away all the hard parts, enabling AI to do most of our jobs.
Or does the field plateau because engineers treat "writing code" as a "solved problem"?
We could argue that writing poetry is a solved problem in much the same way, and while I don't think we especially need 50,000 people writing poems at Google, we do still need poets.
> while I don't think we especially need 50,000 people writing poems at Google, we do still need poets
I'd assume that an implied concern of most engineers is how many software engineers the world will need in the future. If it's a situation like the world needing poets, then the field is only for the lucky few, and most people would be out of a job.
I don’t believe people who have dedicated their lives to open source will simply want to stop working on it, no matter how much is or is not written by AI. I also have to agree, I find myself more and more lately laughing about just how much resources we waste creating exactly the same things over and over in software. I don’t mean generally, like languages, I mean specifically. How many trillions of times has a form with username and password fields been designed, developed, had meetings over, tested, debugged, transmitted, processed, only to ultimately be re-written months later?
I wonder what all we might build instead, if all that time could be saved.
> I don’t believe people who have dedicated their lives to open source will simply want to stop working on it, no matter how much is or is not written by AI.
Yeah, hence my question can only be hypothetical.
> I wonder what all we might build instead, if all that time could be saved
If we subscribe to economics' broken-window fallacy, then the investment into such repetitive work is not investment but waste. Once we stop such investment, we will have a lot more resources to work on something else, ushering in a new chapter of the tech revolution. Or so I hope.
> If we subscribe to economics' broken-window fallacy, then the investment into such repetitive work is not investment but waste. Once we stop such investment, we will have a lot more resources to work on something else, ushering in a new chapter of the tech revolution. Or so I hope.
I'm not sure I agree with the application of the broken-window fallacy here. It's a metaphor intended to counter arguments in favor of make-work projects for economic stimulus: breaking a window is always a net negative for the economy, because even though it creates demand for a replacement window, the resources spent replacing a window that already existed merely restore the status quo ante, and the opportunity cost is everything else those same resources might have been used for if the window hadn't been broken.
I think that's quite distinct from manufacturing new windows for new installations, which is net positive production, and where newer use cases for windows create opportunities for producers to iterate on new window designs, and incrementally refine and improve the product, which wouldn't happen if you were simply producing replacements for pre-existing windows.
Even in this example, lots of people writing lots of different variations of login pages has produced incremental improvements -- in fact, as an industry, we haven't been writing the same exact login page over and over again, but have been gradually refining them in ways that have evolved their appearance, performance, security, UI intuitiveness, and other variables considerably over time. Relying on AI to design, not just implement, login pages will likely be the thing that causes this process to halt, and perpetuate the status quo indefinitely.
I saw Boris give a live demo today. He had a swarm of Claude agents one-shot the most upvoted open issue on Excalidraw while he explained Claude Code for about 20 minutes.
No lines of code were written by him at all. The agent used Claude for Chrome to test the fix in front of us all, and it worked. I think he may be right, or close to it.
There are so many timeless books on how to write software, design patterns, and lessons learned from production issues. I don't think AI will stop being used for open source; in fact, with the increasing number of projects adjusting their contributor policies to account for AI, I would argue that what we'll see is both people who love to hand-craft their own code and people who use AI to build their own open source tooling and solutions. We will also see an explosion in the need for specs. If you give a model a well-defined spec, it will follow it. I get better results the more specific I get about how I want things built and which libraries I want used.
Sure, people did it for the fun and the credit, but the fun quickly goes out of it when the credit goes to the IP laundromat and the fun is had by the people ripping off your code. Why would anybody contribute their work for free in an environment like that?
> has the field of computer science plateaued to the point that most of what we do is a linear combination of well-established patterns?
Computer science is different from writing business software to solve business problems. I think Boris was talking about the second, not the first. And I personally think he is mostly correct, at least for my organization. It is very rare for us to write any code by hand anymore. Once you have a solid testing harness and a peer review system run by multiple, different LLMs, you are in pretty good shape for agentic software development. Not everybody has these bits figured out. They stumble around and then blame the tools for their failures.
> Not everybody has these bits figured out. They stumble around and then blame the tools for their failures.
Possible. Yet that's a pretty broad brush. It could also be that some businesses are more heavily represented in the training set. Or some combo of all the above.
Yes, there are common parts to everything we do. At the same time, I've been doing this for 25 years, and most projects have some new part to them.
Novel problems are usually a composite of simpler and/or older problems that have been solved before. Decomposition means you can rip most novel problems apart and solve the chunks. LLMs do just fine with that.
My prediction: soon (within a few years) the agents will be the ones doing the exploration and building better ways to write code, building frameworks, and so on, replacing open source. That said, software engineers will still be in the loop. But there will be far fewer of them.
Just to add: this is only the prediction of someone who has a decent amount of information, not an expert or insider
Generally us humans come up with new things by remixing old ideas. Where else would they come from? We are synthesizing priors into something novel. If you break the problem space apart enough, I don't see why some LLM can't do the same.
Even as the field evolves, the phoning home telemetry of closed models creates a centralized intelligence monopoly. If open source atrophies, we lose the public square of architectural and design reasoning, the decision graph that is often just as important as the code. The labs won't just pick up new patterns; they will define them, effectively becoming the high priests of a new closed-loop ecosystem.
However, the risk isn't just a loss of "truth," but model collapse. Without the divergent, creative, and often weird contributions of open-source humans, AI risks stagnating into a linear combination of its own previous outputs. In the long run, killing the commons doesn't just make the labs powerful. It might make the technology itself hit a ceiling because it's no longer being fed novel human problem-solving at scale.
Humans will likely continue to drive consensus building around standards. The governance and reliability benefits of open source should grow in value in an AI-codes-it-first world.
> It might make the technology itself hit a ceiling because it's no longer being fed novel human problem-solving at scale.
My read of the recent discussion is that people assume the work of a far smaller number of elites will define the patterns of the future. For instance, an implementation of low-level networking code can be a combination of patterns from zeromq. The underlying assumption is that most people don't know how to write high-performance concurrent code anyway, so why not just ask them to command the AI instead.
That is the same team that shipped an app that used React for its TUI, that used gigabytes of memory for a scrollback buffer, and that had text scrolling so slow you could get a coffee in between.
And that then had the gall to claim writing a TUI is as hard as a video game. (It clearly must be harder, given that most dev consoles or text interfaces in video games consistently use less than ~5% CPU, which at that point was completely out of reach for CC)
He works for a company that crowed about an AI-generated C compiler that was so overfitted, it couldn't compile "hello world"
So if he tells me that "software engineering is solved", I take that with rather large grains of salt. It is far from solved. I say that as somebody who's extremely positive on AI usefulness. I see massive acceleration for the things I do with AI. But I also know where I need to override/steer/step in.
I wanted to write the same comment. These people are fucking hucksters. Don’t listen to their words, look at their software … says all you need to know.
Even if you like them, I don't think there's any reason to believe what people from these companies say. They have every reason to exaggerate or outright lie, and the hype cycle moves so quickly that there are zero consequences for doing so.
Or perhaps software engineers are not coachmen, with AI as the diesel engine that replaced their horses. Instead, software engineers are minstrels -- they disappear if all they do is move knowledge from one place to another.
The value of a human repeating another human's know-how through hard work and thinking is going down. Unfortunately (or fortunately?), much programming is one human repeating another human's know-how.
> Which is to say that the pleasure I get from programming is mostly about learning the underlying truths about computation and applying what I’ve learned. Always improving the craft. This, to me, is the practice of programming.
I think the pleasure and the identity are still there. It's just that we need to up our game. How many times can one really enjoy writing a binary search, or a service that essentially transforms data from one JSON format to another? Fortunately or unfortunately, the huge wave of Internet+Big Data+Cloud+Mobile+Social has become part of our daily life, and what we do day in and day out has become increasingly mundane. As a result, what we implement day in and day out is getting increasingly repetitive -- and I think that repetitiveness, once it crosses a certain threshold, takes away the pleasure of programming and destroys our identity.
I don't know. I lost all my trust in the Democrats when the Biden administration used a bulldozer to tear open the barbed wire and let people cross our border freely, and his spokesperson told us the border was fine. At that moment, the Democrats were like Bush, who started a war by lying. I'd rather give Republicans the benefit of the doubt and see them crush the Democrats for decades.
Yeah, there are a lot of people who believe in destroying the US/Constitution because things didn't go the way they wanted at some point. It's a 100% un-American position, but thanks for speaking up and showing that Republicans do not give a f' about anything but their pet peeve issue and will burn it all down over it. You don't care that Republicans are literally and intentionally bankrupting the nation with their recent big beautiful bill tax cuts and are ACTIVELY and INTENTIONALLY, by design, the party of financial irresponsibility, all while claiming to be the party of... financial responsibility.
Just like Republicans run up the debt with crazy tax cuts then say 'look at this unsustainable debt'. Republicans care more about power and their pet issues than the health and stability of our nation and the people within it.
> RustFS and SeaweedFS are the fastest in the object storage field.
I'm not sure SeaweedFS is comparable. It's based on Facebook's Haystack design, which addresses a very specific use case: minimizing I/Os, in particular metadata lookups, for accessing individual objects. This leads to many trade-offs. For instance, its main unit of operation is the volume. Data is appended to a volume. Erasure coding is done per volume. Updates are done at the volume level, and so on.
On the other hand, a general object store goes beyond needle-in-a-haystack type of operations. In particular, people use an object store as the backend for analytics, which requires high-throughput scans.
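For readers who haven't seen the Haystack design: the trick is to keep per-object metadata in memory and append the objects themselves to a large volume file, so a read costs roughly one positioned disk read. A minimal sketch of the idea (illustrative only, not SeaweedFS code; class and method names are made up):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.HashMap;
import java.util.Map;

// Toy illustration of the needle-in-a-haystack layout: objects are appended to a
// large volume file, and a small in-memory index maps each key to its offset and
// size, so reading an object avoids separate on-disk metadata lookups.
class HaystackVolumeSketch {
    private final RandomAccessFile volume;                      // append-only volume file
    private final Map<String, long[]> index = new HashMap<>();  // key -> {offset, size}

    HaystackVolumeSketch(String path) throws IOException {
        this.volume = new RandomAccessFile(path, "rw");
    }

    // Write: append the needle at the end of the volume and remember where it went.
    synchronized void put(String key, byte[] data) throws IOException {
        long offset = volume.length();
        volume.seek(offset);
        volume.write(data);
        index.put(key, new long[] { offset, data.length });
    }

    // Read: one in-memory index lookup, then a single positioned read from disk.
    synchronized byte[] get(String key) throws IOException {
        long[] loc = index.get(key);
        if (loc == null) return null;
        byte[] out = new byte[(int) loc[1]];
        volume.seek(loc[0]);
        volume.readFully(out);
        return out;
    }
}
```

Because everything hangs off the volume file, housekeeping such as erasure coding naturally happens per volume, which is exactly the trade-off described above.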
This level of geekiness is amazing. I hope more, a lot more, Americans can get into STEM with this level of passion. It's sad that over the past few decades more and more people seem to have forgotten that STEM is a pillar of the modern civilization we enjoy.
Maybe a better question is: is natural language to code what high-level programming is to hand-written assembly? Brooks claims the "essential complexity" lies in the specification: if a spec is precise enough to be executable, it's just code by another name. But is the gap actually that large today? When I ask for a "centered 3x3 Tailwind grid", the patterns are so standardized that the ambiguity nearly vanishes. It's like asking for a Java 8 main method. The implementation is so predictable that the intent and the code are one and the same. Or, in jargon, most coding has a strong prior that leads to a predictable posterior.
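To make that concrete: the request "a Java 8 main method" admits essentially one answer, so the prompt and the code carry the same information (the class name and the printed string here are arbitrary choices):

```java
// The canonical entry point: the stated intent fully determines the implementation.
public class Main {
    public static void main(String[] args) {
        System.out.println("Hello, world");
    }
}
```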
The key question now is: how far can AI go? It started with simple auto-completion, but as AI absorbs more procedural know-how, it becomes capable of generating increasingly larger chunks of maintainable code. Perhaps we are reaching a point where established patterns are so well-understood that AI can bridge the gap between a vague intent and a working system, effectively automating away what Brooks once considered essential complexity.
In the long run, this probably makes experts more valuable, but it’ll gut the demand for standard engineers. So much of our market value is currently tied to how hard it is to transfer expertise among humans. AI renders that bottleneck moot. Once the know-how is commoditized, the only thing left is the what and why.
One thing I realized is that a lot of our so-called "craft" is converged "know-how". Take the recent news that Anthropic used Claude Code to write a C compiler, for example. Writing a compiler is hard (and fun) for us humans because we need to spend years deeply understanding compiler theory and learning every minute detail of implementation. That kind of learning is not easily transferable. Most students try the compiler class and never learn enough; only a handful each year continue to grow into true compiler engineers. Yet to our AI models, it doesn't matter much. They have already learned the well-established patterns of compiler writing from excellent open-source implementations, and now they can churn out millions of lines of code easily. If not perfect, they will get better in the future.
So, in a sense our "craft" no longer matters; what has really happened is that the repetitive know-how has become commoditized. We still need people to do creative work, but what is not clear is how many such people we will need. After all, at least in the short term, most people build their careers by perfecting procedural work, because transferring the know-how and the underlying whys is very expensive for humans. For the long term, though, I'm optimistic that engineers have just gotten an amazing tool and will use it to create more opportunities that demand more people.
I'm not sure we can draw useful conclusions from the Claude Code written C compiler yet. Yes, it can compile the Linux kernel. Will it be able to keep doing that moving forward? Can a Linux contributor reliably use this compiler to do their development, or do parts of it simply not work correctly if they weren't exercised in the kernel version it was developed against? How will it handle adding new functionality? Is it going to become more-and-more expensive to get new features working, because the code isn't well-factored?
To me this doesn't feel that many steps above using a genetic algorithm to generate a compiler that can compile the kernel.
If we think back to pre-AI programming times, did anyone really want this as a solution to programming problems? Maybe I'm alone in this, but I always thought the problem was figuring out how to structure programs in such a way that humans can understand and reason about them, so we can have a certain level of confidence in their correctness. This is super important for long-lived programs, where we need to keep making changes. And no, tests are not sufficient for that.
Of course, all programs have bugs, but there's a qualitative difference between a program designed to be understood, and a program that is effectively a black box that was generated by an LLM.
There's no reason to think that at some point, computers won't be able to do this well, but at the very least the current crop of LLMs don't seem to be there.
> and now they can churn out millions of lines of code easily.
It's funny how we suddenly shifted from making fun of managers who think programmers should be measured by the number of lines of code they generate, to praising LLMs for the same thing. Why did this happen? Because just like those managers, programmers letting LLMs write the code aren't reading and don't understand the output, and therefore the only real measure they have for "productivity" is lines of code generated.
Note that I'm not suggesting that using AI as a tool to aid in software development is a bad thing. I just don't think letting a machine write the software for us is going to be a net win.
These are toy compilers missing many edge cases. You’ll be lucky if they support anything other than integer types, nevermind complex pointer-to-pointer-to-struct-with-pointers type definitions. They certainly won’t support GNU extensions. They won’t compile any serious open source project, nevermind the Linux kernel.