enos_feedler's comments | Hacker News

> The darkest hour is just before the dawn

Of course that is true. The nuance here is that software isn't just getting cheaper; the activity of building it is changing. Instead of writing lines of code, you are writing requirements. That shifts who can do the job. The customer might be able to do it themselves. This removes a market, not grows one. I am not saying the market will collapse, just be careful applying a blunt theory to such a profound technological shift, one that isn't just lowering cost but changing the entire process.

You say that like someone who has been coding for so long you have forgotten what it's like to not know how to code. The customer will have little idea what is even possible and will ask for a product that doesn't solve their actual problem. AI is amazing at producing answers you previously would have looked up on Stack Overflow, which is very useful. It often can type faster than I can, which is also useful. However, if we were going to see the exponential improvements towards AGI that AI boosters talk about, we would have already seen the start of it.

When LLMs first showed up publicly it was a huge leap forward, and people assumed they would continue improving at the rate they had seen, but they haven't.


Exactly. The customer doesn't know what's possible, but increasingly neither do we unless we're staying current at frontier speed. AI can type faster and answer Stack Overflow questions. But understanding what's newly possible, what competitors just shipped, what research just dropped... that requires continuous monitoring across arXiv, HN, Reddit, Discord, Twitter. The gap isn't coding ability anymore. It's information asymmetry. Teams with better intelligence infrastructure will outpace teams with better coding skills. That's the shift people are missing.

Hey, welcome to HN. I see that you have a few LLM-generated comments going here; please don't do that, as this is mostly a place for humans to interact. Thank you.

No, I'm pretty sure the models are still improving, or the harnesses are, and I don't think that distinction is all that important for users. Where were coding agents in 2025? In 2024? I'm pretty amazed by the improvements in the last few months.

I'm both amazed by the improvements, and also think they are fundamentally incremental at this point.

But I'm happy about this. I'm not that interested in or optimistic about AGI, but having increasingly great tools to do useful work with computers is incredible!

My only concern is that it won't be sustainable, and it's only as great as it is right now because the cost to end users is being heavily subsidized by investment.


>The customer will have little idea what is even possible and will ask for a product that doesn't solve their actual problem.

How do you know that? For tech products, most of the users are also technically literate and can easily use Claude Code or whatever tool we are using. They can easily tell CC specifically what they need. Unless you are making social media apps or banking apps, the customers are pretty tech savvy.


One example is programmers who code physics simulations that run on massive data. You need a decent amount of software engineering skill to maintain software like that, but the programmer maybe has a BS in Physics and doesn't really know the nuances of the actual algorithm being implemented.

With AI, you probably don't need 95% of the programmers who do that job anyway. Physicists, who know the algorithm much better, can use AI to implement the majority of the system, and maybe you have a software engineer orchestrate the program in the cloud or on a supercomputer, but probably not even that.

Okay, the idea I was trying to get across before I rambled was that many times the customer knows what they want very well, and much better than the software engineer.


Yes, I made the same point. Customers are not as dumb as our PMs and execs think they are. They know their needs better than we do, unless it's about social media and banking.

I agree. People forget that people know how to use computers and have a good intuition about what they are capable of. It's the programming task that many people can't do. It's unlocking users to solve their own problems again.

We talk about things like S-curves for AGI, and how it's slowing down.

But where is the S-curve for programmers?


Actual socialization is my bet.

> However, if we were going to see the exponential improvements towards AGI that AI boosters talk about, we would have already seen the start of it.

Maybe you already understood this, but many of the "AI boosters" you refer to genuinely believe we have "seen the start of it".

Or at least they appear to believe it.


> The customer might be able to do it themselves

Have you ever paid for software? I have, many times, for things I could build myself.

Building it yourself as a business means you need to staff people, taking them away from other work. You need to maintain it.

Run even conservative numbers and you'll see it's pretty damn expensive if humans need to be involved. It's not the norm for that to be good ROI.

No matter how good these tools get, they can't read your mind. It takes real work to get something production-ready and polished out of them.


You are missing the point. Who said anything about turning what they make into a "business"? Software you maintain merely for yourself has no such overhead.

There are also technical requirements, which, in practice, you will need to write for applications. Technical requirements can be written by people who can't program, but the work is very close to programming. You reach a level of specification where you're designing schemas, formatting specs, high-level algorithms, and APIs. Programmers can be, and are, good at this, and the people doing it who aren't programmers would make good programmers.

At my company, we call them technical business analysts. Their director was a developer for 10 years, and then skyrocketed through the ranks in that department.


I think it's super insane that people think anyone can just "code" an app with AI and have it replace actual paid or established open-source software, especially if they are not a programmer and don't know how to think like one. It might seem super obvious if you work in tech, but most people don't even know what an HTTP server is or what Python is, let alone understand best practices or any kind of high-level thinking about applications and code. And if you're willing to spend the time learning all that, you might as well learn programming too.

AI usage in coding will not stop, of course, but normal people vibe coding production-ready apps is a pipe dream with many issues independent of how good the AI/tools are.


I think this comment will not age well. I understand where you are coming from, but you are missing the idea that infrastructure will come along to support vibe coding. You are assuming vibe coding as it stands today will not improve. It will get to the point where the vibe coder needs to know less and less about the underlying construction of the software.

> Instead of writing lines of code, you are writing requirements.

https://www.commitstrip.com/en/2016/08/25/a-very-comprehensi...


The way I would approach writing specs and requirements as code is to write a set of unit tests against a set of abstract classes used as arguments to those unit tests. Then let someone else, maybe an AI, write the implementation as a set of concrete classes, and verify that those unit tests pass.

I'm not sure how well that would work in practice, nor why such an approach is not used more often than it is. But yes, the point is that some humans would still have to write such tests as code to pass to the AI to implement. So we would still need human coders to write those unit tests/specs. Only humans can tell AI what humans want it to do.
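To make that concrete, here is a minimal sketch in Python (all names are made up for illustration): the human-written spec is a unittest suite against an abstract class, and any concrete implementation, AI-generated or not, is accepted when the suite passes:

    from abc import ABC, abstractmethod
    import unittest

    class Stack(ABC):
        # The abstract interface the spec is written against.
        @abstractmethod
        def push(self, item): ...

        @abstractmethod
        def pop(self): ...

    class ListStack(Stack):
        # A candidate implementation (the part an AI would generate).
        def __init__(self):
            self._items = []

        def push(self, item):
            self._items.append(item)

        def pop(self):
            return self._items.pop()

    def make_stack() -> Stack:
        # Swap in whichever generated implementation is under test.
        return ListStack()

    class StackSpec(unittest.TestCase):
        # The human-written spec: behaviour only, no implementation details.
        def test_pop_returns_last_pushed_item(self):
            s = make_stack()
            s.push(1)
            s.push(2)
            self.assertEqual(s.pop(), 2)

        def test_pop_on_empty_stack_raises(self):
            s = make_stack()
            with self.assertRaises(IndexError):
                s.pop()

    if __name__ == "__main__":
        unittest.main()

The spec stays fixed across regenerations; only make_stack() changes.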


The problem is that a sufficient black-box description of a system is far more elaborate than the white-box description of the system, or even a rigorous description of all acceptable white boxes (a proof). Unit tests contain enough information to distinguish an almost-correct system from a more correct one, but far more information is needed to even arrive at the almost-correct system. And even knowing which traits likely separate an almost-correct system from the correct one requires a lot of white-box knowledge.

Unit tests are the correct tool for that last step, because going from an almost-correct system to a correct one is hard: it implies driving the failure rate to zero, and the lower you get the harder it is to reduce the failure rate any further. But when your constraint is not an infinitesimally small failure rate but reaching expressiveness fast, a naive implementation or a mathematical model is a much denser representation of the information, and thus easier to generate. In practical terms, it is much easier to encode the slightly incorrect preconception you have in your mind than to enumerate all the cases in which a statistically generated system might deviate from the preconception you already had in your head.


“write a set of unit tests against a set of abstract classes used as arguments to those unit tests.”

An exhaustive set of use cases to validate vibe-coded, AI-generated apps would be an app in itself. Experienced developers know which subsets of tests are critical, avoiding much of that work.


I agree (?) that AI vibe-coding can be a good way to produce a prototype for stakeholders, to see if the AI output is actually something they want.

The problem I see is how to evolve such a prototype toward more correct specs, or changed specs in the future, because AI output is non-deterministic -- and "vibes" are ambiguous.

Giving the AI more specs, or modified specs, means it will have to re-interpret them, and since its output is non-deterministic it can re-interpret vibey specs differently and thus diverge in a new direction.

Using unit tests as (at least part of) the spec would be a way to keep the specs stable and unambiguous. If the AI keeps re-interpreting vibey, ambiguous specs, then the specs are unstable, which means the final output has a hard time converging to a stable state.

I've asked this before, not knowing much about AI software development: is there an LLM that, given a set of unit tests, will generate an implementation that passes those unit tests? And is such a practice commonly used in the community, and if not, why not?
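For what it's worth, the common shape described for that is a feedback loop: generate, run the spec tests, feed failures back, repeat. A hedged sketch (generate_implementation is a hypothetical stand-in for whatever LLM call you use, and test_spec.py is an assumed name for the human-written test file):

    import subprocess

    def generate_implementation(spec_tests: str, feedback: str = "") -> str:
        # Hypothetical stand-in for an LLM call; a real system would
        # send the test file (plus any failure output) as the prompt.
        raise NotImplementedError

    def run_spec_tests() -> subprocess.CompletedProcess:
        # Run the human-written spec tests against the generated code.
        return subprocess.run(
            ["python", "-m", "unittest", "test_spec.py"],
            capture_output=True, text=True,
        )

    def generate_until_green(spec_tests: str, max_attempts: int = 5) -> str:
        feedback = ""
        for _ in range(max_attempts):
            code = generate_implementation(spec_tests, feedback)
            with open("implementation.py", "w") as f:
                f.write(code)
            result = run_spec_tests()
            if result.returncode == 0:
                return code  # the spec (i.e. the tests) is satisfied
            feedback = result.stderr  # feed failures back to the model
        raise RuntimeError("no implementation passed the spec tests")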


> Experienced developers know which subsets of tests are critical, avoiding much of that work.

And they do know this for programs written by other experienced developers, because they know where to expect "linearity" and where to expect steps in the output function. (Testing 0, 1, 127, 128, 255 is important; 89 and 90 likely are not, unless that's part of the domain knowledge.) This is not necessarily true for statistically derived algorithm descriptions.
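As a hedged illustration in Python (clamp_to_byte is a made-up function standing in for any code with steps at the 8-bit edges), this is what that boundary-focused test selection looks like:

    import unittest

    def clamp_to_byte(x: int) -> int:
        # Made-up example: the output has steps at the unsigned
        # 8-bit boundaries, so that's where the tests concentrate.
        return max(0, min(255, x))

    class BoundaryTests(unittest.TestCase):
        def test_boundary_values(self):
            # Probe where the output function can step; interior points
            # like 89 or 90 add little information for code like this.
            for x, expected in [(-1, 0), (0, 0), (1, 1),
                                (127, 127), (128, 128),
                                (255, 255), (256, 255)]:
                self.assertEqual(clamp_to_byte(x), expected)

    if __name__ == "__main__":
        unittest.main()

With a statistically generated implementation, the steps could be anywhere, which is exactly the point: the cheap heuristic for picking test points no longer applies.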


That depends a bit on whether you view and use unit tests for:

a) Testing that the spec is implemented correctly, OR

b) As the spec itself, or part of it.

I know people have different views on this, but if unit tests are not the spec, or part of it, then we must formalize the spec in some other way.

If the spec is not written in some formal way, then I don't think we can automatically verify whether the implementation implements the spec or not. (That's what the cartoon was about.)


> then we must formalize the spec in some other way.

For most projects, the spec is formalized in formal natural language (like any other spec in other professions), and that is mostly fine.

If you want your unit tests to be the spec, as I wrote in https://news.ycombinator.com/item?id=46667964, quite A LOT of them would be needed. I would rather learn to write proofs than try to exhaustively list all possible combinations of a (near) infinite number of input/output pairs. Unit tests are simply the wrong tool, because they amount to taking excerpts from the library of all possible books. I don't think that is what people mean by e.g. TDD.

What the cartoon is about is that any formal(-enough) way to describe program behaviour will just be yet another programming tool/language. If you have some novel way of specifying programs, someone will write a compiler for it, and then we might use it, but it will still be programming, and LLMs ain't that.


"Thinking clearly about complexity" is much more that writing requirements.

"yours is not to reason why, yours is just to do, or die"

( variation of .. "Ours is not to reason why, ours is but to do and die" )


The nuance here is that AI can't do what you think it can.

AI can code because the user of the AI can code.

Debbie from accounting doesn't have a clue what an int is.


I sure hope accounting knows what an integer is.

But not an int, int32, or int64

My experiments with AI-generated code show that you have to specify it like a programmer would, i.e. you have to be a programmer.
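To make the int vs. int32 distinction concrete, a quick back-of-the-envelope check in Python (the money-in-cents framing is just an assumed example):

    # A signed 32-bit int tops out at 2**31 - 1 = 2,147,483,647.
    # Stored as cents, that is only about $21.4 million before overflow:
    # exactly the kind of detail a spec has to pin down. Python's own
    # int is arbitrary-precision, so it never overflows like this.
    INT32_MAX = 2**31 - 1
    print(INT32_MAX)        # 2147483647
    print(INT32_MAX / 100)  # 21474836.47 (dollars, if stored as cents)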

Anecdote: I have decades of software experience, and am comfortable both writing code myself and using AI tools.

Just today, I needed a basic web application, the sort I can easily get off the shelf from several existing vendors.

I started down the path of building my own, because, well, that's just what I do, then after about 30 minutes decided to use an existing product.

I have a hunch that, even with AI making programming so much easier, there is still a market for buying pre-written solutions.

Further, I would speculate that this remains true of other areas of AI content generation. For example, even if it's trivially easy to have AI generate music to your specifications, it's even easier to just play something that someone else already made (be it human- or AI-generated).


I've heard that SaaS never really took off in China because the oversupply of STEM graduates suppressed developer salaries so much that companies just hire a team of devs to build everything they need in house. Why pay for a SaaS when devs are so cheap? These are just anecdotes; it's hard for me to figure out what's really going on in China.

What if AI brings the China situation to the entire world? Would the mentality shift? You seem to be basing it on the cost-benefit calculations of companies today. Yes, SaaS makes sense when you have developers (many of whom may be mediocre) who are so expensive that it makes more sense to pay a company that has already done the work of finding good developers and spent the capital to build a decent version of what you are looking for. Compare that to a scenario where the cost of a good developer has fallen dramatically, so you can now produce the same results with far less money: a cheap developer (good or mediocre, it doesn't matter) guiding an AI. That cheap developer does not even have to be in the US.


> I've heard that SaaS never really took off in China because the oversupply of STEM graduates suppressed developer salaries so much that companies just hire a team of devs to build everything they need in house. Why pay for a SaaS when devs are so cheap? These are just anecdotes; it's hard for me to figure out what's really going on in China.

At the high end, China pays SWEs better than South Korea, Japan, Taiwan, India, and much of Europe, so they attract developers from those places. At the low end, they have a ton of low- to mid-tier developers from third-tier (and below) institutions who can hack well enough. It is sort of like India: skilled people with the credentials to back it up can do well, but there are tons of lower-skilled people with some ability who are relatively cheap and useful.

China is going big into local LLMs. Not sure what that means long term, but Alibaba's Qwen is definitely competitive, and it's the main story these days if you want to run a coding model locally.


Thank you for the insight. Those countries you listed pay nowhere near US salaries. I wonder what the SaaS market is like in Europe? I hear it's utilized, but the problem is too much reliance on American companies.

I hear those other Asian countries are much like China in terms of adoption.

> China is going big into local LLMs. Not sure what that means long term, but Alibaba's Qwen is definitely competitive, and it's the main story these days if you want to run a coding model locally.

It seems like China's strategy of low-cost LLMs applied pragmatically to all layers of the country's "stack" is the better approach, at least right now. Here in the US they are spending every last penny trying to build some sort of Skynet god. If it fails, well, I guess the Chinese were right after all. If it succeeds, well, I don't know what will happen then.


When I worked for Microsoft China, I was making 60-70% of what I would have made back in the US doing the same job, but my living expenses largely made up for it. I learned that most of my non-Chinese Asian colleagues were in it for the money rather than the experience (this was basically my dream job; now I have to settle for working in the States for Google).

> It seems like China's strategy of low-cost LLMs applied pragmatically to all layers of the country's "stack" is the better approach, at least right now. Here in the US they are spending every last penny trying to build some sort of Skynet god. If it fails, well, I guess the Chinese were right after all. If it succeeds, well, I don't know what will happen then.

China lacks the big NVIDIA GPUs that were sanctioned and are now export-tariffed, so going with smaller models that could run on hardware they could access was the best move for them. This could either work out (local LLM computing is the future, and China is ahead of the game by circumstance) or not (big server-based LLMs are the future and China is behind the curve). I think the Chinese government would actually have preferred centralization, control, and censorship, but the current situation is that the Chinese models are the most uncensored you can get these days (with some fine-tuning, they are heavily used in the adult entertainment industry... haha, socialist values).

I wouldn't trust the Chinese government not to do Skynet if they get the chance, but Chinese entrepreneurs are good at getting things done while avoiding government interference. Basically, the world is just getting lucky by a bunch of circumstances at the moment.


Fair point! And I wasn't clear: my anecdote was about me, personally, needing an instance of some software. Rather than write it by hand, or even write it using AI, and then host it, I just found an off-the-shelf solution that worked well enough for me. One less thing I have to think about.

I would agree that if the scenario is a business deciding whether to buy an off-the-shelf software solution or pay a small team to develop it, and the off-the-shelf solution was priced high enough, then having it custom built with AI (maybe still with a tiny number of developers involved) could end up being the better choice. It really all depends on the details.


This take makes sense in the context of MLIR's creation, which introduced dialects: namespaces within the IR. Given that it was created by Chris Lattner, I would guess he saw these problems with LLVM as well.

What a random set of companies to choose. You'd probably need to think critically about each one of them when assessing the accuracy of your statements.

> What a random set of companies to choose.

All of the named companies have network effects, distribution, and trust.

Not quite easy to copy. Disposable LLM-generated code without users is cheap, which is the point of the article.


You are learning what it takes to keep a machine up and running. You still witness the breakage. You can still watch the fix. You can review what happened. What you are implying with your question is that, compared to doing things without AI, you are learning less (or perhaps you believe nothing at all). You definitely are learning less about mucking around in Linux. But if the alternative was never running a Linux machine at all, because you didn't want to deal with running one, you are learning infinitely more.

How can you review it if you don't know the subject in the first place?

You can watch your doctor, your plumber, or your car mechanic and still wouldn't know if they did something wrong if you don't know the subject as such.


You can learn a lot from watching your doctor, plumber or mechanic work, and you could learn even more if you could ask them questions for hours without making them mad.

You learn less from watching a faux-doctor, faux-plumber, or faux-mechanic, and even less by engaging with their hallucinations without a level horizon for reference.

Bob the Builder doesn't convey much about drainage needs for foundations, and few children think to ask. Who knows how AI-Bob might respond.


> You can learn a lot from watching your doctor [...] work

Very true, but I'll still opt for that general anesthesia...


The primary way humans learn anything at all is by watching and mimicking. Sure, there will be mistakes, but that doesn't preclude learning.

Your hypothetical situation would cause all progress to halt. Nobody would be able to fix genuine problems.

The puzzling thing to me is that Tim Cook was in the board meetings. Apple and Nike play similar games to stay ahead and keep margins high. I am sure he is on the board to glean insights from the older brother, Nike. And yet…


Doubt it. Apple understands how important retail presence is - their stores generate more revenue per square foot than any others, including Tiffany’s.


Well, that's my point. It makes me wonder how much influence Cook has on the Nike board to teach them that, so they could avoid the mistakes they made. Cook had a front-row seat to the decline of Nike.


Echoing other comments here: as an ex games cracker for some pretty large-scale piracy orgs in the 90s, I lol'ed at this.


I think the idea that nobody would talk to strangers online is a bit too general. We are all mostly doing it here. I do it on Reddit all the time, in the same recurring subreddits that I've grown to trust. IRC was also pretty hostile back in the 90s, but again, it depended on the community. I just don't think you can generalize the internet this way.


True. I would also add that this is an exception among most social media platforms. I feel as if there is a roundtable every time someone posts something. I'm somehow invited and listening, and whether I comment or say something is entirely up to what I have to share. Argument or debate isn't so aggressive, as it's fact-based for the most part.


Sounds like you got offended by a robot.


I believe the correct term that I've seen elsewhere is "clanker".


Apple's analytics probably support this, which is exactly why Siri still sucks. But yeah, everyone will continue to think they somehow know better and that Apple is wrong and executing poorly.


> which is exactly why Siri still sucks

It probably does what most people need it to do, so it sucks?

That's some interesting logic to say the least.

