> What exactly is being de-valuated for a profession
You're probably fine as a more senior dev...for now.
But if I were a junior I'd be very worried about the longevity I can expect as a dev. It's already easier in many/most cases to assign work to an LLM than to handhold a human through it.
Plus as an industry we've been exploiting our employer's lack of information to extract large salaries to produce largely poor quality outputs imo. And as that ignorance moat gets smaller, this becomes harder to pull off.
This is just not happening anywhere around me. I don't know why it keeps getting repeated in every one of these discussions.
Every software engineer I know is using LLM tools, but every team around me is still hiring new developers. Zero firing is happening in any circle near me due to LLMs.
LLMs cannot do unsupervised work, period. They do not replace developers. They replace Stack Overflow and Google.
I can tell you where I'm seeing it change things for sure: at the early stages. If you wanted to work at a startup I advise or invest in, based on what I'm seeing, it might be more difficult than it was 5 years ago, because the calculus at the early stage is slightly different. At seed/pre-seed, your go-to-market and discovery processes are often either not working well yet, nonexistent, or decoupled from product and engineering; the goal, obviously, is to bring it all together over time into a complete system (a business). As long as I've been around early-stage startups there has always been a tension between engineering and growth over how the budget is divided, and the dance of placing resources across them so that they come together well is quite difficult.

Now what I'm seeing is this: engineering could do with being a bit faster, but too much faster and they're going to be sitting around waiting for the business teams to get their shit together. Where before they would look at hiring a junior, now they'll just buy some AI tools, or invest more time in AI scaffolding, etc., allowing them to go a little bit faster, but it's understood: not as fast as hiring a jr engineer. I noticed this trend starting in the spring this year, and I've been watching to see if the teams who did this then "graduate" out of it to hiring a jr. So far only one team has hired, and it seems they skipped the jr and went straight to a more sr dev.
Around 80% of my work is easy while the remaining 20% is very hard. At this stage the hard stuff is far outside the capability of LLMs, but the easy stuff is very much within their capabilities. I used to hire contractors to help with that 80% of the work but now I use LLMs instead. It's far cheaper, better quality, and zero hassle. That's 3 junior/mid-level jobs that are gone now. Since the hard stuff involves combinatorial complexity, I think by the time an LLM is good enough to do that, it's probably good enough to do just about everything and we'll be living in an entirely different world.
Exactly this. I lead cloud consulting + app dev projects. Before, I would have staffed my projects with at least me leading it, doing the project management and stakeholder meetings and some of the work, and brought a couple of others in to do some of the grunt work. Now, with Gen AI, even just using ChatGPT and feeding it a lot of context (diagrams I put together, statements of work, etc.), I can do it all myself without having to go through the coordination effort of working with two other people.
On the other hand, when I was staffed to lead a project that did have another senior developer, one level below me, I tried to split up the actual work, but it became such a coordination nightmare once we started refining the project, because he could just use Claude Code and it would make all of the modifications needed for a feature, from the front-end work to the backend APIs to the Terraform and deployment scripts.
Today's high-end LLMs can do a lot of unsupervised work. Debug iterations are at least junior level. Audio and visual output verification is still very weak (e.g. verifying web page layout and component reactivity). Once the visual model is good enough to look at the screen pixels and understand, it will instantly replace junior devs. Currently, if you have only text output, all the new LLMs can iterate on it flawlessly and solve problems. A new backend from scratch is completely doable with vibe coding now, with some exceptions around race conditions and legacy-code comprehension.
> Once the visual model is good enough to look at the screen pixels and understand, it will instantly replace junior devs
Curious if you gave Antigravity a try yet? It auto-launches a browser and you can watch it move the mouse and click around. It's able to review what it sees and iterate or report success according to your specs. It takes screen recordings and saves them as an artifact for you to verify.
I only tried some simple things with it so far but it worked well.
Right, and as a hiring manager, I'm more inclined to hire junior devs since they eventually learn the intricacies of the business, whereas LLMs are limited in that capacity.
I'd rather babysit a junior dev and give them some work to do until they can stand on their own than babysit an LLM indefinitely. That just sounds like more work for me.
You're mostly right but very few teams are hiring in the grand scheme of things. The job market is not friendly for devs right now (not saying that's related to AI, just a bad market right now)
Completely agree. I use LLMs like I use Stack Overflow, except this time I get straight to the answer and no one closes my question and marks it as a duplicate, or stupid.
I don't want it integrated into my IDE; I'd rather just give it the information it needs to get me my result. But yeah, just another Google or Stack Overflow.
It's me. I'm the LLM having work assigned to me that the junior dev used to get. I'm actually just a highly proficient BA who has almost always been able to read code, followed and understood news about software development here and on /. before, but generally avoided writing code out of sheer laziness. It's always been more convenient to find something easier and more lucrative in those moments of decision where I actually considered shifting to coding as my profession.
But here I am now. After filling in for lazy architects above me for 20 years, while guiding developers to follow standards and build good habits and learning important lessons from talking to senior devs along the way, guess what, I can magically do it myself now. The LLM is the junior developer that I used to painstakingly explain the design to, and it screws it up half as much as the braindead and uncaring junior dev used to. Maybe I'm not a typical case, but it shows a hint of where things might be going. This will only get easier as the tools become more capable and mature into something more reliable.
Don't worry about where AI is today, worry about where it will be in 5-10 years. AI is brand-new, bleeding-edge technology right now, and adoption always takes time, especially when the integration with IDEs and such is even more bleeding edge than the underlying AI systems themselves.
And speaking about the future, I wouldn't just worry about it replacing the programmer, I'd worry about it replacing the program. The future we are heading into might be one where the AI is your OS. If you need an app to do something, you can just make it up on the spot, a lot of classic programs will no longer need to exist.
> Don't worry about where AI is today, worry about where it will be in 5-10 years.
And where will it be in 5-10 years?
Because right now, the trajectory looks like "right about where it is today, with maybe some better integrations".
Yes, LLMs experienced a period of explosive growth over the past 5-8 years or so. But then they hit diminishing returns, and they hit them hard. Right now, it looks like a veritable plateau.
If we want the difference between now and 5-10 years from now and the difference between now and 5-10 years ago to look similar, we're going to need a new breakthrough. And those don't come on command.
Right about where it is today with better integrations?
One year is the difference between Sonnet 3.5 and Opus 4.5. We're not hitting diminishing returns yet (mostly because of exponential capex scaling, but still). We're already committed to ~3 years of the current trajectory, which means we can expect similar performance boosts year over year.
The key to keep in mind is that LLMs are a giant bag of capabilities, and just because we hit diminishing returns on one capability, that doesn't say much if anything about your ability to scale other capabilities.
The depreciation schedule is debatable (and that's currently a big issue!). We've been depreciating based on availability of next generation chips rather than useful life, but I've seen 8 year old research clusters with low replacement rates. If we stop spending on infra now, that would still give us an engine well into the next decade.
But humans have vastly lower error rates than LLMs, and in a multi-step process those error rates compound. When that happens, you end up with coin-flip odds of success, or worse.
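To make that compounding concrete, here is a rough back-of-the-envelope sketch. The per-step success rates and step counts are purely illustrative assumptions, and it treats steps as independent, which real tasks aren't:

```python
# Rough sketch: end-to-end success of a multi-step task when each step
# succeeds independently with probability p. Numbers are illustrative only.

def chain_success(p: float, n: int) -> float:
    """Probability that all n independent steps succeed."""
    return p ** n

for p in (0.99, 0.95, 0.90):
    for n in (10, 30, 70):
        print(f"per-step {p:.0%}, {n:2d} steps -> {chain_success(p, n):5.1%} end-to-end")

# e.g. 99% per step over 70 steps is ~50% end-to-end, 95% over 30 steps is ~21%,
# and 90% over 30 steps is ~4%. Small per-step error rates compound quickly.
```

Under those assumptions, the gap between a 1% and a 10% per-step error rate is the difference between a coin flip and near-certain failure on longer tasks.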
And, more importantly, a given human can, and usually will, learn from their mistakes and do better in a reasonably consistent pattern.
And when humans do make mistakes, they're also in patterns that are fairly predictable and easy for other humans to understand, because we make mistakes due to a few different well-known categories of errors of thought and behavior.
LLMs, meanwhile, make mistakes simply because they happen to have randomly generated incorrect text that time. Or, to look at it another way, they get things right simply because they happen to have randomly generated correct text that time.
Individual humans can be highly reliable. Humans can consciously make tradeoffs between speed and reliability. Individual unreliable humans can become more reliable through time and effort.
It's a trope that people say this and then someone points out that, while the comment was being drafted, another model or product was released that took a substantial step up in problem-solving power.
I use LLMs all day every day. There is no plateau. Every generation of models has resulted in substantial gains in capability. The types of tasks (both in complexity and scope) that I can assign to an LLM with high confidence is frankly absurd, and I could not even dream of it eight months ago.
> But if I were a junior I'd be very worried about the longevity I can expect as a dev. It's already easier in many/most cases to assign work to an LLM than to handhold a human through it.
This sounds kind of logical, but really isn't.
In reality you can ASSIGN a task to a junior dev and expect them to eventually complete it, and learn from the experience as well. Sure, there'll likely be some interaction between the junior dev and a mentor, and this is part of the learning process - something DESIRABLE since it leads to the developer getting better.
In contrast, you really can't "assign" something to an LLM. You can of course try to, and give it some "vibe coding" assignment like "build me a backend component to read the data from the database", but the LLM/agent isn't an autonomous entity that can take ownership of the assignment and be expected to do whatever it takes (e.g. coming back to you and asking for help) to get it done. With today's "AI" technology it's the AI that needs all the handholding, and the person using the AI is the one who has effectively taken the assignment, not the LLM.
Also, given the inability of LLMs to learn on the job, using an LLM as a tool to help get things done is going to be a groundhog day experience of having to micro-manage the process in the same way over and over again each time you use it... time that would have been better invested in helping a junior dev get up to speed and in the future be an independent developer that tasks can indeed be assigned to.
Doesn't matter. First, yes, a modern AI will come back and ask questions. Second, the AI is so much faster at interactions than a human that you can use the saved time to glance at its work and redirect it. The AI will come back with 10 prototype attempts in an hour, while a human will take a week for each, with more interrupting questions for you about easy things.
Sure, LLMs are a useful tool, and fast, but the point is they don't have human level intelligence, can't learn, and are not autonomous outside of an agent that will attempt to complete a narrow task (but with no ownership or guarantee of eventual success).
We'll presumably get there eventually and build "artificial humans", but for now what we've got is LLMs - tools for language task automation.
If you want to ASSIGN a task to something/someone then you need a human or artificial human. For now that means assigning the task to a human, who will in turn use the LLM as a tool. Sure there may be some productivity increase (although some studies have indicated the exact opposite), but ultimately if you want to be able to get more work done in parallel then you need more entities that you can assign tasks to, and for the time being that means humans.
> the point is they don't have human level intelligence
> If you want to ASSIGN a task to something/someone then you need a human or artificial human
Maybe you haven't experienced it but a lot of junior devs don't really display that much intelligence. Their operating input is a clean task list, and they take it and convert it into code. It's more like "code entry" ("data entry", but with code).
The person assigning tasks to them is doing the thinking. And they are still responsible for the final output, so if they find a computer better and cheaper at "code entry" than a human, well, then that's who they'll assign it to. As you can see in this thread, many are already doing this.
Funny you mention this because Opus 4.5 did this just yesterday. I accidentally gave it a task with conflicting goals, and after working through it for a few minutes it realized what was going on, summarized the conflict and asked me which goal should be prioritized, along with detailed pros and cons of each approach. It’s exactly how I would expect a mid level developer to operate, except much faster and more thorough.
Yes, they continue to get better, but they are not at human level yet (and junior devs are humans too), and I doubt that the next-level "AGI" that people like Demis Hassabis project is still 10 years away will be human level either.
No doubt it depends on the company, but I'd say that in many places only 10% of what a developer does is coding, and the percentage is less and less the more senior you become and have other responsibilities.
In many companies, product development is very cyclic - new products and enhancement/modernization cycles come and result in months, maybe years, of intense development (architecture, design, maybe prototyping before coding) but then there may be many months of "downtime" before the next major development cycle, where coding gives way to ongoing support, tuning, bug triaging, etc.
Maybe in some large companies there are highly compartmentalized roles like business analyst, systems architect, developers, perhaps "coders" as a separate or junior category (I have never worked anyplace where "coders" was a thing, although some people seem to insist that it still is). My experience, over a lifetime of software development, mostly at smaller companies, is that "software developers" are expected to do all of the above as well as production support, documentation, mentoring, new technology evaluation, etc, etc. The boss wants to give you a high-level assignment, and get back a product. At one company I worked at, it was literally "build us an EKG machine at this price point", which might be a bit of an extreme example.
The other thing a human software developer does during any "downtime" is self-initiated projects such as creating tools and infrastructure, automation, self-learning, refactoring, etc, and in my experience these self-initiated efforts can be just as, if not more, important to the overall productivity, and output quality, of the team as that of the product development cycles.
LLMs' primary use is coding, although they can also be of use for conversational brainstorming about design, tooling, etc. If you are "vibe coding" (just asking the LLM to do it and crossing your fingers), then the LLM is also, in effect, doing the architecture, design, tool selection, etc.
Notwithstanding agents, which the AI companies proudly state may run for hours before messing up, LLMs are not autonomous entities that can replace developers, be handed high-level assignments, and be expected to do what it takes (incl. communication with all stakeholders, etc.) to get the job done. They will not run in the background "taking care of the business" when you are not prompting them.
The difference is the LLM is predictable and repeatable. Whereas a junior dev could go AWOL, leave unexpectedly for a new job, or be generally difficult to work with, LLMs fit my schedule, show constant progress and are generally less anxiety inducing than pouring hundreds of thousands into a worker who may not pan out. This sentiment may be showing my lack of expertise in team building but at worst shows that LLMs represent a legitimate alternative to building a large team to achieve a vision.
What are you talking about? You seem to live in a parallel universe.
Every single time I or one of my colleagues tried this, the task failed tremendously hard.
> “…exploiting our employer's lack of information…”
I agree in the sense that those of us who work in for-profit businesses have benefited from employers' willingness to spend on dev budgets (salaries included) without having to spend their own _time_ becoming increasingly involved in the work. As "AI" develops it will blur the boundaries of roles and reshape how capital can be invested to deliver results and have impact. And if the power dynamics shift (i.e. out of the class of educated programmers to, I dunno, philosophy majors) then you're in trouble.
I had hired 3 junior/mid-level devs and paid them to do nothing but study to improve their skills; it was my investment in their future, as I had a big project on the horizon that I needed help with. After 6 months I let them go, the improvement was far too slow. Books that should have taken a week to get through were taking 6 weeks. Since then, LLMs have completely surpassed them. I think it's reasonable to think that some day, maybe soon, LLMs will surpass me. Like everyone else, I have to do the best I can while I can.
But this is an issue with the workers you're hiring. I've worked with senior engineers who:

a) did nothing (as in: really did not write anything within the sprint, nor do any other work)

b) worked on things they wanted to work on

c) did ONLY the things they were assigned in the sprint (if there were 10 tickets in the sprint and they were assigned 1 of them, they would finish that ticket and not pick up anything else, staying quiet)

d) worked only on tickets with requirements explicitly stated step by step ("open file a, change line 89 to be `checkBar` instead of `checkFoo`..."). Writing this out took longer than making the changes myself would have; I was literally writing in the Jira ticket what I wanted the engineer to code, and otherwise they would come back with "not enough spec, can't proceed".

All of these cases: senior people!
Sure - LLMs will do what they're told (for a certain value of "do" and "what they're told").
Sure, there is a wide spectrum of skills. Having worked in FANG and top-tier research, I have a pretty good idea of the capability at the top of the spectrum, and I know I wasn't hiring at that level. I was paying 2x the local market rate (non-US) and pulling from the functional programming talent pool. These were not the top 1%, but I think they were easily top 10% and probably in the top 5%.
I use LLMs to build isolated components, and I do the work needed to specialize them for my tasks and integrate them together. The LLMs take fewer instructions to do this and handle ambiguity far better. Additionally, because of the immediate feedback loop on the specs, I can start with a minimally defined spec and interactively refine as needed. It takes me far less work to write specs for LLMs than it does for other devs.
You're (unwittingly?) making an argument for using an LLM: you know what you're going to get. It does not take six months to evaluate one; six minutes suffice.
The argument I'm trying to make is that hiring a real person or using LLMs each has upsides and downsides. People have their own agendas, can leave, can affect your business in many ways unrelated to code, etc., but they can also learn, be creative, and address problems that you've not even surfaced. An LLM will not do that and will not be capable of it.
With LLMs you know what you're going to get, to a certain degree. Will it not listen to you? No. Will it not follow your instructions? Maybe. Will it produce unmaintainable garbage? Most certainly. Does that matter for non-devs? Sometimes.
> After 6 months I let them go, the improvement was far too slow.
The bit that's missing from this story is the "why" the improvement was far too slow. Was it primarily a failure to hire the right people, a failure to teach them what you wanted them to learn, a failure to understand what you needed to teach, or a deliberate lie on the part of all three of them to steal from you?
And even if their progress had been faster, now they're capable developers who can command higher compensation that, statistically, your company won't give them, and they're going to jump ship anyway.
One didn't even wait; they immediately tried to sub-contract the work out to a third party and transition from being a consultant to being a consultancy company. I had to be clear that they were hired as a named person and that I very much do care about who does the work. While not FANG comp, it was ~2x the market rate, so statistically I think they'd have a hard time matching that somewhere else. I think in part because I was offering these rates they got rather excited about the perceived opportunity in being a consultancy company, i.e. the appetite grows with the eating. I'm not sure if it's something that could be solved with more money; I guess in theory with FANG money, but it's not like those companies are without their dysfunctions. With LLMs I can solve the same problem with far less money.
Claude gets better as Claude's managers explain concepts to it. It doesn't learn the way a human does. AI is not human. The benefit is that when Claude learns something, it doesn't need to run a MOOC to teach the same things to millions of individuals. Every copy of Claude instantly knows.
Maybe see it less as a junior and a replacement for humans, and more as a tool for you! A tool that lets you do the stuff you used to delegate/dump on a junior yourself.
Actually it does, if you put those concepts in documentation in your repository…
Those concepts will be in your repository long after that junior dev leaves because your company refused to pay him at market rates as he improved, so he had to jump ship to make more money. "Salary compression" is real and often out of your manager's control.
You need to hit that thumbs-down with the explanation so the model is trained with the penalty applied. Otherwise your explanations are not in the training corpus.