georgemcbay's comments

> The author's catalog of harms is real. But it's worth noting that nearly identical catalogs were compiled for every major technological shift in modern history.

I think both the scale (how many industries will be impacted effectively simultaneously) and the speed of disruption that could be caused by AI make it very different from anything we have seen before.


I think it will be big, but I don't think it's bigger than the automation of manufacturing that began during the Industrial Revolution.

Think about the physical objects in the room you're in right now. How many of them were made from start to finish by human hands? Maybe your grandmother knitted the woollen jersey you're wearing -- made from wool shorn using electric shears. Maybe there's a clay bowl on the mantelpiece that your kid made in a pottery class. Anything else?


I would say it is privilege (now) combined with denialism (for the future).

Not only do you have to believe that you're in the group that benefits, but you also have to believe that "AI" improvement from here forward will stall out before it goes from assisting your job to replacing it wholesale. I suspect there are far fewer people to whom that actually applies than there are people who believe it applies to them.

It is very easy for us to exist in that denialism bubble until we see the machine nipping at our heels.

And that is not even getting into second order effects, like even if you do provide AI-proof value, what happens when some significant percentage of everyone else (your potential customers) loses their income and society starts to crumble?


Most of the logic of this post will be incoherent in a world where AI has replaced software jobs wholesale. You have to pick a lane. Is it so effective that it (and the labor market more broadly) needs to be aggressively regulated, or is it not very useful for anything but trolling? It can't be both.

This assumes every decision-maker is a rational actor. Just today an executive was rambling about "quantum-empowered AI". These are the people who make decisions about firing workers. It is entirely possible that AI will replace many jobs while being useless at achieving what those workers actually do, at least in the short to medium term.

We would live in a post-scarcity utopia if big economic decisions were taken based on long-term optimal effects.


I'm interested in how you can tell an industry-wide job displacement story about AI, where AI isn't actually doing the job, that isn't a just-so story.

If you wanted to tell such a story, you'd have to find examples of companies spending bazillions on new AI tooling but failing to hit their top-level OKRs. I suspect there will be at least a few of these by the end of 2026 -- even a great technology can seem like an abacus in the hands of a disorganized and slow-moving org.

The story only matters if it produces an industry-wide displacement in jobs. Failed billion-dollar IT projects are not a new thing, and don't disrupt the entire labor market.

To be clear: I'm not claiming that AI rollouts won't be billion-dollar failed IT projects! They very well could be. But if that's the case, they aren't going to disrupt the labor market.

Again: you have to pick a lane with the pessimism. Both lanes are valid. I buy neither of them. But I recognize a coherent argument when I see one. This, however, isn't one.


There's a coherent story that straddles both lanes, by assuming that the human economy is in some weird place where the vast majority of humans don't create real economic value and mostly get employment through inertia and custom, and that AI, despite being worthless, provides an excuse for employers to break through taboos and traditions and eliminate all those jobs. Quite a stretch, but it's coherent at least.

Seems to be what is happening in a lot of the places it's encroaching on.

AI journalism is strictly worse than having a human research and write the text, but it's also orders of magnitude cheaper. You see prompt fragments and other blatant AI artifacts in news articles almost every day. So we get newspapers that have the same shape as they used to, but that don't fulfill their purpose. That's a development that was already underway before AI, but now it's even worse.

Walked past a billboard the other day with an advertisement that was blatantly AI-generated. Had a logo with visible JPEG artifacts plastered on top of it. Real amateur hour stuff. It probably was as cheap as it looked.

You see the trend in software too. Microsoft's recent track record is a good example of this. They can barely ship a working notepad.exe anymore.

Supposedly some birds will eat cigarette butts thinking they're bugs, and then starve to death with a belly full of indigestible cigarette filters. Feels a lot like what is happening to a lot of industries lately.


In retrospect, it was crazy hearing stories about how SF UX designers would be paid $250 to essentially do what Figma does now.

Sometimes it is effective, but it's very unreliable.

In the end there will be the owners of the farmland, and whoever/whatever they employ.

>what happens when some significant percentage of everyone else (your potential customers) loses their income and society starts to crumble?

They will start to burn down data centers.


If you believe ICE's purpose is to enforce immigration law… yeah. Quite possible.

There's a reason the Musks and Thiels of the world invested in luxury doomsday bunkers: it won't just be property that people want to burn.

The Soviets used the "iron broom" (i.e. murder) on the wealthy people.

It didn't make anyone better off.


Soviet history is not so simple. ;)

The Soviet Union isn't the only country that tried that. It's never worked out anywhere it was tried.

It seems to have worked out alright for the French.

It worked out very badly for them. See "Reign of Terror". The Revolution ended when Napoleon declared himself a hereditary monarch. Things went full circle.

Consider also the American Revolution. Nobody went on a rampage to kill the wealthy. Things went very well for America.

I’m not arguing in favor of it, I’m just aware of how the world works.

You also seem to have forgotten France, among other places where the history wasn't as grim as Russia's. Frankly, nothing Russia has ever done seems to have made their lot better.


You're overlooking the Reign of Terror.

Consider Pol Pot (Cambodia). How'd that work out? Cuba? How'd that go? Venezuela, anyone?

For a counter-example, consider the US: the greatest rise in the standard of living in history, with free markets where paupers could get rich.


> I doubt there are a lot of people you'd immediately recognize by their voice.

There is a lot of variability in this from person to person.

A lot of people are terrible at recognizing voices out of context. I have always been able to recognize people's voices just about as easily as their faces.

(Unfortunately, while this is a neat parlor trick, I haven't found it to be a particularly valuable skill).


> What happens when there’s software you think should exist, and you no longer need to hire a bunch of people at $150k-$250k per year to build it?

What happens when 200 out-of-work former software engineers take a look at your software and use LLMs to quickly build their own versions, each undercutting everyone else's prices in a race to the bottom?


I think what I’m saying is that there’s a lot of software that doesn’t get built at all because the cost of serving a particular niche market is still too high, and that AI may put some of those markets within reach.

So, those software engineers may be able to move sideways instead of competing to build the same software.


> Basically Yudkowsky invented AI doom and everyone learned it from him. He wrote an entire book on this topic called If Anyone Builds It, Everyone Dies. (You could argue Vinge invented it but I don't know if he intended it seriously.)

Nick Bostrom (who wrote the paper this thread is about) published "Superintelligence: Paths, Dangers, Strategies" back in 2014, over 10 years before "If Anyone Builds It, Everyone Dies" was released, and the possibility of AI doom was a major factor in that book.

I'm sure people talked about "AI doom" even before then, but a lot of the concerns people have about AI alignment (and the reasons why AI might kill us all -- not because it's evil, but because not killing us is a lower priority than other tasks it may want to accomplish) come from "Superintelligence". Google for "The Paperclip Maximizer" to get the gist of his scenario.

"Superintelligence" just flew a bit more under the public zeigeist radar than "If Anyone Builds It, Everyone Dies" did because back when it was published the idea that we would see anything remotely like AGI in our lifetimes seemed very remote, whereas now it is a bit less so.


Yudkowsky invented AI doom around 2004. AFAIK that inspired Bostrom's work.

> Is anyone else worried at how easily Anthropic/Google/OpenAI can basically cut you off if you do something they don't like?

Yeah, had that thought here a few weeks ago on HN after reading about someone getting cut off from Claude:

https://news.ycombinator.com/item?id=46723384#46728649

Though tbh I'm far more worried about the societal impacts of large scale job displacement across so many professional industries at the same time.

I think it is likely to be very, very ugly for society in the near term. Not because the problems are unsolvable, but because everyone is choosing to ignore the threat of them.

And I realize a lot of people will handwave my concerns away with stories of Luddites and Jevons paradox, but we've never had a tidal wave this big hit all at once and I think the scale (combined with speed of change) fundamentally changes things this time.


I stopped worrying. Western societies have about 30 to 40% of the people doing knowledge work, which contributes to the economy that employs the other 60%.

If that 40% is automated away in one go, there's no economy as we know it anymore. Either it acts as a negative void coefficient and moderates it into something sustainable, or it blows up.


Haven't read that book, but agree that if anyone thinks the workers are likely to capture the value of this productivity shift, they haven't been paying attention to reality.

Though at the same time I also think a lot of the CEO-types (at least in the pure software world) who believe they are going to capture the value of this productivity shift are also in for a rude awakening, because if AI doesn't stall out, it's only a matter of time from when their engineers are replaceable to when their company doesn't need to exist at all anymore.


> A single king-size bed can fit 4-5 employees.

... at a time.

So if you also run three shifts that's 12-15 employees per bed!


Exactly right.

On top of this, they're going to have mandatory bed position assignments. Just like you currently can't choose which desk you're going to sit at, and have to put up with the most annoying person on the team as your deskmate, in the near future you're going to have to cuddle with him/her at night too, whether you like it or not, and regardless of his/her bad hygiene, just because your manager decided to stick you two together.


> in the near future you're going to have to cuddle with him/her at night too, whether you like it or not

A solid solution to reduce heating costs. Maybe one can go a step further and remove the bed, though; a large mattress (or, let's say, a rubber mat) should be enough.


It's most efficient if you can figure out how to get them to sleep standing up.

Stand on Zanzibar

This has already happened. During the industrial revolution, sharing bed shifts was common. You just rediscovered the reason why worker protection laws set maximum working times and forbid employers from demanding work outside of working hours.

Don't worry, the employers will make sure these worker protection laws are all rescinded, so we can go back to workers having to share beds. The workers are happily voting for this, because they believe regulations are bad.

I assume the Rob Pike mention was in reference to this:

https://skyview.social/?url=https%3A%2F%2Fbsky.app%2Fprofile...

He apparently isn't the biggest fan of AI.


> And we do not need gigawatts and gigawatts for this use case anyway. A small local model or batched inference of a small model should do just fine.

I guess I'm a dinosaur but I think emailing the friend to ask what they are actually up to would be even better than involving an LLM to imagine it.

Asynchronous human to human communication is a pretty solved problem.


A commonly cited use case for LLMs is scheduling travel, so being able to pretend it's somebody somewhere else is for sure important to incentivize going somewhere!

