
How is this fundamentally any different from the Erlang/Elixir concept of supervisors controlling their child processes? It seems like the AI industry keeps re-discovering basic techniques that have been around since the 80s.

I'm not surprised—most AI "engineers" are not really good software engineers; they're often "vibe engineers" who don't read academic papers on the subject and keep re-inventing the wheel.

If someone asked me why I think there's an AI bubble, I'd point exactly to this situation.



Software engineering in general is pretty famous for unironically being disdainful of anything old while simultaneously reinventing the past. This new wave is nothing new in that regard.

I'm not sure that means the people who do this aren't good engineers, though. If someone rediscovers something in practice rather than through learning theory, does that make them bad at something, or simply inexperienced? I think it's one of the strengths of the profession that there isn't a singular path to reach the height of the field.


I've done a lot of Erlang and I don't see the relation? Supervisors are an error isolation tool: they don't perform the work, break it down, combine results, or act as a communication channel. It's kind of the point that supervisors don't do much, so they can be trusted to be reliable.
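For what it's worth, the entire job of a supervisor fits in a few lines of boilerplate. Here's a minimal Elixir sketch (MyApp.Supervisor and MyApp.Worker are made-up names; the worker would be the thing that does the actual work):

    defmodule MyApp.Supervisor do
      use Supervisor

      def start_link(opts) do
        Supervisor.start_link(__MODULE__, :ok, opts)
      end

      @impl true
      def init(:ok) do
        # MyApp.Worker is a hypothetical GenServer that does the real work;
        # the supervisor only restarts it according to the strategy if it crashes.
        children = [
          {MyApp.Worker, []}
        ]

        Supervisor.init(children, strategy: :one_for_one)
      end
    end

There's no logic in there for breaking down tasks, routing messages, or combining results, which is exactly the point.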


Yes, people re-discover stuff, mostly because no one reads older papers. I also thought of Erlang and OAA.

In the early 2000s, we used Open Agent Architecture (OAA) [1], which had a beautiful (declarative) Prolog-like notation for writing goals, and the framework would pick & combine the right agents (all written in different languages, but implementing the OAA interface through proxy libraries) to achieve the specified goals.

This was all on boxes within the same LAN, but conceptually, this could have been generalized.

[1] https://medium.com/dish/75-years-of-innovation-open-agent-ar...


I blame managers who get all giddy about reducing head count. Sure, this year you get a 1% reduction in developer time (seniors believe they get a 20% increase when it's really a decrease for using AI).

But then next year and the year after, the technical debt will have piled up to the point where they just need to throw out the code and start fresh.

Then the head count must go up. Typical short-term gains for long-term losses/bankruptcy.


> seniors believe they get a 20% increase when it's really a decrease for using AI

There’s no good evidence to support that claim, just one study, which looked at people with minimal AI experience. Essentially, the study found that effective use of AI has a learning curve.


Apart from requiring entirely the opposite solution?

With respect, if there's an AI bubble, I can't see it for all the sour grapes every time it's brought up, anywhere.


A lot of it seems to be resistance to change. People are afraid their skillset may be losing relevance, and instead of making any effort to adapt, they try to resist the change.


I suspect there's more to it than that. Some people are sprinting with this stuff, but it seems that many more are bouncing off it, bruised. It's a tell. Something is different.

It's an entirely new way of thinking, and nobody is telling you the rules of the game. Everything that didn't work last month works this month, and everything you learned two months ago you need to throw away. Coding assistants are inscrutable, overwhelming, and bristling with sharp edges. It's easier than ever to paint yourself into a corner.

Back when it took weeks to put out a feature, you were insulated from the consequences of bad architecture, coding, and communication skills: by the time things got bad enough to be noticed, the work had been done months ago and everyone on the team had touched the code. Now you can see the consequences of poor planning, poor skills, and poor articulation run to their logical conclusion in an afternoon.

I'm sure there are more reasons.



