Perhaps https://unanimous.ai/ (where Rosenberg is CEO) has funding problems if such articles are required.
Blaming the user for not understanding the magnificent technology is the latest fad. The cranky Google AI also accuses you of being "frustrated" and "anxious" if you do not like its output.
I do find it interesting that the article lead goes with "computer scientist Louis Rosenberg" while if you click into the author profile it says:
> Dr. Louis B. Rosenberg is a computer scientist and current CEO of Unanimous AI, a California company focused on amplifying human intelligence using AI algorithms modeled on biological swarms.
> He argues that “AI denialism” is rising because society is “collectively entering the first stage of grief.”
Yeah, everyone else is a ghost driver.
I do wonder, what is more plausible:
Society collectively being in denial about X, or a bunch of investors and their hangers-on being in denial about X, where X is their investment?
I feel like none of these discussions can ever go anywhere if they don't start from a place of recognizing that "AI is a massive bubble" and "AI is a very interesting and useful technology that will continue to increase its impact" are not mutually exclusive statements.
I personally am very sympathetic to "AI is a very interesting and useful technology that will continue to increase its impact."
However, it's a bit of a non-statement. Isn't it true of all technology ever? It reads like a fallback position, a retreat from the now untenable claim that "AI will revolutionize everything." But that's just my impression.
This is a really poorly written article. The author doesn't seem to understand the meaning of "AI slop" (he equates it with LLM-generated code), and doesn't seem to have thought through the economics of his AI-everywhere assertion.
Anyway, denial or not, his vision of the likely future is one in which individual humans become irrelevant slaves to either AI or massive corporations. Why would any self-respecting human accept that outcome?
> Anyway, denial or not, his vision of the likely future is one in which individual humans become irrelevant slaves to either AI or massive corporations. Why would any self-respecting human accept that outcome?
There is a difference between “accept” as in ‘this is what I would want’ and what actually happens.
There are lots of reasons why things don’t go the way individuals want them to. Predicting the future is hard. Practically speaking, people have constrained agency. Getting organized to make significant change can be hard (i.e. collective action problems) even when everybody knows things are messed up.
Of course it's not going away. But like every other bubble, when it pops the technology can go back to what it's actually good at, instead of being shoved into every single thing imaginable.
Investors are beginning to notice.