Play is a path to discovery. There's a lot of hype and vaporware in AI these days. There's also some real value there, but I'm not sure we've really figured out what the real value propositions are. I strongly suspect the inherent limitations of AI mean that many of the current applications are doomed to failure. Playing with AI will give people a better internalized sense of its strengths and weaknesses, and it will do that broadly, in ways that let more people with more problems recognize when these tools can actually help them. The real problem we have today is that most people just don't have a good intuition for that.
Unfortunately, what we see so often is a lot of money being poured into something that is just a sinkhole. Take NFTs, for instance. People get swept up by fads far more easily than by genuine improvements.
I am actually in favor of a global debate on AI at this point, probably led by the UN so as to reach as many nations as possible. We already know this is going to be needed as the year rolls on, because the ball is moving fast.
NFTs are still happening, but the insane multi-million-dollar monkey jpegs are not really happening anymore. At GDC this year web3 had a big presence; most firms are investigating it or getting started with it in some way.
Like all tech, there is hype and then a trough of disillusionment. NFTs are in the trough, and AI chat tools will get there soon, just like self-driving cars are in it right now.
> NFTs are still happening, but the insane multi-million dollar monkey jpegs are not really happening anymore
I think all the high valuations are money laundering. You create a pre-order system and get your friends on it, then buy their monkey pics for your near-stolen (FTX, etc) crypto and they give some back under the table.
> web3 had a big presence
The NFT tech is pretty cool. By packaging a friendly UI around it, it lets people be their own identity providers.
I am hopeful that NFTs along with zero knowledge proofs will enable completely open markets with no intermediary for the transfer of digital goods and maybe even services. That is very cool and more important than just monkey jpegs.
It is one of those crystal ball moments; nobody really knows what impact it’s going to have and at what speed. Will my job (frontend dev) be gone by year’s end? Maybe, maybe not.
But if the numbers do get scary, I think we do need to have a collective sit down as societies come under stress of job loss.
Plenty of people thought they did, esp. hucksters and rubes.
There is a use for NFTs, and I could see implementations being a thing in industries like title insurance and deeds. But as it stands, right now it's mostly bag holders and money laundering.
I'll save you the time of reading this garbage hot take. He is saying that AI will produce a lot of content, thus leading to time wasted (memes, funny stuff, lame stuff, pictures, etc.). that is all. that's the entire article.
I sense an agenda in how many empty hit piece stories we are seeing about LLMs from mainstream sources. No not some paranoid lizard people agenda (thank god) but just the force of the status quo protecting itself.
Institutions are clearly threatened by this tech as it promises to upend many professional fields and economic relationships. In my case I'm seeing teachers suggest that GPT will ruin education, as though we haven't said the same thing about calculators or the internet or whatever the next thing is for generations.
Like many similar articles in the NYT and WaPo, this article is nearly anti-intellectual in its lack of basic research and curiosity. It's like these people haven't used the thing for even a half hour. Yet they speak with authority on the implications.
That demonstrates fear or gatekeeping. It demonstrates an interest in slowing down the change we are seeing. And it's pretty embarrassing to watch because you can't really hide that agenda when you don't know what you are talking about and are only coming from a place of defending the status quo.
Who buys into this? Who is this for? Who is convinced? They must be talking to people who want this answer and find it comforting. But given that it's not really very insightful or true, and has no shelf life, what's the point?
In the ancient times of the year 2000, these same pieces were being written about how the Internet would ruin society and dumb down the population. This is hard to remember now, because the Internet ruined society and dumbed down the population.
Oh, I've seen them written about widespread television (somehow those articles lasted into the '80s), reality shows, fast-paced news, videogames, RPGs, game books, the internet, web chatting (somehow they missed ICQ et al.), Orkut, MMORPGs, online games (Facebook style), smartphones, modern social networks...
Oh, and there are the evergreen ones about calculators and encyclopedias (which changed into Wikipedia). Of course there is the famous one from ancient Greece about writing... So, yeah, consider the population dumbed down already.
> What you have discovered is a recipe not for memory, but for reminder. And it is no true wisdom that you offer your disciples, but only the semblance of wisdom, for by telling them of many things without teaching them you will make them seem to know much while for the most part they know nothing. And as men filled not with wisdom but with the conceit of wisdom they will be a burden to their fellows.
Socrates in Phaedrus. I searched it with ChatGPT. Could only vaguely remember it and indeed ChatGPT found the passage. Socrates was right...
If this is truly what you got from the article, and it is not just bad-faith soapboxing, it might be a good idea for you to work on reading things a little less quickly, and with a little more openness to receiving something from a text you don't already have your mind made up about.
Because I gotta tell ya, whatever you want to believe, if you were one of my students and you were tasked with summarizing this article, you would probably fail the assignment.
AI can help generate text, but reading and comprehension are still things you need to care about as a human, especially if you are trying to help people save time and understand things! It's an honorable intention, but it's still something you have to follow through with and do well. Otherwise you are doing more harm than good.
If you're interested, maybe a little direction:
1. What kinds of things does the author identify as "wasting time"?
2. Does the author think wasting time and play are bad things?
3. Does the author in this article think AI is bad?
no, i didn't read the article. not in its entirety. i read the intro and the conclusion. I'm pretty sure I nailed 90% of what the author was saying. you gotta chill out.
okay well you don't need to apologize to me either lol. you were just sharing your opinion. if it means so much to you, how would you sum up what the author says in a few sentences? i tried clicking on the article but got paywalled the 2nd time around.
Seems a lot of people are taking this as some kind of slight against AI. My read was just that the tech has (obviously) a lot of time wasting potential as people play with it and that that's likely to make it more lasting than something purely utilitarian. This checks out to me. The obvious reason current "AI" is so popular is because everyone can play with it and get results they understand. It was a lot less fun doing MNIST classifier tutorials.
I’m surprised that you find it surprising. There’s a Twitter account called The Pacific that exists for the sole purpose of mocking the kind of articles that The Atlantic publishes.
A while back, one of their reporters blocked me on Twitter for mildly questioning the government’s narrative around a political event.
The article itself isn't actually dunking on AI, the title is a (kinda clever) clickbait.
If you actually read it, the author uses image gen models and such a lot, and is opining on how technology that allows us to tinker and "waste time" can be more impactful than technology that is simply "productive."
In the coming months, all these idiotic hot takes are going to get washed away by the raging sea of value transformation that is generative AI. No one will even have the strength to stand up in the surf to blabber on like this. The amount of naive reactive denialism is embarrassing.
This question is totally baffling to me. Have you used these tools? The capabilities are astounding and dreamlike (and also a nightmare).
I was born in the early 80s. There are only 2 other techno-socio moments in my life that have hit as hard: (1) the web around 1994, and (2) the smartphone. The appearance of the current iteration of generative AI for public consumption in the last few months dwarfs both.
My own daily working patterns, expectations for my future, for my children's desirable skillsets have been entirely shifted. The entire value of generating information has been totally upended. If it can be created by AI -- and it's looking like most information-based things can/will be soon -- then humans now have a very different position in the world. Why I need to list any specifics at this point is beyond me.
This is going to spread through the world like a fire.
I’ve yet to hear a really compelling use case for cryptocurrency that doesn’t seem like it’d be better solved in some other way.
It’s incredibly obvious how AI tools can be applied to many different problems.
So even if it’s the same people “pushing” both, that doesn’t mean they are the same. These people are tech enthusiasts who want to make money, and they will be excited by any promising new tech.
Are there examples where this is possible but other forms of transfer are not possible? From my experience working with crypto exchanges, there are a lot of regulations and processes that make them just as slow as a bank in a pinch. And if your country is war torn, I find it unlikely any exchanges would even operate effectively there.
This type of use mostly seems like a crypto bro fantasy to me, but I’d definitely be interested to see some reporting on this actually working.
You’re over-fitting. Hype maximizers will hype everything, fundamentally useful or not. They are not a discriminating signal for whether or not something is useful.
Nah I made this account with its ftxbro name to make fun of them and you can even see the first comment I made on the account. I think cryptocurrency and nft speculation is some beanie babies stuff, but I think large language models as trained on the whole internet with these data centers full of sci-fi GPGPUs are amazing transformative tools at the level of the discovery of fire or the invention of language.
I completely agree with you. The only way this isn't true is if progress ceases because of some kind of socioeconomic catastrophe. We have already gotten over whatever hurdles there were that made supernormal AI capability look potentially intractable.
I don't think your comparisons to civilization-buttressing inventions like fire, writing, and language (the last, to be fair, an evolutionary invention, not a cultural one) are unfair or exaggerated. Things are about to get very strange in short order.
That scammers who pushed the former are now pushing the latter, doesn't tell you much about the future.
ChatGPT is, IMO, about the same value as a freshly graduated programmer — it would be a mistake to let it loose unsupervised, but on the other hand, what do junior devs cost to hire these days?
These idiotic hot takes will be washed away by the veritable ocean of piss that is uniquely AI-generated hot takes, designed to constantly spam / market / propagandize.
The level of bullshit will be unparalleled. Because of AI.
Just one side of the coin. On the other almost all legitimate knowledge work is going to have a whole mass of previously onerous requirements rendered trivial.
This reminds me a bit of Renfield (or Rick Moranis in Ghostbusters) waiting for Dracula/Gozer to come. A little over the top, and not something you'd normally see about anything that wasn't a massive hype wave (not that I'd want to be around when The Traveler arrives)
I work in O&G so I can give a few domain-specific examples I've worked on just in the last two weeks:
- Smart pre-job safety app that looks across past observations and incidents to make job-specific recommendations about safety risks
- Automatically summarize detailed hourly rig reports into daily summary reports. This is an ongoing savings of $1.2 million per year in labor
- Review rig reports for evidence of formation gas for a study on casing integrity. This was a one-off savings of 400 engineer hours
- Perform clustering and topic modeling of employee HR development goals and schedule training classes based on the areas of greatest demand
- Custom chatbot to answer questions about how to use SAP. We have ~50 training PPTs that nobody reads. The chatbot answers questions using the documents
What's amazing is there's all this low-hanging fruit to be captured and spinning up these solutions takes mere hours.
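To make the last example concrete: the core of such a chatbot is retrieval plus generation. Here is a minimal sketch of the retrieval half only; the SAP passages are invented for illustration, and a real system would extract text from the actual training decks and use embedding search instead of this deliberately naive word-overlap score, then hand the best passage to an LLM to phrase the answer.

```python
# Sketch of the retrieval step behind a "chatbot that answers questions
# using the documents." Passages are hypothetical stand-ins for text
# extracted from the training PPTs.
import re

def tokenize(text):
    # Lowercase and split into alphanumeric word tokens
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def best_passage(question, passages):
    """Return the passage sharing the most word tokens with the question."""
    q = tokenize(question)
    return max(passages, key=lambda p: len(q & tokenize(p)))

passages = [
    "To create a purchase order in SAP, use transaction ME21N.",
    "Time sheets are entered weekly via transaction CAT2.",
    "Vendor master data is maintained with transaction XK02.",
]

print(best_passage("How do I create a purchase order?", passages))
# prints the ME21N passage, which shares the most words with the question
```

The point of the design is that the LLM never has to "know" SAP; it only has to restate whichever passage the retrieval step surfaces, which is what keeps hallucination in check.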
Inefficient use of time is a business problem. LLMs save work hours we would usually spend on mundane tasks.
Regular office workers can make automations, devs can enhance documentation with LLMs, LLM powered search and spreadsheets, these all save time.
Businesses that don't take advantage of this will be left in the dust. In a few years not using it will be like not allowing your employees to use Google, because there is fake news on the internet.
The time efficiency gained will outweigh the cost of hallucinations, just as people have, for the most part, learned to use the internet for their benefit despite all of the garbage on the net.
This depends on so many variables, including complexity of the average problem, the obscurity of the average problem, having documentation or a knowledgebase with a known solution.
ChatGPT hallucinates so many things that don't exist, e.g., buttons or menu options in programs that don't exist, not considering file compatibility issues etc. I could go on for hours.
If the average problem is "I got locked out of my account" or basic and common stuff like this that just warrants giving someone a link or telling them to reboot their router, sure, maybe it'll be better than dealing with a human being in x out of y occurrences, hallucinations notwithstanding.
If it's something more complex like needing an NGINX configuration when a company has only ever considered Apache htaccess in the past, the customer is probably seven kinds of fucked. I wasted days trying to get something other than nonsense out of ChatGPT for an NGINX config, even going so far as to feed it documentation and the lines it would need for implementing URL rewrites. It kept hallucinating things that didn't exist in the documentation, and was a complete waste of time. Even after correcting it umpteen times, it still gave the same response. There's no reasoning applied.
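For reference, the kind of URL rewrite being described is short in NGINX. This is a generic sketch with invented paths, not the configuration from the anecdote above:

```nginx
# Hypothetical Apache .htaccess rule:
#   RewriteRule ^blog/([0-9]+)$ /article.php?id=$1 [L]
#
# NGINX equivalent, placed inside the server { } block
# (NGINX has no .htaccess; rewrites live in the main config,
# and the URI being matched starts with a leading slash):
rewrite ^/blog/([0-9]+)$ /article.php?id=$1 last;
```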
Is there potential? Sure. But it isn't replacing human beings in a lot of cases. And for even more cases, just using a search function on a knowledgebase or search engine will yield more accurate information than trusting it isn't hallucinating. I have wasted far more time than I've saved due to hallucinations.
If all customer support does is search the problem on a knowledgebase, sure, it makes sense, because the support agent is already just doing what the LLM does if they can't apply logic and reasoning to the query. But why not just access the knowledgebase directly and not risk hallucinations?
Sure, it's not as good as a human rep, but it's still better than the dumb chatbots we are forced to interact with now. May be better than searching a poorly updated knowledgebase too.
It might be better than most reps considering not infrequently, the big corps hire the cheapest labour and quality of English is a major problem in my experiences. Especially if all the rep does is query something in a knowledgebase.
I disagree very strongly over the knowledgebase being inferior to the LLM data set.
The data sets of LLMs are not stored in a human-readable format. The data sets of LLMs are likely a worse means of storing data because the outputs are prone to hallucination, and if an output can only be rendered via an LLM itself, well, therein lies your problem. You can't trust that it's right and not hallucinating if you can't read the data it is using.
do you have an example? from my experience most customer-support-related issues are self-inflicted (e.g. not allowing the customer to self-service all tasks and instead gatekeeping them behind some support rep)
I do not see how an LLM will help in a way that was not achievable before.
Previous chatbots were too dumb to handle said tasks. LLMs may be able to understand the internal APIs and perform the task, without the company having to design a UI for self service.
One of the topics of the article (though he doesn't mention PCs themselves, but other technologies).
> In 1994, the economists Sue Bowden and Avner Offer studied how various 20th-century technologies had spread among households. They concluded that “time using” technologies (for example, TV and radio) diffused faster than “time saving” technologies (vacuum cleaners, refrigerators, washing machines).
This applies to any journalist who reproduces opinions (often his/her own), instead of presenting relevant facts. However, there are good articles and good journalism now and then, even if that seems to be the exception rather than the rule these days.