
The difference is now there is the ability to do it at scale, and convincingly.

Bots on social media that can argue with you, share experiences, other forms of astroturfing, etc. Bots that can directly respond to your questions.

The difference IMO will be that previously, making interactive propaganda convincing was very difficult and expensive. And now literally anyone can ask for "a CDC article describing the dangers of [literally anything]" and get an article so convincing that even diligent people won't immediately recognize it as fake.

Currently, unless you know what you're looking for, GPT-4 is very convincing. I think even the HN crowd is already seeing bots on HN/Reddit/etc. and not realizing it.



This capability has been available for several years now.

In 2019, OpenAI said they couldn't fully release GPT-2 because it would be too dangerous. Now anyone can run a more powerful model.

In what ways has the predicted danger manifested in reality?


AFAIK the previous models couldn't convincingly continue a conversation chain; they were mostly limited to individual posts/comments, since they became rather obvious in an actual back-and-forth. It's fully interactive now.

And it's not just mega-corporations/nation states; soon everyone will have access to something vastly better than GPT-2.

---

I've been in the GPT-3 beta, and we have had community-trained GPT-2 models for a while as well. Those seriously don't compare to GPT-3.5/4.

Those severely underperform even LLaMA + fine-tuning now, GPT-3.5 is a mile ahead of that, and GPT-4 leaves everything in the dust. Yes, we've had it, and yes, those models can often pass the "casual glance" Turing test, but they suck in comparison to GPT-3.5.

What I'm trying to say is that propaganda/astroturfing is no longer limited to a single post or comment. It can now be an entire conversation chain.

----

For example, "I was attacked by [person]'s dog"

You ask, "Oh man, did you get hurt?"

They respond: "Luckily, I didn't get seriously injured, but it was a pretty scary experience. The dog managed to bite me on my left arm, but I was wearing a thick jacket which protected me from any deep punctures. I still got some bruises and scratches, but nothing too severe. I'm just really grateful it didn't escalate further.

I talked to the owner, and they apologized profusely, assuring me that they would be more careful in the future. It turns out that the dog has had some behavioral issues in the past, but they've been working with a trainer to help correct them."

GPT-2 could never, ever compare to that. This response IMO would pass as human to 99% of people who weren't specifically looking for it.
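
To make the "entire chain" point concrete: a conversational bot like that is little more than a chat-completion call where the persona lives in a system prompt and every prior post gets appended to the context. Below is a minimal sketch, assuming the openai Python client (>= 1.0) and API access; the persona text, model name, and thread contents are placeholders I made up for illustration, not anything from the comment above.

  # Minimal sketch of a persona-driven reply bot.
  # Assumes the openai Python client (>= 1.0) and an OPENAI_API_KEY in the env.
  from openai import OpenAI

  client = OpenAI()

  # The persona sits in the system prompt; the growing message list is what
  # lets the bot stay consistent across an entire comment chain.
  messages = [
      {"role": "system",
       "content": "You are a casual forum commenter. Stay in character and "
                  "keep your story consistent across replies."},
      {"role": "assistant", "content": "I was attacked by [person]'s dog"},
      {"role": "user", "content": "Oh man, did you get hurt?"},
  ]

  response = client.chat.completions.create(
      model="gpt-3.5-turbo",  # placeholder model name
      messages=messages,
  )
  reply = response.choices[0].message.content

  # Appending the reply preserves context for the next turn in the thread.
  messages.append({"role": "assistant", "content": reply})
  print(reply)

The design point is just that the loop is trivial: each new comment in the thread is one more entry in the message list, so keeping a coherent multi-turn story requires no extra machinery at all.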


I'm not sure how effective such bots will be. People have tried having masses of actual humans defend their country in the comments of news articles and the like. That often seems to backfire rather than help.


They’re generally not very good at it, though. AI can be much more effectively tailored.



