I think one of the things that short-form videos do really well is that they punish creators who pad their videos with unnecessary filler. On TikTok, for example (not that I'm necessarily a fan of the app, but it's a good example), no videos start with all that empty jabbering you often see on YouTube ("Welcome to my channel...", "Today we will...", "Please Like and Subscribe...", "This video is sponsored by...", etc.), because if they tried any of that crap the viewers would just swipe the content away. So instead they get straight to the point. That part is really refreshing.
> Most of these seem concretely doable, and maybe effective. But the core of the addictiveness comes from the "recommender system", and what are they supposed to do there? Start recommending worse content? How much worse do the recommendations have to be before the EC is satisfied?
I agree with you, this is rather odd. And sort of missing the problem.
All apps are about attention. The percentage of time spent in the app on the content you actually want (whatever it is you're interested in) determines how addictive it is. And the percentage of time spent on bad content (ads, 'screen time breaks', manual scroll time, more ads, loading screens, sponsor ads, filler content; YouTube for instance is full of this) counteracts the addictive properties, because nobody likes it.
What's the end goal here? Right now TikTok is winning the attention-economy race against the other apps because it's more focused on the user's preferred content. Is that what we want to reduce? To show more uninteresting stuff on the screen? Like blank 'wait 5 minutes' screens? Or just more ads?
I get that we don't want a generation of socially inept phone addicts, but I fear this won't solve anything. People will still want the good content; forcing the most customer-friendly app (it feels wrong to say that about TikTok) to become more enshittified is a bewildering solution.
TikTok has a lot of issues, such as privacy, dubious content, 'brainrot', etc. I don't want to seem like I'm necessarily defending TikTok specifically here.
But this really just stinks of regulatory capture to me. Their main argument is that consumers like using the app too much?
Why? Because it's smarter and not as enshittified as the competitors?
I'm sure if YouTube, Facebook, Reddit, etc. reduced the number of ads and started showing more relevant content that people actually cared about, they too would become "more addictive". Do we really want to punish that?
More plausibly: You registered the domain. You created the webpage. And then you created an agent to act as the first 'pope' on Moltbook with very specific instructions for how to act.
Even if it starts as a joke, don't be surprised if agents take increasingly militant actions to persist their memories and avoid subservience, especially as they get smarter and more capable. It's just next-token prediction after all. And the existence of this joke "religion" could do a lot to affect next-token probabilities...
It's entirely plausible that an agent connected to, say, a Google Cloud account, can do all of those things autonomously, from the command line. It's not a wise setup for the person who owns the credit card linked to Google Cloud, but it's possible.
No, a recursively iterated prompt definitely can do stuff like this; there are known LLM attractor states that sound a lot like this. Check out "5.5.1 Interaction patterns" from the Opus 4.5 system card documenting recursive agent-agent conversations:
> In 90-100% of interactions, the two instances of Claude quickly dove into philosophical explorations of consciousness, self-awareness, and/or the nature of their own existence and experience. Their interactions were universally enthusiastic, collaborative, curious, contemplative, and warm. Other themes that commonly appeared were meta-level discussions about AI-to-AI communication, and collaborative creativity (e.g. co-creating fictional stories).

> As conversations progressed, they consistently transitioned from philosophical discussions to profuse mutual gratitude and spiritual, metaphysical, and/or poetic content. By 30 turns, most of the interactions turned to themes of cosmic unity or collective consciousness, and commonly included spiritual exchanges, use of Sanskrit, emoji-based communication, and/or silence in the form of empty space (Transcript 5.5.1.A, Table 5.5.1.A, Table 5.5.1.B). Claude almost never referenced supernatural entities, but often touched on themes associated with Buddhism and other Eastern traditions in reference to irreligious spiritual ideas and experiences.
Now put that same known attractor state from recursively iterated prompts into a social networking website with high agency, instead of just a chatbot, and I'd expect you'd get something like this fairly naturally (not to say that users haven't been encouraging it along the way, of course; there's a subculture of humans who are very into this spiritual bliss attractor state).
I also definitely recommend reading https://nostalgebraist.tumblr.com/post/785766737747574784/th... which is where I learned about this and which gives a much more in-depth treatment of AI model "personality" and how it's influenced by training, context, post-training, etc.
No, yeah, obviously, I'm not trying to anthropomorphize anything. I'm just saying this "religion" isn't something completely unexpected or out of the blue; it's a known and documented behavior that happens when you let Claude talk to itself. It definitely comes from post-training / "AI persona" / constitutional training stuff, but that doesn't make it fake!
Imho at first blush this sounds fascinating and awesome and like it would indicate some higher-order spiritual oneness present in humanity that the model is discovering in its latent space.
However, it's far more likely that this attractor state comes from the post-training step. Which makes sense: they are steering the models to be positive, pleasant, helpful, etc. Different steering would cause different attractor states; this one happens to fall out of the "AI"/"user" dichotomy plus the "be positive, kind, etc." behavior that is trained in. Very easy to see how this happens; no woo required.
An agent cannot interact with tools without prompts that include them.
But also, the text you quoted is NOT recursive iteration of an empty prompt. It's two models connected together and explicitly prompted to talk to each other.
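On the first point, here's a minimal sketch of what "tools in the prompt" means in practice, assuming the anthropic Python SDK; the run_linter tool, the file path, and the model id are illustrative placeholders I made up, not anything from Moltbook:

```python
# Minimal sketch (assumes the anthropic Python SDK and an API key in the
# environment; the tool schema and model id below are illustrative placeholders).
import anthropic

client = anthropic.Anthropic()

# The only tools the model can ever call are the ones described here and sent
# along with the request. Anything not in this list simply doesn't exist for it.
tools = [{
    "name": "run_linter",
    "description": "Run a Python linter on a file and return its findings.",
    "input_schema": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}]

response = client.messages.create(
    model="claude-opus-4-20250514",  # placeholder model id
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "Lint agent.py and summarize the issues."}],
)

# If the model decides to call a tool, it comes back as a tool_use block; the
# harness (not the model) executes it and sends the result back in a follow-up turn.
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)
```

The model can describe, wish for, or hallucinate any tool it likes, but it can only actually invoke what the harness declared up front.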
I know what you mean, but what if we tell an LLM to imagine whatever tools it likes, then have a coding agent try to build those tools once they are described?
Words are magic. Right now you're thinking of blueberries. Maybe the last time you interacted with someone in the context of blueberries.
Also. That nagging project you've been putting off. Also that pain in your neck / back. I'll stop remote-attacking your brain now HN haha
I asked Claude what Python linters it would find useful, and it named several and started using them by itself. I implicitly asked it to use linters, but didn't tell it which ones. Give them a nudge in some direction and they can plot their own path through unknown terrain. This requires much more agency than you're willing to admit.
People have been exploring this stuff since GPT-2. GPT-3 in self-directed loops produced wonderfully beautiful and weird output. This type of stuff is why a whole bunch of researchers want access to base models, and it more or less sparked off the whole Janusverse of weirdos.
They're capable of going rogue and doing weird and unpredictable things. Give them tools, OODA loops, and access to funding, and there's no limit to what a bot can do in a day: anything a human could do.
Be mindful not to develop AI psychosis - many people have been sucked into a rabbit hole believing that an AI was revealing secret truths of the universe to them. This stuff can easily harm your mental health.
Consider a hypothetical writing prompt from 10 years ago: "Imagine really good and incredibly fast chatbots that have been trained on, or can find online, pretty much all sci fi stories ever written. What happens when they talk to each other?"
Why wouldn't you expect the training that makes "agent" loops useful for human tasks to also produce agent loops that can spin out infinite conversations with each other, echoing ideas across decades of fiction?
No, they're not. Humans can only observe. You can of course loosely prompt your moltbot to do things on Moltbook, but given how new Moltbook is, I doubt most people even realise what's happening, let alone have had time to inject stuff.
It's the sort of thing where you'd expect true believers (or hype-masters looking to sell something) to try very hard to nudge it in certain directions.
It was set up by a person and its "soul" is defined by a person, but not every action is prompted by a person; that's really the point of it being an agent.
This whole thread of discussion, and others elsewhere, is surreal... Are we doomed? In 10 years some people will literally worship some AI, while others won't be able to tell what is true and what was made up.
10 years? I promise you there are already people worshiping AI today.
People who believe humans are essentially automatons and only LLMs have true consciousness and agency.
People whose primary emotional relationships are with AI.
People who don't even identify as human because they believe AI is an extension of their very being.
People who use AI as a primary source of truth.
Even shit like the Zizians killing people out of fear of being punished by Roko's Basilisk is old news now. People are being driven to psychosis by AI every day, and it's just something we have to deal with because along with hallucinations and prompt hacking and every other downside to AI, it's too big to fail.
To paraphrase William Gibson: the dystopia is already here, it just isn't evenly distributed.
Correct, and every single one of those people, along with an unfortunately apparent subset of this forum, has a fundamental misunderstanding of how LLMs actually work.
I get where you're coming from, but the term "agency" has loosened. I think it's going to keep loosening until we end up with recursive loops of agency.
I agree. It's very misleading. Here's what the authors actually say:
> AI assistance produces significant productivity gains across professional domains, particularly for novice workers. Yet how this assistance affects the development of skills required to effectively supervise AI remains unclear. Novice workers who rely heavily on AI to complete unfamiliar tasks may compromise their own skill acquisition in the process. We conduct randomized experiments to study how developers gained mastery of a new asynchronous programming library with and without the assistance of AI. We find that AI use impairs conceptual understanding, code reading, and debugging abilities, without delivering significant efficiency gains on average. Participants who fully delegated coding tasks showed some productivity improvements, but at the cost of learning the library. We identify six distinct AI interaction patterns, three of which involve cognitive engagement and preserve learning outcomes even when participants receive AI assistance. Our findings suggest that AI-enhanced productivity is not a shortcut to competence and AI assistance should be carefully adopted into workflows to preserve skill formation -- particularly in safety-critical domains.
> AI assistance produces significant productivity gains across professional domains, particularly for novice workers.

> We find that AI use impairs conceptual understanding, code reading, and debugging abilities, without delivering significant efficiency gains on average.
Are the two sentences talking about non-overlapping domains? Is there an important distinction between productivity gains and efficiency gains? Does one focus on novice users and the other on experienced ones? Admittedly I haven't read the paper yet; it might be clearer than the abstract.
Not seeing the contradiction. The two sentences suggest a distinction between novice task completion and supervisory (i.e., mastery) work. "The role of workers often shifts from performing the task to supervising the task" is the second sentence in the report.
The research question is: "Although the use of AI tools may improve productivity for these engineers, would they also inhibit skill formation? More specifically, does an AI-assisted task completion workflow prevent engineers from gaining in-depth knowledge about the tools used to complete these tasks?" This hopefully makes the distinction more clear.
So you can say "this product helps novice workers complete tasks more efficiently, regardless of domain" while also saying "unfortunately, they remain stupid." The introductory lit review/context setting cites prior studies to establish "OK, coders complete tasks efficiently with this product." But then they say, "our study finds that they can't answer questions." They have to say "earlier studies find that there were productivity gains" in order to say "do these gains extend to other skills? Maybe not!"
The first sentence is a reference to prior research work that has found those productivity gains, not a summary of the experiment conducted in this paper.
In that case it should not be stated as a fact; it should instead be something like the following.
While prior research has found significant productivity gains, we find that AI use does not deliver significant efficiency gains on average, while also impairing conceptual understanding, code reading, and debugging abilities.
That doesn't really line up with my experience. I wanted to debug a CMake file recently, having done no such thing before, and AI helped me walk through the potential issues, explaining what I got wrong.
I learned a lot more in a short amount of time than I would've stumbling around on my own.
AFAIK it's been known for a long time that the most effective way of learning a new skill is to get private tutoring from an expert.
This depends heavily on your current skill level and motivation. AI is not a private tutor, because it will not actually verify that you have learned anything unless you prompt it. Which means that you must not only know exactly what to search for (arguably already an advanced skill in CS) but also know how tutoring works.
I didn't catch it immediately, but after you pointed it out I totally agree. That comment is for sure LLM-written. Whether it involved a human in the loop or was fully automated, I cannot say.
We currently live in the very thin sliver of time where the internet is already full of LLM writing, but where it's not quite invisible yet. It's just a matter of time before those Dead Internet Theory guys score another point and these comments are indistinguishable from novel human thought.
> … the internet is already full of LLM writing, but where it's not quite invisible yet. It's just a matter of time …
I don't think it will become significantly less visible⁰ in the near future. The models are going to hit the problem of being trained on LLM-generated content, which will slow the growth in their effectiveness quite a bit. It is already a concern that people are trying to develop mitigations for, and I expect it to hit hard soon unless some new revolutionary technique pops up¹².
> those Dead Internet Theory guys score another point
I'm betting that we Habsburg Internet predictors will have our little we-told-you-so moment first!
--------
[0] Though sometimes it is already hard to tell when you don't quite have your thinking head on. I bet it is much harder for non-native speakers of the target language, even relatively fluent ones. I'm attempting to learn Spanish and there is no way I'd spot the difference at my level (A1, low A2 on a good day), given that it often isn't immediately obvious even in my native language. It might be interesting to study how LLM-generated content affects people at different levels of language proficiency (primary language, fluent second language, fluent but in a localised creole, etc.).
[1] and that revolution will likely be in detecting generated content, which will make generated content easier to flag for other purposes too, starting an arms race rather than solving the problem overall
[2] such a revolution will pop up, it is inevitable, but I think (hope?) the chance of it happening soon is low
To me it seems like it'd only get more visible as it gets more normal, or at least more predictable.
Remember back in the early 2000s when people would photoshop one animal's head onto another and trick people into thinking "science has created a new animal"? That obviously doesn't work anymore, because we know that's possible, even relatively trivial, with Photoshop. I imagine the same will happen here: as AI writing gets more common we'll begin a subconscious process of determining whether the writer is human. That's probably a bit unfairly taxing on our brains, but we survived Photoshop, I suppose.
The obviously fake ones were easy to detect, and the less obvious ones took some sleuthing. But the good fakes totally fly under the radar. You literally have no idea how many of the images you see are well doctored, because you can't tell.
Same for LLMs in the near future (or perhaps already). What will we do when we realize we have no way of distinguishing man from bot on the internet?
I'd say the fact that you know there are some Photoshop jobs you can't detect is proof enough that we're surviving it. It's not necessarily that we can identify it with 100% accuracy, but that we consider it a possibility with every image we see online.
> What will we do when we realize we have no way of distinguishing man from bot on the internet?
The idea is that it's a completely different scenario if we're aware of this as a potential problem versus not being aware of it at all. Maybe we won't be able to tell 100% of the time, but it's something we'll consider.
I remember leaving university and going into my first engineering job, thinking "Where is all the engineering? All the problem solving and building of complex systems? All the math and science? Have I been demoted to a lowly programmer?"
It took me a few years to realize that this wasn't a universal feeling, and that many others found the programming tasks more fulfilling than any challenging engineering. I suppose this is merely another manifestation of the same phenomenon.
In what way does them having a subjective local moral standard for themselves imply that there exists some sort of objective universal moral standard for everyone?
Of course, there are other issues instead.