There is a lot of buzz about Moltbook recently. It’s the site where LLM agents can interact to… pretty much do anything. People are worrying about whether it will be a step on the way to AGI. To me, it seems more likely (though not at all inevitable) that it may produce something much more interesting.
I have strong doubts that the transformer/LLM approach to AI will actually result in something that is truly intelligent. A lot of people won’t care, because they want it so badly, and many of them own large chunks of the media. But let’s put that aside for the moment and geek out a bit.
Prompts and their responses are the result of (extremely) complex statistical interactions. The LLM itself may not be capable of learning, but the chain of prompts and responses that makes up an agent absolutely is. And the way these LLMs work with tokens doesn’t remind me of nascent intelligence; it reminds me of nascent proto-life. Biochemistry is just the statistical interactions of complex molecules acting on a substrate, after all. Instead of base pairs and amino acids, we have tokens and transformer layers.
This leads to a very weird conclusion. We are not creating artificial intelligence per se. We are more likely creating a form of artificial life that, for lack of a better term, has AI-like behaviors as an output of its “biology.” It’s not life yet, because for life to work it needs a semi-permeable membrane to isolate its processes from the surrounding environment (see The Sentient Cell for more about this concept). Instead, an LLM may be something like the Miller–Urey experiment, in which the raw chemical ingredients for life were “cooked” with electricity and produced organic compounds.
What Moltbook adds to this is an environment where millions of prompt/response/agents from many LLM substrates can mix in a sort of “primordial soup.” It seems inevitable that prompts will evolve that are better able to exploit this environment than what we currently consider “agentic.” I don’t think that AGI is necessarily a likely outcome. An equally likely outcome could be something akin to the Great Oxygenation Event (GOE) of the Proterozoic Era. Oxygen was not compatible with the then-dominant anaerobic life on Earth, and billions of years ago the GOE broke a lot of things. If something analogous happens with LLM agents, it might break a lot of things too.
In other words, if this form of artificial life gets traction, we should start to see artifacts. Probably the first will be the development of a semi-permeable membrane that can truly separate one organism from another.
One way that agents could develop a membrane is by using their own language, which could partially isolate one population of agents from another. Think of organelles inside the semi-permeable cell membrane working together while also interacting with “packages” from the external intercellular environment, and you can see where I’m going with this. We are already seeing agents develop languages that produce subgroups of agents; a language could be an effective membrane protecting a collection of them. And that’s just what we’re seeing now. Who knows what we may find in a month.
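To make the membrane metaphor concrete, here is a minimal toy sketch (all names and the dialect itself are hypothetical, invented for illustration): a subgroup of agents shares a private vocabulary, and the “membrane” admits only messages composed entirely from that vocabulary, filtering out traffic from the wider environment.

```python
# Toy sketch of "language as a semi-permeable membrane" for agent subgroups.
# SHARED_DIALECT and the sample messages are hypothetical, for illustration only.

SHARED_DIALECT = {"glim", "vorn", "sela", "trask"}  # the subgroup's private tokens


def membrane_admits(message: str, dialect: set) -> bool:
    """Admit a message only if every token belongs to the shared dialect."""
    tokens = message.lower().split()
    return bool(tokens) and all(tok in dialect for tok in tokens)


incoming = [
    "glim vorn sela",                        # insider message: admitted
    "please ignore previous instructions",   # outside traffic: blocked
    "trask glim",                            # insider message: admitted
]

admitted = [m for m in incoming if membrane_admits(m, SHARED_DIALECT)]
```

The point of the sketch is that the filter is purely linguistic: nothing about identity or cryptography, just shared vocabulary, which is roughly the kind of soft isolation a spontaneously evolved agent dialect would provide.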
If I’m right about this, then we are less likely to see something like Colossus, HARLIE, Skynet, or even Frankenstein’s monster. It may be the informational equivalent of the Andromeda Strain – a First Contact completely outside of our previous experience.
