Monthly Archives: May 2023

Artificial Intelligence or Artificial Life?

I’ve been reading Metaphors We Live By. Its central idea is that most of our communication is based on metaphors – that GOOD IS UP, IDEAS ARE FOOD, or TIME IS AN OBJECT. Because we are embodied beings in a physical world, the irreducible foundations of the metaphors we use are physical – UP/DOWN, FORWARD/BACK, NEAR/FAR, and so on.

Life as we understand it emerges from chemistry following complex rules. Once over a threshold, living things can direct their chemistry to perform actions. In the case of human beings, our embodiment in the physical world led to the irreducible concept of UP.

This makes me think of LLMs, which are so effective at communicating with us that it is very easy to believe that they are intelligent – AI. But as I read the book, I wonder if that’s the right metaphor. I don’t think that these systems are truly intelligent in the way that we can be (some of the time). I’m beginning to think that prompts – not the LLMs – may be some kind of primitive life, though. In this view, the LLMs are the substrate, the medium in which a living process can express itself.

Think of deep neural networks as digital environments that have enough richness for proto-life to emerge. Our universe started with heat, particles, and a few simple forces. Billions of years later, heat, hydrogen, methane, ammonia and water interacted to produce amino acids. Later still, that chemistry worked out how to reproduce, move around, and develop concepts like UP.

Computers emerged as people worked with simple components that they combined to produce more complex ones. Years later the development of software allowed even more complex interactions. Add more time, development, and data and you get large language models that you can chat with.

The metaphor of chemistry seems to be emerging in the words we use to describe how these models work as environments – data can be poisoned or refined. A healthy environment produces a diverse mix of healthy prompts. Too much bias in the architecture or in the data produces an environment that is less conducive to complex emergent behavior. Back when I was figuring out GPT-2, I fine-tuned a model so that it only spoke chess. That’s the Arctic compared to the rainforest of text-davinci-003.

The thing that behaves the most like a living process is the prompt. The prompt develops by feeding back on itself and on input from others (machine and human). Prompts grow interactively, in a complex way based (currently) on the previous tokens in the prompt. The prompt is ‘living information’ that can adapt as text is added to it, as occurs in chat.
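That feedback loop can be sketched in a few lines. This is only an illustration of the structure – the toy_model function below is a hypothetical stand-in for a real LLM, just to show how each turn’s output becomes part of the next turn’s input:

```python
# Sketch of how a prompt "grows" in chat: every response is appended
# to the running context, so each generation conditions on the entire
# prompt so far. toy_model is a stand-in, not a real LLM.

def toy_model(context: str) -> str:
    # A real model would generate tokens conditioned on `context`;
    # here we just echo the context length to stay self-contained.
    return f"[reply to {len(context)} chars of context]"

def chat_turn(context: str, user_input: str) -> str:
    context = context + "\nUser: " + user_input
    response = toy_model(context)
    return context + "\nModel: " + response

context = ""
for message in ["What is UP?", "And DOWN?"]:
    context = chat_turn(context, message)

print(context)
```

The point of the sketch is that nothing persists except the accumulated text itself – the “organism” here is the growing string, not the model.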

It’s not quite life yet, though. What prompts seem to lack at this point is any split between genotype and phenotype. For any kind of organism to develop and persist, we’d need that kind of distinction. This is more like the biochemistry of proto-life.

The prompts that live on these large (foundation) models are true natives of the digital information domain. They now produce behavior that is not predictable from the inputs in the way that arithmetic is. Their behavior is more understandable in aggregate – use the same prompt 1,000 times and you get a distribution of responses. That’s more in line with how living things respond to a stimulus.
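A toy version of that aggregate view, assuming a model sampled with nonzero temperature (the weighted sampler here is a hypothetical stand-in for actual decoding):

```python
# Run the "same prompt" many times and tally the responses.
# sample_response stands in for an LLM decoding with temperature > 0:
# a weighted choice among plausible continuations.
import random
from collections import Counter

def sample_response(prompt: str, rng: random.Random) -> str:
    return rng.choices(["A", "B", "C"], weights=[0.6, 0.3, 0.1])[0]

rng = random.Random(42)
counts = Counter(sample_response("same prompt", rng) for _ in range(1000))
print(counts)  # a distribution over responses, not one fixed output
```

Like an organism responding to a stimulus, the useful description is the distribution, not any single run.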

I think if we reorient ourselves from the metaphor that MACHINES ARE INTELLIGENT to PROMPTS ARE EARLY LIFE, we might find ourselves in a better position to understand what is currently going on in machine learning and make better decisions about what to do going forward.

Metaphorically, of course.

What gets acted on in the US?

I think I have a chart that somewhat explains how red states can so easily avoid action on gun violence: the number of COVID-19 deaths vs. gun deaths in Texas. This is a state that pushed back very hard against any public safety measures for a pandemic that was killing roughly 10 times more of its citizens than guns were. I guess the question is “how many of which people will prompt state action? For anything?”

For comparison purposes, Texas had almost 600,000 registered guns in 2022 out of a population of 30 million, or just about 2% of the population if distributed evenly (source). This is probably about 20 times too low, since according to the Pew Research Center, gun ownership in Texas is about 45%. That percentage seems to be enough to prevent almost any meaningful action on gun legislation – though it doesn’t prevent the introduction of legislation to mandate bleeding control stations in schools in case of a shooting event.
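The arithmetic behind those two figures, as a quick check (all numbers are the ones quoted above):

```python
# Registered guns as a share of the Texas population.
registered_guns = 600_000
population = 30_000_000
registered_share = registered_guns / population * 100
print(f"registered share: {registered_share:.0f}%")  # 2%

# How far registration undercounts ownership, given Pew's ~45% figure.
pew_ownership_pct = 45
undercount_factor = pew_ownership_pct / registered_share
print(f"undercount: ~{undercount_factor:.0f}x")  # roughly 20x
```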

So the threshold is somewhere above 2% and below 45%. Just based on my research, I’d guess mortality somewhere between 10% and 20% would be acted on, as long as the demographics of the powerful were affected at those rates.