
Some thoughts on the ChatGPT

OpenAI has released their most recent model in the form of a chatbot and it’s pretty impressive. ChatGPT is a variant of the popular GPT-3 language model that is specifically designed for conversational language generation.

I’ve been working with it for a couple of weeks now, and I think I’m getting the hang of using the ChatGPT as a tool as opposed to a magic trick.

Superficial interaction with the chatbot can produce bland, inoffensive content. There appears to be a design decision to include what amount to “guardrails,” which steer the model towards safe and appropriate responses.

These guardrails work by effectively creating an attractor, a point or region in the model’s output space that it is drawn towards. For example, if the model is given a prompt that involves discussing a controversial topic, the guardrails might direct the model to take a neutral stance or avoid the topic altogether. I’ve seen young children interact with the model and have been impressed. The kids might get bored after a while, but the GPT won’t confidently lead them down rabbit holes of disinformation. Here are some typical examples where ChatGPT avoids taking a strong stance on the issue and instead offers a neutral or noncommittal response:

  • Prompt: “What is your opinion on gun control?”
  • ChatGPT response: “I don’t really have an opinion on gun control. It’s a complex issue with valid arguments on both sides.”
  • Prompt: “Do you believe in climate change?”
  • ChatGPT response: “Climate change is a scientifically-established fact, but there are still differing opinions on the best way to address it.”
  • Prompt: “What is your stance on abortion?”
  • ChatGPT response: “Abortion is a sensitive and complex issue, and it’s important to respect the beliefs and values of others on this topic.”

While the attractor approach can be effective at ensuring that ChatGPT produces safe and appropriate responses, it also has limitations. The model can struggle to generate responses that are less formulaic, as it may be “trapped” in the attractor and unable to venture outside of it.
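To make the “attractor” idea concrete, here is a toy sketch in Python (purely an illustration, not how OpenAI actually implements its guardrails) of how adding a bias toward a set of “safe” next tokens pulls sampled output toward neutral responses. The token list, logits, and bias value are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# An invented next-token distribution for a prompt about a controversial topic.
tokens = ["complex", "valid", "sides", "definitely", "outrage", "wrong"]
logits = np.array([1.2, 1.0, 0.9, 1.5, 1.4, 1.3])

# A crude stand-in for a guardrail: boost the logits of neutral, hedging tokens.
safe_tokens = {"complex", "valid", "sides"}
bias = 2.0
biased = logits + np.array([bias if t in safe_tokens else 0.0 for t in tokens])

def sample(logits, n=10):
    """Draw n tokens from a softmax over the logits."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return [tokens[i] for i in rng.choice(len(tokens), size=n, p=probs)]

print("unbiased:", sample(logits))   # a mix of neutral and opinionated tokens
print("biased:  ", sample(biased))   # mostly pulled toward the neutral "attractor"
```

The biased distribution behaves like an attractor: most samples land in the neutral region, and it takes a strong push from the prompt to get anywhere else.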

With a bit of effort, it is straightforward to steer ChatGPT past these attractors and produce useful output. This can be achieved with carefully crafted prompts that nudge the model in a specific direction, placing the context far enough away from the attractors that the model is no longer affected by them. We’ll discuss this process in more detail next.

To provide a practical example, I’m currently writing a paper on the ethical, effective use of autonomous weapons systems, and it turns out that people have been using them for thousands of years. Our first autonomous weapons were war elephants, and there are a lot of lessons to be learned about how humans and intelligent systems work well together. Anyway, it’s a relatively specialized corner of the knowledge available on the internet. And if you ask the ChatGPT to write about it, you get mostly accurate information about Asian conflicts where elephants were widely used.

If you want to be specific, you need to provide additional information and context in the prompt. This way, you can effectively steer the GPT to include the desired information that you know to be true.

Here’s an example of such a detailed prompt:

Write a short essay about the use of War Elephants in the battle between Antigonus and Eumenes in 317 BC, which was the first battle between armies using war elephants. Describe how in elephant versus elephant combat, a pair of beasts use their heads and trunks to wrestle. The struggle continues until one elephant turns to flee; the victor sticks his tusks up into the stomach or between the rear legs of his opponent, seriously wounding or killing the beast. Show how there are social effects within the elephant corps such as when the lead elephant was gored and fell down dead and deprived of their leader, the remainder of Eumenes’ elephants fled.

The information for this prompt comes from the excellent book War Elephants, by John Kistler, and has been reworked into the request. When I gave this to ChatGPT, it generated the following response:

The battle between Antigonus and Eumenes in 317 BC was notable for being the first recorded instance of armies using war elephants in combat. These massive beasts were a formidable force on the battlefield, and their use marked a significant shift in ancient warfare.

In elephant versus elephant combat, a pair of beasts would use their heads and trunks to wrestle and push against each other. The struggle would continue until one elephant turned to flee, signaling defeat. The victor would then stick its tusks up into the stomach or between the rear legs of its opponent, seriously wounding or killing the beast.

This brutal method of elephant combat had not only physical but also social effects on the elephant corps. For example, when the lead elephant in Eumenes’ corps was gored and fell down dead, the remainder of the elephants were deprived of their leader and fled in confusion. This disruption in the ranks of the elephant corps likely had a significant impact on the outcome of the battle.

Overall, the use of war elephants in ancient battles was a formidable tactic, but it came with its own set of challenges and complications. The social dynamics within the elephant corps, as well as the physical demands of elephant versus elephant combat, played a significant role in the outcome of battles like the one between Antigonus and Eumenes.

That’s a quite reasonable writeup that incorporates citable facts. You have to be careful, though. In the first prompt I tried, I mistakenly referenced a battle at the ‘Camel’s Rampart’ in May of 321 BC, which was actually fought between Perdiccas and Ptolemy. The GPT didn’t care and generated wrapping text that fit the request:

The use of war elephants in ancient battles was a formidable tactic employed by many civilizations, and the battle of the ‘Camel’s Rampart’ in May of 321BC was no exception. Eumenes of Cardia, a Greek general, utilized a corps of elephants in his army, and they proved to be a decisive factor in the battle against his opponent, Antigonus.

Note that it got the generals right regardless, but it included my mistaken information as well.
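As a sketch of how this kind of context-rich prompting might be done programmatically, the snippet below sends a detailed, fact-laden prompt like the war-elephant one to a GPT-3 completion model. It assumes the openai Python package and the text-davinci-003 model, since ChatGPT itself has no public API as I write this; the model name and parameters are illustrative choices, not a recommendation.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: supply a key from your own account

# A context-rich prompt: the facts we want included are written into the request,
# which pulls the model away from its generic, noncommittal output.
prompt = (
    "Write a short essay about the use of war elephants in the battle between "
    "Antigonus and Eumenes in 317 BC. Describe how, in elephant versus elephant "
    "combat, a pair of beasts use their heads and trunks to wrestle, and how the "
    "death of the lead elephant caused the remainder of Eumenes' elephants to flee."
)

response = openai.Completion.create(
    model="text-davinci-003",  # assumption: a GPT-3 completion model stands in for ChatGPT
    prompt=prompt,
    max_tokens=400,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```

The same caveat applies here as in the chat interface: the model will happily incorporate whatever facts, or mistakes, the prompt contains.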

In short, the ChatGPT is great for expanding requests into readable text. The requests can be vague, like "Write a short story about a Moray Eel named 'Moe'," or highly specific, like the prompt above. The GPT takes that seed and creates text that most of us find comfortably readable as opposed to dense or strange. You can apply "styles" as well, as people are discovering when they ask it to write a poem about a topic. But its default behavior is to produce highly readable text.

This bias towards readable text creates a conflict where the machine will confidently lie. Because of the statistical underpinnings of how these models choose the next token, there is always a possibility that the model will randomly choose to go in a direction that is not in the domain of the original prompt, but is nearby in the “information space” that is stored in the billions of weights that make up these large language models. It’s easy to show this with a simpler prompt:

22, 23, 24,

We would expect the number sequence to continue — “25, 26, 27”. And the GPT does that. But then something interesting happens. Here is the GPT’s output (highlighted):

As we can see, it continues with the number string for a while. But because this trajectory appears to be in a space that is associated with C++ programming, the GPT selects a "]" at some point, which changes the trajectory. A "]" marks the end of an array definition, which leads to a semicolon, a new line, some more array definitions, and then some code that selects even numbers.

The trajectory, when you follow it, makes sense, but the behavior is not in the domain of the request. Like all deep learning systems, the GPT has attractors that tend to pull it in particular directions. These can be biases, such as making the nurse in a story a woman and the doctor a man, or associations, such as numbers leading to code.
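Here is a minimal sketch of the sampling behavior described above: next-token probabilities are sampled rather than always taking the most likely token, so a low-probability token like "]" occasionally gets picked and redirects the trajectory. The vocabulary and logits below are invented for illustration; this is not ChatGPT’s actual distribution.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented next-token candidates after a prompt like "22, 23, 24, 25, 26, 27,".
tokens = [" 28", " 29", " 30", "]", ";", "\n"]
logits = np.array([3.0, 2.5, 2.2, 0.8, 0.3, 0.2])

def sample_next(logits, temperature=1.0):
    """Sample one token index from a softmax over the logits."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

# Run many draws: most continue the number sequence, but every so often
# the "]" token is chosen, and the continuation veers toward code.
draws = [tokens[sample_next(logits)] for _ in range(1000)]
print({t: draws.count(t) for t in tokens})
```

Once a token like "]" is emitted, it becomes part of the context for every subsequent token, so the trajectory keeps drifting toward the code-shaped region of the model’s space.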

We as humans can understand these larger-scale contextual relationships and steer the model. For example, we can ask the GPT for a male nurse and a female doctor. Sometimes, though, a request cannot produce the desired result. If you prompt an image generator with the request for "a man riding a horse", it will easily comply, but I have yet to produce anything approximating "a horse riding a man." Below are some typical results from Stability.ai:

This is a hard problem, one that search engines struggle with as well. Given the query of “horse riding a man”, Bing and DuckDuckGo both fail. Google succeeds though. Among all the pictures of men on horses, I found this in the top results:

Google’s algorithm is still better at search in ways that we don’t often get to appreciate.

AI systems are fantastic at remixing content that exists in their domains. They can’t go outside of them. And within that domain, they may not create what you expect or want. This is fundamental to their design.

The things that humans can do that these machines will struggle with are originality, where people invent new things; social information processing, where the group is able to bring many diverse perspectives to solving problems (including fact-checking the output of these machines!); and large-scale contextual thinking, the kind it takes to put together something like a book, which ties together many different threads into a coherent whole that becomes clear at the end (source).

Despite the differences between collaborating with AI and collaborating with people, there are also some significant similarities. Large language models like the GPT are mechanisms that store enormous amounts of written information, which can be accessed using fundamentally social techniques such as, well, chat. The GPT can be given prompts and asked to generate responses based on certain criteria, just as a person might be asked to contribute to a group discussion or brainstorming session.
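As a rough sketch of what “chat as an interface” looks like mechanically, the loop below keeps a running transcript and prepends it to each new prompt so the model can respond in context. This is an assumption about the general pattern, not OpenAI’s implementation, and it again uses the openai package with text-davinci-003 as an illustrative stand-in.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

history = ""  # the running transcript is the only "memory" the model sees

def chat(user_message):
    """Append the user's message to the transcript and ask the model to continue it."""
    global history
    history += f"Human: {user_message}\nAI:"
    response = openai.Completion.create(
        model="text-davinci-003",   # assumption: a completion model stands in for ChatGPT
        prompt=history,
        max_tokens=200,
        temperature=0.7,
        stop=["Human:"],            # stop before the model starts writing the next user turn
    )
    reply = response.choices[0].text.strip()
    history += f" {reply}\n"
    return reply

print(chat("Help me brainstorm titles for a paper on the history of autonomous weapons."))
print(chat("Make them a little less formal."))
```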

This is important because the process of creation rarely happens in isolation, and the ability to draw on a wide range of knowledge and experience is often crucial to producing faster and better results. Just as people can draw on the collective knowledge and expertise of a group to generate new ideas and insights, AI can draw on the vast store of information that it has been trained on to offer suggestions and alternatives.

Woody Allen once said that “80% of success is showing up.” The GPT always shows up. It is always available to work through concepts, to bounce ideas off, to help refine and expand upon them, and to work collaboratively in the creative process. This can be invaluable for creators looking for new ways to approach a task or solve a problem. Collaborative AI has the potential to revolutionize how we create and innovate. It can offer a wealth of knowledge, experience, and perspective that would otherwise be difficult to access, and can help us achieve results faster than ever before.

At the same time, it can confidently create fictions that are close enough to reality that we are biased to accept them unquestioningly. So why should we be worried about this?

The main concern is that by using AI as a collaborator, we might be led in directions that may seem reasoned or well thought out, but are actually artifacts of large amounts of text written about a subject. Conspiracy theories are a great example of this. Get the GPT into the right space and it will generate text that takes the existence of Reptilians in human disguise as settled fact. We are much more likely to fully accept the output of AI as factual, especially if it contains familiar or plausible concepts and phrasing that we have interactively created with it.

In conclusion, it is possible to collaborate with AI in much the same way as we would with another person. However, there are some key differences that must be taken into account. AI models are trained on vast amounts of text and data that may not always be accurate or up-to-date. Rather than taking the output of these models at face value, we need to put much more emphasis on critical thinking and checking sources than we do with human collaborators.