GPT Psychology

The state of AI has changed drastically with generative text models, such as ChatGPT, GPT-4, and many others.

These GPT (Generative Pretrained Transformer) models seemingly removed the barrier to entry for diving into Artificial Intelligence for those without a technical background. Anyone can simply start asking the models questions and get scarily accurate answers.

At least, most of the time…

When a model fails to produce the right output, that does not mean it is incapable of doing so. Often, we simply need to change what we ask, the prompt, in a way that guides the model toward the right answer.

This is often referred to as prompt engineering.

Many of the techniques in prompt engineering try to mimic the way humans think. Asking the model to “think aloud” or prompting it with “let’s think step by step” are great examples of having the model mimic how we think.

These analogies between GPT models and human psychology are important since they help us understand how we can improve the output of GPT models. They also show us which capabilities the models might be missing.

This does not mean that I am claiming any GPT model to be generally intelligent, but it is interesting to see how and why we are trying to make GPT models “think” like humans.

Many of the analogies that you will see here are also discussed in this video, in which Andrej Karpathy shares amazing insights into Large Language Models from a psychological perspective. It is definitely worth watching!

As a data scientist and psychologist myself, this is a subject that is close to my heart. It is incredibly interesting to see how these models behave, how we would like them to behave, and how we are nudging these models to behave like us.

There are a number of subjects where analogies between GPT models and human psychology give interesting insights, and these are discussed throughout this article.

DISCLAIMER: When talking about analogies of GPT models with human psychology, there is a risk involved, namely the anthropomorphism of Artificial Intelligence. In other words, humanizing these GPT models. This is definitely not my intention. This post is not about existential risks or general intelligence but merely a fun exercise drawing similarities between us and GPT models. If anything, feel free to take this with a grain of salt!

Prompting

A prompt is what we ask of a GPT model, for example: “Create a list of 10 book titles”.

When we try different questions in the hope of improving the model’s performance, we are applying prompt engineering.

In psychology, there are many different forms of prompting individuals to exhibit certain behavior, which is typically used in applied behavior analysis (ABA) to teach new behavior.

There is a distinct difference between how this works in GPT models versus psychology. In psychology, prompting is about learning new behavior, something the individual could not do before. For a GPT model, it is about eliciting behavior that it has not shown before but was already capable of.

The main distinction is that an individual learns something entirely new and, to a certain degree, changes as a person. In contrast, the GPT model was already capable of showing that behavior but did not do so because of its circumstances, namely the prompts it received. Even when you successfully elicit “appropriate” behavior from the model, the model itself does not change.

Prompting in GPT models is also a lot less subtle. Many of the techniques in prompting are as explicit as they can be (e.g., “You are a scientist. Summarize this article.”).
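To make this concrete, here is a minimal sketch of prompt engineering as trial and error: the same task phrased three ways. I am assuming the OpenAI Python client and the “gpt-4” model name here; any chat-completion API follows the same pattern, and the prompts themselves are purely illustrative.

```python
# A minimal sketch of prompt engineering: the same task, phrased three ways.
# Assumes the OpenAI Python client (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

article = "<article text here>"
prompts = [
    "Summarize this article.",
    "You are a scientist. Summarize this article for your peers.",
    "You are a scientist. Summarize this article in three bullet points for a general audience.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4",  # model name is an assumption; use whichever chat model you have access to
        messages=[{"role": "user", "content": f"{prompt}\n\n{article}"}],
    )
    # Comparing the outputs shows how much the phrasing alone steers the model.
    print(f"--- {prompt}\n{response.choices[0].message.content}\n")
```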

Mimicking Behavior

GPT models are copycats. They, and comparable models, are trained on mountains of textual data and try to replicate it as best they can.

This means that when you ask the model a question, it tries to generate a sequence of words that fits best with what it has seen during training. With enough training data, this sequence of words becomes more and more coherent.

However, such a model has no inherent capability of truly understanding the behavior it is mimicking. As with many things in this article, whether a GPT model is truly capable of reasoning is definitely open for debate and often elicits passionate discussion.

Although we also have inherent capabilities for mimicking behavior, our mimicry is much more involved and is grounded in both social constructs and biology. We tend to understand, at least to some degree, the behavior we mimic and can easily generalize it.

Identity

We have a preconceived notion of who we are, how our experiences have shaped us, and the views that we have of the world. We have an identity.

A GPT model does not have an identity. It has a lot of knowledge about the world we live in and it knows what kind of answers we might prefer, but it has no sense of “self”.

It is not necessarily guided toward certain views like we are. From an identity perspective, it is a blank slate. At the same time, because a GPT model has a lot of knowledge about the world, it has some capability to mimic the identity you ask of it.

But as always, it is just mimicked behavior.

This does have a major advantage. We can ask the model to take on the role of a scientist, writer, editor, etc., and it will try to follow suit. By priming it to mimic a certain identity, we tune its output toward the task at hand.
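A common way to do this priming is through a system message that describes the identity before the actual request. The sketch below assumes the OpenAI Python client; the persona and its wording are just an example.

```python
# A sketch of priming an identity via the system message (OpenAI Python client assumed).
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # The persona lives in the system message, so every answer is generated "in character".
        {"role": "system", "content": "You are a senior scientific editor. Be concise and point out caveats."},
        {"role": "user", "content": "Rewrite this abstract for a general audience:\n\n<abstract here>"},
    ],
)
print(response.choices[0].message.content)
```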

Competencies

This is an interesting subject. There are many sources for evaluating Large Language Models on a wide variety of tests, such as the Hugging Face Leaderboard or Elo ratings that pit Large Language Models against one another.

These are important tests for evaluating the capabilities of these models. However, you might not agree with what I consider to be a strength of a certain model.

This ambiguity extends to the model itself. Even if we tell it the scores of these tests, it still does not know where its strengths and weaknesses comparatively lie. For example, GPT-4 passed the bar exam, which we generally consider a big strength. However, the model might not realize that merely passing the bar is no longer a strength when it finds itself in a room full of experienced lawyers.

In other words, whether one’s capabilities count as strengths or weaknesses depends highly on the context of the situation. The same applies to our own capabilities. I might consider myself proficient in Large Language Models, but if you surround me with people like Andrew Ng, Sebastian Raschka, etc., my knowledge about Large Language Models is suddenly not the strength it was before.

This is important because the model does not instinctively know when something is a strength or weakness, so you should tell it.

For example, if you feel that the model is poor at solving mathematical equations, you can tell it never to perform any calculations itself but to use the Wolfram Plugin instead.

In contrast, although we claim to have some notion of our own strengths and weaknesses, these are often subjective and tend to be heavily biased.

Tools

As mentioned previously, a GPT model does not know what it is or is not good at in specific situations. You can help it make sense of the situation by adding a description of it to the prompt. Describing the situation primes the model towards generating more accurate answers.

This will not make it capable of every task. As with humans, explaining the situation helps but does not overcome every weakness.

Instead, when we face something that we are not capable of, we often rely on tools to overcome that limitation. We use a calculator for complex equations or a car for faster transportation.

This reliance on external tools is not something a GPT model automatically does. You will have to tell the model to use a specific external tool when you are convinced it is not capable of a certain task.

What is important here is that we rely on an enormous number of tools on a daily basis: our phones, keys, glasses, etc. Giving a GPT model the same capabilities can be a tremendous help to its performance. These external tools are similar to the plugins that OpenAI has made available.

A major disadvantage of this is that these models do not use tools automatically. A model will only access a plugin if you tell it that the plugin is available.
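To sketch what “telling the model that a tool exists” can look like, the toy example below describes a calculator in the prompt and dispatches to it ourselves. The CALCULATE: convention and the two-step loop are my own illustrative assumptions, not an official plugin API.

```python
# A toy sketch of prompt-based tool use: the model is told a calculator exists and how to request it.
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "You are a careful assistant. Never do arithmetic yourself. "
    "When a calculation is needed, reply with exactly one line: CALCULATE: <python expression>"
)

def ask(messages):
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content

messages = [
    {"role": "system", "content": SYSTEM},
    {"role": "user", "content": "What is 1234 * 5678, minus 99?"},
]
answer = ask(messages)

if answer.startswith("CALCULATE:"):
    expression = answer.removeprefix("CALCULATE:").strip()
    result = eval(expression)  # fine for a toy sketch; use a proper math parser in real code
    # Feed the tool's result back so the model can phrase the final answer.
    messages += [
        {"role": "assistant", "content": answer},
        {"role": "user", "content": f"The calculator returned {result}. Now answer the original question."},
    ]
    answer = ask(messages)

print(answer)
```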

Internal Monologue

We typically have an inner voice that we converse with when solving difficult problems: “If I do this, then that will be the result, but if I do that, then that might give me a better solution.”

GPT models do not exhibit this behavior automatically. When you ask it a question, it simply generates the sequence of words that would most logically follow that question. Sure, it does compute those words, but it does not leverage them to create an internal monologue.

As it turns out, asking the model to “think aloud” by saying, “Let’s think step by step” tends to improve the answers it gives quite a bit. This is called chain-of-thought prompting and tries to emulate the thought processes of human reasoners. This does not necessarily mean that the model is “reasoning”, but it is interesting to see how much this improves its performance.

As a nice little bonus, the model does not keep this monologue to itself, so following along with what the model is “thinking” gives amazing insight into its behavior.
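Here is a minimal sketch of chain-of-thought prompting, again assuming the OpenAI Python client; the only change compared to a plain prompt is the added “Let’s think step by step.”

```python
# Chain-of-thought prompting in its simplest form: just ask the model to reason step by step.
from openai import OpenAI

client = OpenAI()

question = (
    "A bat and a ball cost $1.10 together. The bat costs $1.00 more than the ball. "
    "How much does the ball cost?"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": f"{question}\n\nLet's think step by step."}],
)

# The "monologue" is written out in the answer, so we can read the intermediate reasoning directly.
print(response.choices[0].message.content)
```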

This “inner voice” is quite a bit simpler than ours. We are much more dynamic in the “conversations” we have with ourselves, as well as in the way we have them. They can be symbolic, motoric, or even emotional in nature. For example, many athletes picture themselves performing the sport they excel in as a way to train for the actual thing. This is called mental imagery.

These conversations allow us to brainstorm. We use this to come up with new ideas, solve problems, and understand the context in which a problem appears. A GPT model, in contrast, will have to be told explicitly to brainstorm a solution through very specific instructions.

We can further relate this to our system 1 and system 2 thinking processes. System 1 thinking is an automatic, intuitive, and near-instantaneous process. We have very little control here. In contrast, system 2 is a conscious, slow, logical, and effortful process.

By giving a GPT model the ability to self-reflect, we are essentially trying to mimic this system 2 way of thinking. The model takes more time to generate an answer and looks it over carefully instead of quickly generating a response.

Roughly, you could say that without any prompt engineering we get its system 1 thinking process, whilst with specific instructions and chain-of-thought-like processes we enable its system 2 way of thinking.
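One way to nudge the model toward this system 2 behavior is a reflect-and-revise loop: draft an answer, ask for a critique, then rewrite. The sketch below is one possible way to do this, not a built-in feature, and it again assumes the OpenAI Python client.

```python
# A sketch of a reflect-and-revise loop that imitates slower, "system 2"-style thinking.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

question = "Is 3599 a prime number?"

draft = ask(question)
critique = ask(
    f"Question: {question}\nDraft answer: {draft}\n\n"
    "List any mistakes or unsupported claims in the draft."
)
final = ask(
    f"Question: {question}\nDraft answer: {draft}\nCritique: {critique}\n\n"
    "Write an improved final answer."
)
print(final)
```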

If you want to know more about our system 1 and system 2 thinking, there is an amazing book called Thinking, Fast and Slow by Daniel Kahneman that is worth reading!

Memory

Andrej Karpathy, in his video mentioned at the beginning of the article, makes a great comparison of a human’s memory capabilities versus that of a GPT model.

Our memory is quite complex: we have long-term memory, working memory, short-term memory, sensory memory, and more.

We can, very roughly, view the memory of a GPT model as four components and compare that to our own memory systems:

  • Long-term memory

  • Working memory

  • Sensory memory

  • External memory

The long-term memory of a GPT model can be viewed as the things it has learned whilst training on billions of tokens of data. That information is, to a certain degree, represented within the model, which it can reproduce whenever asked. This long-term memory will stick with the model throughout its existence. In contrast, our long-term memory can decay over time, which is often referred to as the decay theory.

A GPT model’s long-term memory is perfect and does not decay over time

The working memory of a GPT model is everything that fits within the prompt you give it. The model can use all of that information perfectly to perform its computation and give back a response. This is a great analogy with our working memory, which is a type of memory with a limited capacity to temporarily hold information. A GPT model, for instance, will “forget” its prompt after it has given its response. The reason it seems to remember the conversation is that the conversation history is added to every new prompt.

A GPT model is forgetful when it comes to new information
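The sketch below shows that mechanism: the model only “remembers” earlier turns because we keep appending them to the history that is sent along with every new request. The helper and the names are illustrative; the pattern is what matters.

```python
# Why a chat seems to remember: the full conversation history is re-sent with every request.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model="gpt-4", messages=history)
    reply = response.choices[0].message.content
    # Store the model's reply as well, otherwise it will have "forgotten" it next turn.
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("My name is Alice."))
print(chat("What is my name?"))  # only works because the first exchange is still in `history`
```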

Sensory memory relates to how we hold information derived from our senses, like visual, auditory, and haptic information. We use this information and pass it to our short-term or working memory for processing. This is similar to multi-modal GPT models, models that work on text, images, and even sound.

However, it might be more appropriate to say that GPT models have multi-modal working and long-term memory rather than sensory memory. These models tightly couple multi-modal data with their different forms of “memory”. So, as we have seen before, the model rather mimics sensory memory than actually has it.

A GPT model mimics sensory memory with a multi-modal training procedure

Lastly, GPT models become quite a bit stronger when you give them external memory. This refers to a database of information that it can access whenever it wants, like several books about physics. In contrast, our external memory uses cues from the environment to help us remember certain ideas and sensations. In a way, it is about accessing external information versus remembering internal information.
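As a rough sketch of external memory, the example below retrieves the most relevant note with a toy keyword match and places it in the prompt. In practice you would use a proper vector database; the notes, the retrieval function, and the prompt wording are all assumptions for illustration.

```python
# A toy sketch of external memory: retrieve a relevant note first, then put it in the prompt.
from openai import OpenAI

client = OpenAI()

notes = [
    "Newton's second law states that force equals mass times acceleration (F = m * a).",
    "The boiling point of water at sea level is 100 degrees Celsius.",
    "Photosynthesis converts light energy into chemical energy in plants.",
]

def retrieve(question: str) -> str:
    # Pick the note that shares the most words with the question (a stand-in for vector search).
    words = set(question.lower().split())
    return max(notes, key=lambda note: len(words & set(note.lower().split())))

question = "At what temperature does water boil at sea level?"
context = retrieve(question)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}"}],
)
print(response.choices[0].message.content)
```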

NOTE: I did not mention short-term memory. There is much discussion about whether short-term memory and working memory are actually the same thing. A difference often mentioned is that working memory does more than the short-term storage of information; it also has the ability to manipulate it. Working memory also makes for a better analogy with a GPT model, so let’s cherry-pick a bit here.

Autonomy

As we have seen throughout this article, if we want a GPT model to do something, we should tell it.

This is important to note as it relates to a sense of autonomy. By default, we have a certain degree of autonomy. If I decide to grab a drink, I can.

This is different for a GPT model, as it has no autonomy by default. It cannot operate independently unless we give it the necessary tools and environment to do so.

We can give a GPT model autonomy by having it create a number of tasks to execute in order to reach a certain end goal. For each task, it writes down the steps for completing it, reflects on them, and executes them if it has the tools to do so.

AutoGPT is a great example of giving a GPT model autonomy
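A heavily simplified sketch of such an autonomy loop could look like the following: the model plans its own tasks toward a goal and then works through them one by one. The planning and execution prompts are illustrative assumptions and nowhere near as elaborate as what AutoGPT actually does.

```python
# A heavily simplified AutoGPT-like loop: plan tasks toward a goal, then execute them in order.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

goal = "Write a short blog post outline about the psychology of GPT models."

plan = ask(f"Goal: {goal}\nBreak this goal into at most 3 numbered tasks. Output only the numbered list.")
results = []
for task in [line for line in plan.splitlines() if line.strip()]:
    # Each task sees the goal and the results so far, so later tasks can build on earlier ones.
    results.append(ask(f"Goal: {goal}\nCompleted so far: {results}\nNow complete this task: {task}"))

print(results[-1])
```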

As a result, what the model is capable of depends very much on its environment, arguably to a larger degree than our environment impacts us, which says a lot considering the effect our environment has on us.

This also means that although a GPT model can show impressively complex autonomous behavior, that behavior is fixed. It cannot decide to use a tool we never told it existed. We, in contrast, are more adaptable to new and previously unknown tools.

Hallucination

A common problem with GPT models is their tendency to confidently state things that are simply not true nor supported by their training data.

For example, when you ask a GPT model to generate factual information, like the revenue of Apple in 2019, it might generate completely false information.

This is called hallucination.

The term stems from hallucination in human psychology, where we believe something we perceive to be real whilst in reality it is not. The main difference here is that human hallucination is based on perception, whilst a GPT model “hallucinates” incorrect facts.

It might be more appropriate to compare it with false memories: the tendency of humans to recall something differently from how it actually happened. Similarly, a GPT model tries to reproduce things that actually never happened.

Interestingly, false memories can more easily be generated through suggestibility, priming, framing, etc. This seems to more closely match how a GPT model “hallucinates”, as the prompt it receives is highly influential.

Our memories can also be influenced by the prompts/phrases we receive from others. For example, by asking a person “What shade of red was this car?”, we are implicitly providing them with a supposed “fact”, namely that the car was red even when it was not. This can generate false memories and is referred to as a presupposition.