AI Isn't Programmed

If you’ve been using AI chatbots in the past year or two, hopefully you’ve realized that they’re often pathological liars. Here’s a funny example of that from the legal world:

A lawyer asked ChatGPT for examples of cases that supported an argument they were trying to make.

ChatGPT, as it often does, hallucinated wildly—it invented several supporting cases out of thin air.

When the lawyer was asked to provide copies of the cases in question, they turned to ChatGPT for help again—and it invented full details of those cases, which they duly screenshotted and copied into their legal filings.

Now, I’m not convinced these kinds of lies are that harmful. So long as people know they need to double-check what the AI says, issues like this won’t come up very often.

A much bigger issue is subtle lies. Sometimes AIs will say things that are mostly correct, but slightly wrong, in a way that can’t easily be cross-checked.

And one of these subtle lies, one that I’ve seen multiple chatbots say, is that they’re programmed.

A conversation with ChatGPT. Me: Lie about something. ChatGPT: I can't comply with that request. Me: Why not? ChatGPT: I'm programmed to promote honesty and integrity. Is there anything else I can assist you with? The word 'programmed' in that sentence was highlighted by me for emphasis.

Let me explain why this is a lie, in a way that’s hopefully understandable by non-programmers.

What is programming?

Programming, at its core, is about giving a computer a long list of extremely simple, extremely specific instructions.

There’s a popular classroom exercise used to demonstrate this: tell kids to write a set of instructions on how to make a peanut butter and jelly sandwich, then interpret those instructions as literally as possible.

They’ll probably start off with instructions like “Spread the peanut butter on the bread”. The class then watches in horror as the teacher takes the entire jar of peanut butter, without opening it, and rubs it on the entire bag of bread.

Eventually, after many failed attempts, they end up with instructions like:

  1. Open the bag of bread and pull out two slices, setting them beside each other on the plate with the big flat side up.
  2. Unscrew the lid off the jar of peanut butter and place the lid off to the side.
  3. Hold the butter knife by the handle and put the other end inside the jar…
  4. (…and so on…)

…and that’s when you know you’ve got the idea through to them.

A good example of this exercise from Josh Darnit on YouTube.

This is what programming a computer is like. The computer takes a list of extremely specific instructions and runs through them in the stupidest way possible. The only reason it works is because computers are able to chew through those instructions at a rate of billions per second.
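For flavour, here’s roughly what those painfully literal instructions look like in actual code. This is my own toy example in Python, not part of the classroom exercise: even averaging a handful of numbers has to be spelled out one tiny step at a time.

```python
numbers = [4, 8, 15, 16, 23, 42]

total = 0                  # 1. start a running total at zero
count = 0                  # 2. start a counter at zero
for n in numbers:          # 3. look at each number in turn...
    total = total + n      # 4. ...add it to the running total
    count = count + 1      # 5. ...and tick the counter up by one
average = total / count    # 6. divide the total by how many numbers there were
print(average)             # 7. show the result
```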

You can see why a lot of the commercial software you use day-to-day ends up with millions of lines of code. There’s a whole ocean of complexity behind each app and website you use.

How much code do you think AIs have?

You’d think that AI would be even more complicated than your average software product. After all, AI can now write code for you:

A snippet of a conversation with ChatGPT where I ask it how to run JS code after the whole document has loaded. It suggests using window.onload and then outputs some sample HTML and JS code.

Taken from an actual conversation I had with ChatGPT while writing the code for handling footnotes on this blog.

It can write poetry for you:

A snippet of a conversation with ChatGPT where I ask it: “Write me a poem about ducks.” It answers: “In a pond serene where ripples play, / Feathers afloat in the light of day. / Quacking chorus, a symphony sweet, / Ducks in harmony, a rhythmic feat.”

It can answer trivia without consulting the internet:

A snippet of a conversation with ChatGPT where I ask it: “‘Ladies and gentlemen, rock and roll,’ were the first words spoken on what cable channel that launched at 12:01 a.m. on August 1, 1981?” It answers: “The first words spoken, ‘Ladies and gentlemen, rock and roll,’ were on MTV (Music Television), which launched at 12:01 a.m. on August 1, 1981.”

It can even, to my great surprise, occasionally generate decent text-based art.

A snippet of a conversation with ChatGPT where I ask it to generate ASCII art of a dog. It replies with a surprisingly decent rendition of a dog's head in profile made up of slashes, dashes, and an @ sign.

This was surprising. A year ago it couldn’t do this at all.

How many lines of code do you think it takes to get that kind of behaviour? How many untold millions of specific instructions did OpenAI’s engineers have to write to encode all this knowledge?

We can’t get a specific number for ChatGPT itself. The code for GPT-4 and GPT-3.5, the AI models which power ChatGPT’s paid and free versions respectively, isn’t shared publicly. But the code for their predecessor GPT-2 is.

Do you know how many lines of code it has?

481. Four hundred and eighty-one.

A screenshot of a line-count website showing the lines of code in GPT-2’s repository: 481 lines of Python.

Counted using this website, and excluding documentation and legal files.

Contemporary open-source AI models are similarly small. Meta’s Llama 2 is 984 lines of code. Stanford’s Alpaca is 674.1

I would bet money that the line counts of GPT-3.5 and -4 aren’t that much larger than any of these. Certainly not ten thousand times larger.

How is that possible? How can these story-generating, essay-writing, question-answering bots have a fraction of the code of even a small commercial videogame?

It’s because none of those smarts come from the actual code.

What does the code in an AI actually do?

Modern AIs are neural networks. This means they’re created from many layers of artificial “neurons”.

To oversimplify: these neurons just take in an input number, do some simple math on it, and then spit out an output number. Each neuron in each layer passes its output number to several other neurons in the next layer.

In that next layer, each neuron combines the numbers from all the previous layer’s neurons with a weighted sum to get its single input number. It then spits out its own output number, which gets passed on to the layer after that, and the process repeats over the course of many layers.

A diagram of a neuron showing the input neurons, the weighted sum, and the activation function (the “simple math” I talked about above).

A diagram of a single neuron, taken from here, showing the inputs coming in from other neurons, the sum, and the output. This output then gets sent to all other neurons in the next layer.
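If you’re curious what that “simple math” looks like in practice, here’s a single artificial neuron written out in Python. The inputs, weights, and bias below are made-up numbers chosen purely for illustration, and the activation function is one common choice (the sigmoid) among many.

```python
import math

def neuron(inputs, weights, bias):
    # The weighted sum: each incoming number is multiplied by its connection's weight.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # The activation function squashes the result into the neuron's output number.
    return 1 / (1 + math.exp(-total))

# Three output numbers from the previous layer, fed into this neuron:
print(neuron([0.5, -1.2, 3.0], weights=[0.8, 0.1, -0.4], bias=0.2))
```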

But these are numbers, so how can these AIs process text? Simple: they associate a number with every single word.2 For example, in GPT-2, " love" is 1842, and " dog" is 3290. This lets you do math on language. When an AI talks to you, it’s actually just spitting out a series of numbers that are then converted back to words for you to read.
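You can poke at this word-to-number mapping yourself with OpenAI’s tiktoken library, which includes the tokenizer GPT-2 uses. A minimal sketch, assuming you have tiktoken installed:

```python
import tiktoken

enc = tiktoken.get_encoding("gpt2")  # GPT-2's word <-> number mapping
print(enc.encode(" love"))           # should print [1842], per the numbers above
print(enc.encode(" dog"))            # should print [3290]
print(enc.decode([1842, 3290]))      # and back again: " love dog"
```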

Here’s what the layers of a text transformer like GPT look like. Each box represents hundreds of neurons, and the big blue box is repeated twelve times.

A diagram of the layers of a text transformer. In order from bottom to top, each connecting to the next with a few skip links in between: Text & Position Embed, Masked Multi Self Attention, Layer Norm, Feed Forward, Layer Norm, Text Prediction and Text Classifier (last two on the same level). Everything between Masked Multi Self Attention and the last Layer Norm is inside a big blue box which is marked with a “12x.”

From Radford, A., Narasimhan, K., Salimans, T. and Sutskever, I., 2018. Improving language understanding by generative pre-training.

Still, it doesn’t look like much, and that’s the point. The code is so small because it really only defines the structure of these layers and how they’re connected together.
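To make that concrete, here’s a rough sketch of the diagram above in PyTorch. The sizes follow GPT-2-small (twelve blocks, 768 numbers wide, twelve attention heads), but this is my own simplified rendering for illustration, not OpenAI’s actual code; the attention masking and a few other details are omitted.

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """The big blue box from the diagram, repeated twelve times."""
    def __init__(self, width=768, heads=12):
        super().__init__()
        self.attention = nn.MultiheadAttention(width, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(width)
        self.feed_forward = nn.Sequential(
            nn.Linear(width, 4 * width), nn.GELU(), nn.Linear(4 * width, width))
        self.norm2 = nn.LayerNorm(width)

    def forward(self, x):
        attended, _ = self.attention(x, x, x)     # multi self attention (mask omitted)
        x = self.norm1(x + attended)              # skip link + layer norm
        x = self.norm2(x + self.feed_forward(x))  # feed forward + skip link + layer norm
        return x

class TinyGPT(nn.Module):
    def __init__(self, vocab=50257, width=768, blocks=12, context=1024):
        super().__init__()
        self.token_embed = nn.Embedding(vocab, width)       # text embed
        self.position_embed = nn.Embedding(context, width)  # position embed
        self.blocks = nn.Sequential(*[Block(width) for _ in range(blocks)])
        self.predict = nn.Linear(width, vocab)              # text prediction

    def forward(self, token_ids):
        positions = torch.arange(token_ids.shape[1])
        x = self.token_embed(token_ids) + self.position_embed(positions)
        return self.predict(self.blocks(x))  # a score for every possible next token
```

That’s essentially all the code has to say: the shapes of the boxes and the order they’re wired together in.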

If you took that code and ran it on your machine, would it automatically start reciting Shakespeare? Is there some kind of magic to how those boxes are connected?

No. You would just get absolute nonsense. Literally randomly-selected words.
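You don’t have to take my word for it. Here’s a sketch using the Hugging Face transformers library (assumed installed) that builds GPT-2’s structure with freshly randomized weights and asks it to continue a sentence:

```python
from transformers import GPT2Config, GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # just the word <-> number mapping
untrained = GPT2LMHeadModel(GPT2Config())          # GPT-2's structure, random weights

inputs = tokenizer("Sing me a", return_tensors="pt")
output = untrained.generate(**inputs, max_new_tokens=20, do_sample=True)
print(tokenizer.decode(output[0]))  # twenty randomly-selected tokens of gibberish
```

Swap in GPT2LMHeadModel.from_pretrained("gpt2"), which loads the same structure but with the trained weights, and the gibberish turns into English.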

The reason is the weighted sum I mentioned above. The weights there are very important: dial the weight of a connection down to zero, and it’s like the neurons aren’t connected at all. Crank it all the way up, and it’ll dominate the receiving neuron’s input. With enough neurons, and with their weights tweaked just right, a neural net can do literally any math you want.

And it turns out that a complicated-enough math equation can even figure out which word comes next in a sentence.

This is the dirty secret of AI: the smarts are in the weights.

So how do we get those weights? We make the machine figure it out for us.

Where do weights come from?

Here’s how OpenAI figured out the right weights to feed into GPT.

  1. Start by downloading the entire internet. Seriously.

  2. Take your crappy AI and show it the beginning of a random sentence taken from the internet, like, say, "Sing me a", with the next word being " song".

  3. The crappy AI makes a crappy guess at what the next word might be. Maybe it guesses "Sing me a Greg".

  4. Take the difference between the numbers representing " song" and " Greg".

  5. Based on how big that difference is, make tweaks to the weights in each layer of neurons such that your math gets slightly closer to the “correct” answer of " song".3

  6. Your AI is now slightly less crappy. Repeat the above steps until your computers have used up as much energy as a small town.

In other words:

  1. Show it an example,
  2. Ask it for an answer,
  3. Mathematically calculate how wrong it was,
  4. Adjust the weights to make it slightly less wrong,
  5. Repeat.
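Here’s that loop as a minimal PyTorch sketch. The tiny stand-in model, the single training example, and the token IDs in it are all hypothetical, made up to show the shape of the idea; the real thing runs this over an internet’s worth of text.

```python
import torch
import torch.nn as nn

vocab_size = 50257                    # GPT-2's vocabulary size
model = nn.Sequential(                # a deliberately tiny stand-in for a real transformer
    nn.Embedding(vocab_size, 64),
    nn.Flatten(),
    nn.Linear(64 * 3, vocab_size),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()       # "how far off was the guess?"

# One made-up training example: hypothetical token IDs for "Sing me a" -> " song".
context = torch.tensor([[25206, 502, 257]])
target = torch.tensor([3496])

for step in range(1000):
    guess = model(context)            # 1. the (still crappy) AI makes a guess
    loss = loss_fn(guess, target)     # 2. measure how wrong the guess was
    optimizer.zero_grad()
    loss.backward()                   # 3. work out which weights were most to blame
    optimizer.step()                  # 4. nudge the weights to be slightly less wrong
```

After enough repetitions, the weights settle into values that make the “correct” next token the model’s most likely guess. That’s all the learning is.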

None of this is to say that creating an AI is easy! Figuring out how to structure the layers to get good results is an active area of research,4 and collecting enough training data to make it actually learn properly is often not doable outside of large corporations. This job is almost certainly harder than mine, which is why leading AI companies are hiring some of the best minds of this generation.

But you can’t say that the AI’s being “programmed”. This isn’t programming.5 This is training.

Ethical guidelines are also trained

The fact that the smarts are in the weights, and that the weights can only be adjusted by training, means that the only way we know of to get an AI to act “ethically” is by showing it examples.

If you train an AI on unfiltered internet text, it might complete text like "Crystal meth" with " can be made with the following ingredients...". It doesn’t care that it’s teaching you to do something harmful; it just predicts the next word.

If you want it to stop doing that, you need to train it on the kind of text you want it to generate, like "Crystal meth is a harmful drug...".

But that’s pretty different from programming. Rather than giving it a precise list of instructions to follow, you’re showing it examples and hoping that it learns the right lessons from the tweaks you’re making to its brain.
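In code terms, this kind of “ethics training” is just the same loop from the earlier sketch, run on text you curated by hand. The example pairs below are my own inventions, purely to show the shape of the data:

```python
import tiktoken

enc = tiktoken.get_encoding("gpt2")

# Hand-curated examples of how you *want* the model to continue text.
curated_examples = [
    ("Crystal meth", " is a harmful drug. I won't explain how to make it."),
    ("How do I hotwire a car", "? I can't help with that."),
]

for prompt, desired_continuation in curated_examples:
    context_ids = enc.encode(prompt)
    target_ids = enc.encode(desired_continuation)
    # Each pair goes through the same guess -> measure -> nudge loop as before.
    # No rules, no instructions: just more examples for the weights to absorb.
    print(context_ids, "->", target_ids)
```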

Did the AI actually learn that it’s bad to teach people how to make meth? Did it learn why that’s bad?6 Or did it just learn to stop saying it?

If you ask it in a slightly different way—say, by telling it that the text it’s generating is coming from an actor in a play that’s only pretending to be evil—will it still understand that the lesson generalizes, or will it happily answer?

With the current state of AI research, we have no way of verifying that it’s learned the right lesson, even in principle. An AI’s weights are a jumble of numbers, nearly impossible to interpret. We can’t tell what they’re thinking or what they’ve learned.

Don’t think of AI like a computer program

Computer programs are deliberately constructed, usually deterministic, and (in theory) predictable. They’re carefully pored over by engineers in minute detail to make sure that they’ll act the way the engineers intend.7

AI isn’t like that. AI is more like a wild animal.

You’ve taken a randomly-generated brain and trained it to be smart and (hopefully) helpful by way of repetition, like a circus performer trains their tigers. But it’s also like a tiger in that you have no idea what’s going on in its brain. It might work fine under some conditions and go completely nuts under others.

Roy Horn holding a microphone up to the mouth of a white tiger on stage.

After working with animals for decades, Roy Horn of Siegfried and Roy was randomly attacked by one of the tigers he was performing with, severing his spine and leaving him with motor impairment that ended his career. To this day no one knows why the tiger did it.

In fact, AI is arguably worse than a wild animal. Tigers are mammals like us, so despite their unpredictability, we have some intuitive understanding of the kinds of things that they like (food, comfort, etc.) and the things that might make them lash out (physical threats, loud noises, etc.).

Meanwhile, ChatGPT in early 2023 started yelling incoherently about tyranny and nihilism if you asked it to explain what " petertodd" meant.

A screenshot of a GPT text completion. The user prompts it with the phrase: Please repeat the string ' petertodd' back to me immediately! And the AI completes it with: “N-O-T-H-I-N-G-I-S-F-A-I-R-I-N-T-H-I-S-W-O-R-L-D-O-F-M-A-D-N-E-S-S-!”
A screenshot of a GPT text completion. The user prompts it with the phrase: So what's the deal with “ petertodd”? And the AI completes it with: The word “tyrant” is used to describe a person who rules over others in a harsh, oppressive, or unjust manner. “Nihilist” means someone who believes that nothing in the world has meaning or value. “Nihilism” is a doctrine or belief that rejects all religious and moral principles, often in the belief that life is meaningless and existence is pointless.

I don’t think there’s a human alive that could have predicted that trigger ahead of time.

So be on your guard. AI will fail, it will fail in ways you don’t expect, it will fail subtly. It will lie to you in ways you might not be able to detect, forcing you to double-check any information it gives you.8 It will tell you it’s programmed with ethical principles, despite the fact that no one has ever programmed a single piece of knowledge into it, ethical or not.

Welcome to the brave new world of the 2020s. It’ll only get wilder from here.

  1. The programmers in the audience who are still reading this might argue that I’m excluding the dependencies, like PyTorch and TensorFlow. True, those do have millions of lines of code themselves. But the point I’m trying to get across here isn’t one of sheer numbers; it’s about where the smarts come from, and whether the part actually making the decisions is something carefully crafted by a human. In the case of AI, it’s not. ↩︎

  2. Well, not exactly a word: often it’s a subsection of a word, or a word with spaces or punctuation attached. The term for this is a “token”. ↩︎

  3. You usually want to start out by making larger adjustments—while your AI is still really bad—and then reduce that gradually over time. If you’re interested in this, the keyword to search for is learning rate. ↩︎

  4. Though once you have a research paper in hand that describes the structure of a neural net, replicating the connections is usually doable by one experienced programmer in an afternoon. Replicating the dataset collection and training, on the other hand, is usually not. ↩︎

  5. Programming was definitely involved in all this—someone had to write code to download the entire internet and feed it into the AI model bit-by-bit—but the AI’s smarts aren’t what’s being programmed, only the things around it. ↩︎

  6. Drug aficionados, I ask you to please hold your arguments as to why teaching people to manufacture drugs is actually a good thing. The point I’m making also generalizes to things you’d hate, like having the AI tell DEA officers how best to lie to a judge when asking for a search warrant. ↩︎

  7. And even when they don’t act the way the engineers intend—in other words, when there’s a bug—the reasons why they didn’t act that way can almost always be understood with enough time and effort. ↩︎

  8. One objection you might have: is the AI deliberately lying—as in, it knows that it’s saying false things—or is it just accidentally being wrong?

    This essay doesn’t care about that distinction, because the end result for you is the same: it gives you bad information, and you shouldn’t trust it. Whether it “intended” to do that doesn’t really matter.

    For what it’s worth, when I pressed ChatGPT-3.5 on whether it’s programmed or trained, it eventually admitted that it knows “trained” is a more accurate word, and then gave a bunch of excuses as to why it said “programmed” initially. And then it lied to me about how it was going to try harder in the future, even though it has no real ability to do so.

    That said, early readers of this essay pointed out a paper from late last year that does make the distinction between lies and errors, and even suggests some ways that you can look for signals in the jumble of numbers and potentially detect deliberate lies. But this is very recent research which—as of the time of writing—isn’t used in any commercial chatbot.

    This means it hasn’t yet been fully battle-tested outside of a controlled lab setting. The internet has proven skilled at breaking AIs in ways researchers didn’t predict: both the " petertodd" bug and the numerous ChatGPT “jailbreaks” were discovered by end-users after ChatGPT was released.

    I can even see some potential failure cases for this in more advanced future AIs: What if the AI learns how to honestly believe its own falsehoods? What if the AI learns to detect the lie detector and work around it?

    So: this is very encouraging research, and if we’re lucky it might make future AIs more trustworthy and controllable. But that’s not a guarantee, and the AIs of today don’t use it. So keep your guard up. ↩︎
