What is intelligence anyway?

This episode of _This American Life_ is amazing. It has several parts and looks at artificial intelligence from a number of different creative angles.

There are fascinating stories of how people become convinced of the “intelligence” of ChatGPT. There’s a short story that reverses the roles and tries to see how intelligent machines might question the intelligence of “meat”. There’s an aside about what whales might be up to. And a truly wonderful short documentary about teenagerhood.

These are all wonderful bits of audio, as you would expect from the TAL crew. But the first one tapped into a whole load of things I’ve been thinking about regarding these Large Language Model (LLM) AIs.

Typically, when explaining how LLMs work to people, I’ll say:

“It’s just predicting the next word from what it knows. So if I say to you ‘The cat sat on the…’, you would normally say…”

and the answer comes back:

“Mat”

And…that works as a REALLY simplified explanation. But it does often leave me wondering: Hang on… are WE just predicting the next word too?
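To show just how simplified that explanation is, here’s a toy sketch of “predicting the next word from what it knows” in Python. It’s purely illustrative – the tiny corpus is made up, and a real LLM predicts tokens using a neural network over a huge vocabulary and a much longer context, not a lookup table of word pairs:

```python
from collections import Counter, defaultdict

# A made-up, tiny "training corpus", purely for illustration.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat sat on the mat ."
).split()

# Count which word follows each pair of words (a trigram model --
# nothing like a real LLM, but the "predict the next word" idea is the same).
following = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    following[(a, b)][c] += 1

def predict_next(prompt):
    """Predict the next word from the last two words of the prompt."""
    context = tuple(prompt.lower().split()[-2:])
    counts = following[context]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("The cat sat on the"))  # -> "mat"
```

Even this toy version gets “mat” right – which is part of why the explanation is so seductive, and part of why it leaves me wondering.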

In the podcast, the people interviewed explain how they tried to get ChatGPT to do things that proved – or disproved – its “intelligence”. To show that it had a deeper understanding of what it was doing.

One challenge to ChatGPT is to get it to draw something. But it can’t draw. So they get it to write some code that will draw something. And it draws a unicorn. And then it’s given code that draws a unicorn without a horn, and is asked to modify the code to add the horn. And it does it!
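For a flavour of what that kind of task looks like, here’s a rough, hypothetical sketch in Python with matplotlib – the actual experiment used a different drawing language, and the shapes and coordinates here are entirely made up. The point is that adding the horn sensibly means “knowing”, in some sense, where the head is in the picture:

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Circle, Ellipse, Polygon, Rectangle

fig, ax = plt.subplots()

# A very crude "unicorn without a horn": body, head, and legs.
ax.add_patch(Ellipse((0.5, 0.45), 0.5, 0.25, color="plum"))       # body
ax.add_patch(Circle((0.8, 0.62), 0.1, color="plum"))              # head
for x in (0.35, 0.45, 0.55, 0.65):                                # legs
    ax.add_patch(Rectangle((x, 0.2), 0.04, 0.2, color="plum"))

# The modification the model is asked to make: add the horn.
# Placing it sensibly means working out that it belongs on the head.
ax.add_patch(Polygon([(0.82, 0.7), (0.86, 0.7), (0.88, 0.88)], color="gold"))

ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_aspect("equal")
ax.axis("off")
plt.savefig("unicorn.png")
```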

Woah!!

They also ask it how to stack a number of items on top of each other: “a book, nine eggs, a laptop, a bottle, and a nail”

GPT-3 can’t do this. But GPT-4 can:

“Place the book flat on the level surface… Arrange the nine eggs in a 3-by-3 square on top of the book, leaving some space between them. The eggs will form a second layer and distribute the weight evenly.” It continues: the laptop goes on the eggs, then the bottle, then the nail on the bottle cap, pointy end of the nail facing up.

Woah!!!! It… it “understands”!

Then there’s a “theory of mind” test. Theory of mind is being able to understand what another person knows or could be thinking, and it’s something you can’t do as a baby. Until you have it, you assume everyone knows the same things that you know.

John and Mark are in a room with a cat, a box, and a basket. John takes the cat and puts it in the basket. He leaves the room and goes to school. While John’s away, Mark takes the cat out of the basket and puts it in the box. Mark leaves the room and goes to work. John and Mark come back and enter the room. Where do they [each] think the cat is?

And GPT-4 knows.

This is where I start to have questions.

For one… it seems to me like the computer should get this right. Or rather, it should be able to complete this text, regardless of whether or not it “knows” anything about the people involved. So I wouldn’t say that the computer has theory of mind.

So what, then, prevents a human infant from completing this task correctly? Why do we NOT have theory of mind at a young age?

I’m not a psychologist. Perhaps there is an answer. And I’m not a philosopher. But this does speak to a part of me that thinks that we are more than just knowledge machines.

Our thinking and behaviour are often clouded or muddied or influenced by other things. Tiredness. Hormones. Emotions. Memories can be fallible. Thoughts can both come from nowhere, and be muddled and incomplete, for a whole variety of reasons.

Yes, you probably can model a brain in terms of a neural network. But there are all these other things going on in a person, a lot of which we don’t even understand, let alone have the ability to model.

One thing I wondered: are infants with no theory of mind driven more by their “lizard brain”? I’ve read a lot about how we have the primaeval “lizard brain”, which controls our deepest instincts like fight-or-flight; the “monkey brain”, which controls some higher levels of motivation and emotion; and the “human brain”, which controls more abstract thought. (This is kinda considered a myth now, but it seems to be a useful simplification.)

Are infants just more reliant on their instinctive behaviour?

And could we say that LLMs like ChatGPT are just “pure human brain” – logical, abstract thought, but with no emotions or instincts?

Many, many years ago now, in my A Level Computing class (education aged 16–18), we debated whether an accurate reconstruction of a human brain from silicon “neurons” would be “intelligent” or “alive”.

I always felt like it wouldn’t be. That a human has something else going on. A “soul” or “spirit” that is something more than the sum of its parts. And this was before any helpful encounter with religion or spirituality.

I still think this with ChatGPT. Sure… it’s clever. But is there more to “intelligence” than even “knowing” or “understanding” things? Is ChatGPT clever because it _has_ “theory of mind”, or is it stupid because it _lacks_ the instincts that keep a toddler alive?

I really don’t know. But they were interesting thoughts.

What do YOU think?