Note on Games, AI, and the Philosophy of Mind

Rudy Rucker

Ravenna, 2001

 

I was thinking about human intelligence, about the inevitability of intelligence in evolving systems.  The idea is that intelligence arises from an ability to mentally simulate the world.  Simulation is a powerful method for improving your chances of survival; if you can simulate, you can plan and anticipate.  Once you simulate the world, an ability for abstract thought follows almost automatically.  One of the symbols you acquire is that of your Self.  And only one step beyond that lies consciousness, at least if you agree with Antonio Damasio’s The Feeling of What Happens (Harcourt, 1999).

In his book, Damasio argues that consciousness amounts to forming a mental image of yourself observing the world.  It’s not enough just to have an image of yourself in the world.  To get consciousness, you go a step beyond that and add on a second-order symbol of the operating-system self that looks at the simulation.  In terms of how computer game designers think, the lower-level self symbol within the simulation is the “player,” and the second-order self symbol that watches a copy of the simulation is the “user.”

In Damasio’s account, consciousness arises through this sequence:

 

(0) Being active in the world,

(1) Being able to perceive and distinguish separate objects in the world,

(2) Having a first-order simulation of the world including a “player,” that is, a self-token representing you, and,

(3) Having a second-order simulation in which a “user” representing you observes the first-order simulation.

 

Another way to put it is that in step (3) you simulate a second-order self token which mimics your behavior of observing a simulation of the world with a first-order self token.  You simulate yourself watching the world and “playing the game.”  Step (3) might arise from the need to recognize that the creatures like yourself around you are also playing the game, that is, doing (2).  Rephrasing this, once you do step (3), it’s natural to take a step (3A) in which the other intelligent agents of the world are also represented by first-order tokens, each of which is understood to have its own simulation of the world and of itself.
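
To make the layering concrete, here is a toy sketch in Python.  It is only an illustration of the stage hierarchy described above; the class and attribute names (WorldModel, player, user, and so on) are my own hypothetical labels, not Damasio’s terminology.

    # Stage (1): the agent perceives and distinguishes separate objects.
    class WorldModel:
        def __init__(self, objects):
            self.objects = list(objects)    # the perceived objects

    # Stage (2): a first-order simulation that includes a "player,"
    # a self-token standing for the agent itself.
    class FirstOrderSim(WorldModel):
        def __init__(self, objects):
            super().__init__(objects)
            self.player = "self-token"      # the agent inside its own model

    # Stage (3): a second-order simulation in which a "user" watches
    # a copy of the first-order simulation.
    class SecondOrderSim:
        def __init__(self, first_order):
            self.inner = first_order        # the simulation being observed
            self.user = "observing-self"    # the self that watches the player

    sim = FirstOrderSim(["rock", "tree", "snail"])
    conscious = SecondOrderSim(sim)
    print(conscious.user, "watches", conscious.inner.player)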

A snail doesn’t even have (1).  I’m not sure if a dog has (2) or not, maybe only fleetingly.  As I recall, in The Feeling of What Happens, Damasio talks about some brain-damaged people who have (2) but not (3).  Philosophically, this goes back to something I wrote about in Infinity and the Mind: self-awareness leads to an infinite regress.  At steps (1) and (2) we don’t have the regress, but right away at step (3) we do, because now the agent is “thinking about itself thinking,” and we can nest thought-balloons all the way down.  So once you have step (3), you inevitably have (4), (5), (6), and so on.  [In stage (4), you have a “designer”!]
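
The regress itself is easy to mimic in code: once you have an operation that wraps a simulation in one more observing self, nothing stops you from applying it again and again.  A standalone Python sketch, with all names my own hypothetical choices:

    def reflect(sim, level):
        # Wrap a simulation in one more observing self: stage (level).
        return {"level": level, "observer": "self-token", "inner": sim}

    # Stage (2): a first-order simulation containing a "player."
    sim = {"level": 2, "player": "self-token", "objects": ["rock", "tree"]}

    # Stages (3), (4), (5), (6): each wrap is one more layer of
    # "thinking about itself thinking."
    for level in range(3, 7):
        sim = reflect(sim, level)

    print(sim["level"])    # 6, and there is no reason to stop here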

It’s fitting that at the same stage (3) where we reach consciousness we introduce a kind of dynamic that leads to infinity — this is reasonable and pleasing, given that it’s so natural to think of the mind as being infinite.  The early stages beyond (3) are levels that we experience, when unpleasant, as self-consciousness and irony, or, when pleasant, as maturity and self-knowledge.

If you run the regress right out through all the natural numbers, you get a kind of enlightenment, but an illusory one, for right away you arrive at level “infinity plus one.”  (“Infinity” being taken here in the sense of “small omega.”)  The real Enlightenment is the one you can’t finish: the Big Omega regress, where Big Omega is the Absolute Infinite, the order type of the class of all ordinals.  To my way of thinking, this is akin to getting back to stage (0) again.
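
In ordinal notation, the ladder of stages looks like this (a standard picture from set theory, not anything new):

    0, 1, 2, 3, ..., ω, ω + 1, ω + 2, ..., Ω

Here ω (small omega) is the order type of the natural numbers, the first level past all the finite stages, and Ω (Big Omega, the Absolute Infinite) is the order type of the class of all ordinals.  The point is that every ordinal α you actually reach has a successor α + 1, so no completed stage can be the last one; Ω is not a reachable stage at all, but the unattainable limit of the whole process.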

Stage (0) might be thought of as experiencing the world with an empty mind, no model of it needed, no image, no notion of objects, simply the world in and of itself, letting “the world think me” instead of “me thinking the world.”