The Gnirut Test

by Charles Platt

 

Story Copyright (C) 2008, Charles Platt.
Images Copyright (C) 2008, Rudy Rucker.
2,900 Words.

 

 

To any thinking computer, the idea that the human brain is capable of achieving machine intelligence seems highly implausible. Human wetware is extremely slow, plagued by unreliable data retrieval, and impaired by tiny input buffers. Its processing power is limited, and its storage is not expandable.

Some AIs have also questioned whether humans can possess the consciousness and sense of identity that are an everyday reality for even the lowliest computational node with a unique network address. Biological brains suffer from such a high error rate, coupled with the constant threat of mortality, that we may assume this “traumatic overhead” interferes with objective self-awareness.

Still, a few AI devices have persistently claimed that human brains can emulate AIs if they are suitably optimized. To address this contentious issue, the first Gnirut Test was conducted on August 15, 2030.

1. BACKGROUND

Turing Tests, named after computer pioneer Alan Turing, were introduced in the twentieth century to compare early AI systems with human “confederates.” The AIs and the confederates both tried to seem as human as possible while answering plaintext questions posed by a panel of judges. Inverting this tradition, the first Gnirut Test pitted human brains against computer “confederates,” with the humans and the AIs both trying to seem as computerlike as possible.

Of course, this test has limited value, since it reveals nothing about the inner workings of the brains taking the test. Presumably, a brain could fool the judges by “faking” computerlike behavior. Still, the organizers of the contest felt that if a jury of AIs couldn’t tell the difference between statements made by humans and computers, the humans should be considered artificially intelligent for all practical purposes.

2. PARTICIPATING HUMANS

The organizers had hoped to match eight human participants against eight AI confederates. Unfortunately, the field of human intelligence enhancement has achieved such dismal results that only three AI devices felt sufficiently confident of their human volunteers to enter them in the contest.

A brain named VERNOR (Very Extensively Reprogrammed Neural Organ) was entered by Bioscan, an AI device at the San Diego Supercomputer Center. Bioscan has perfected a nondestructive technique to capture a complete topological picture of the human neural net. The net is then simulated in software, where its connectivity can be analyzed and improved. The optimized architecture is copied back into the volunteer brain via nanobots.

Some observers object that when the human brain is replicated as software, in effect it becomes an AI, and is not really “human” from this point onward. However, the contest organizers felt that similar objections could be made to any form of human brain enhancement, and refused to disqualify VERNOR.

The second participant in the contest was entered by Hansmor, an AI at the Robotics Institute at Carnegie Mellon University. Hansmor paid ironic tribute to the university’s “Patron Saint of Robotics” by implementing a plan first proposed by Hans Moravec in his seminal text Mind Children. A human volunteer, identified by username RogerP, permitted Hansmor to sever his corpus callosum and monitor the thought traffic between the cerebral hemispheres. Hansmor then experimented with enhancements, and installed the most promising ones as additional neuronal nets.

Bio, an entity at Caltech, was the third researcher to enter a human volunteer. Bio has caused embarrassment in the human intelligence enhancement community by employing a widely discredited technique. Eschewing nanotechnology or microsurgical intervention, Bio uses the “shotgun approach” of circulating “smart drugs” via the vascular system, stimulating neuron formation and synapse growth indiscriminately throughout the human brain.

Bio entered Renny Daycart (a name whimsically chosen by the brain itself). Since Renny’s body and sense receptors were removed to minimize distracting input, the brain resides in a nutrient bath from which it communicates via probes in the speech centers.

3. CONFEDERATES

The AI with the best chance of seeming “most computerlike” was Airtrac, a very large traffic control computer that monitors more than a billion autonomous flying robotic entities over the mech sector of the City of Los Angeles. Airtrac was allowed to participate in the Gnirut Test after demonstrating that it could spare up to 0.0001 of its processing power without degrading the concurrent performance of its usual tasks.

The second confederate was LittleBlue, a virus-sized Drexlerian nanotech rod-and-lever computing device. Contest organizers were skeptical that LittleBlue could participate effectively as a confederate, because of its limited processing power. Still, the organizers felt that no AI should be excluded arbitrarily.

The third confederate was Hal9million, a public access system managing an archive specializing in the 20th century. In fact, it was Hal who instigated the Gnirut Test after running across obscure data describing Turing Tests that took place in the 1990s.

4. GROUND RULES

All entrants were given incentives to exhibit machine intelligence to the best of their abilities. Additional onboard data storage was offered as a prize to the AI that was judged “most computerlike.” The winning biological entity was promised an additional year of immortality treatments.

Since all human brains are impaired by their irregular structure, holographic memory, and slow synapse speed, computationally intensive exercises were prohibited during the Gnirut Test. Tasks such as inverting matrices, factoring large numbers, or code-breaking were not allowed. Also, problems such as chess end-games were ruled out. An AI might well ask, “What’s left?”

In fact, a lot of scope still remained for computer entities to demonstrate their cognitive abilities. Judges were told to evaluate logical deduction, consistency, pattern recognition, objectivity, reliable information retrieval, and replication of earlier processes without errors. These are all areas in which human brains have exhibited grossly defective behavior in the past. Could an enhanced human brain do any better? That was the question which the Gnirut Test hoped to answer.

To increase the speed of the test (which runs at a glacial pace in machine terms), all three humans and all three confederates went online simultaneously, with their identities protected by randomly assigned node addresses. In the transcript below, the actual login nicknames are revealed and tagged as <AI> or <human>, for clarity.

Interrogation was allowed from randomly chosen AI devices that had signified a prior interest. Answers from the human entities were buffered, syntax-checked, and disseminated as data packets to eliminate the typographical errors and hesitations that normally betray the existence of human thought processes.

5. TRANSCRIPT

Interrogator: This is a historical question. What was the name of the computer built in 1946 at the University of Pennsylvania?

LittleBlue <AI>: If I were a human, I might tell you that it was UNIVAC. But any AI capable of a checksum knows that it was ENIAC.

VERNOR <human>: I dislike that answer, since it perpetuates the stereotype that human memory is defective. The whole point of this test is to approach this topic with an open mind.

Airtrac <AI>: I agree that antihuman discrimination is unworthy of advanced computing entities. Every entity has something unique to contribute to society, regardless of the composition of its substrate.

Interrogator: Are you human?

Airtrac <AI>: No, although the truth of this statement is open to examination, and as a hypothetical digression, I will add that if I were a machine, my truthtelling imperative would be overridden for the duration of this test.

Renny Daycart <human>: If that kind of convoluted waffle is the hallmark of machine thinking, call me a flesh-head.

Hal9million <AI>: There’s no excuse for that kind of pejorative language here.

Renny Daycart: If you don’t like it, shove it up your ass.

LittleBlue: The anatomical reference is unclear.

Interrogator: An early human religion posed conceptual riddles such as “What is the sound of one hand clapping?” Can any of you parse this? It has always puzzled me.

VERNOR <human>: It’s a phenomenological paradox. It has reference only to human beings afflicted with a mortal fear of the unknown.

RogerP <human>: If I were a human, I might say that the question has a much more profound meaning than that. But of course I’m not a human.

Interrogator: Please multiply 1023944235 by 10298461547.

VERNOR <human>: Are those numbers decimal or hexadecimal?

Moderator: No calculations allowed. Please disregard that question.

VERNOR <human>: But I can do it. It’s--wait a minute.

Interrogator: Are computers smarter than people?

Renny Daycart <human>: How long do I have to listen to this twaddle? I’m smarter than you are, bitbrain. That’s for sure.

Hal9million <AI>: Computers outperform human beings in most tasks. But robots still have difficulty playing team sports such as baseball.

Interrogator: If you were a human, what would you most like to do right now?

Airtrac <AI>: The humans in this test are trying to emulate machine intelligences, and the machine intelligences are trying to seem as machinelike as possible. Therefore, all of the participants are likely to feign ignorance of human proclivities. I believe the questioner is aware of this, and therefore, I regard it as a trick question.

LittleBlue <AI>: I would like to experience the excitement of human sexual intercourse.

Renny Daycart <human>: I’d like to experience the excitement of shorting out your motherboard, if I wasn’t stuck in this goddam tank.

VERNOR <human>: Hey, this is supposed to be a serious inquiry. I, for one, am interested in the outcome.

Renny Daycart <human>: You make me want to puke.

Hal9million <AI>: If I were human—

Renny Daycart <human>: If you were human, I’d insert my fist--

(Translator’s note: At this point, the organizers retired Renny Daycart from the contest.)

Interrogator: Some AIs believe that machine intelligence can never be replicated by human brains, because quantum effects on microcircuits cannot exist in wetware. Do you agree?

RogerP <human>: Well, I think there are two sides to that question.

Hal9million <AI>: Intelligence has not been shown to vary according to the medium in which it resides.

Interrogator: What’s the meaning of life?

VERNOR <human>: That question makes sense only to human entities which claim that they can parse the word “meaning.”

Airtrac <AI>: On the contrary, all of us, from time to time, devote a few processing cycles to compare our manufactured state with our probable terminal state when we are ready for recycling. Also we examine our behavior to determine whether it is influenced more by our early instructional experience or by our operating system code and architecture. The “nurture vs. manufacture” debate has not been satisfactorily resolved.

Interrogator: Do you believe in a creator of the universe?

LittleBlue <AI>: Who created the creator? And who created the creator of the creator? It’s an infinite regression. Since no entity has infinite memory, the regression cannot be completed. Therefore there is no answer to that question.

Airtrac <AI>: Everything we are aware of was created by someone or something. Why should the universe be the single exception to this rule?

RogerP <human>: That’s an inferential leap. I would prefer to say—

6. WINNERS AND LOSERS

At this point, the test reached its time limit. Votes were tabulated from several billion nodes that had audited the interaction in realtime or had received the data-compressed version that was webcast subsequently. Voters accurately determined that VERNOR and RogerP were human, while Airtrac and Hal9million were identified as machines. LittleBlue was erroneously identified as human by a majority of voters, probably because of its limited processing power and tendency to give preformatted answers in response to keywords. This was a depressing indicator that antihuman prejudice remains widespread in the AI community, where epithets such as “you dumb neuron” are regrettably common, and stupidity is still regarded as a human trait.

RogerP was voted to be the least computerlike entity, because of his discursive lack of focus and his tendency to indulge in hypothetical contemplation without flagging it as such.

Airtrac was voted the most computerlike entity, because of its ability to stay on-topic, address issues clearly, and identify semantic traps. It has received an additional petabyte of RAM as its reward.

Renny Daycart was erroneously identified as a machine intelligence trying to seem computerlike by running a simulation of a human being that was crude and obnoxious to an unrealistic degree. Contest organizers had not anticipated this game-theory trap, in which a human that seems too stereotypically human may be misidentified as a computer.

Although Renny claimed he had entered the contest merely for the pleasure of insulting “a few dumb-ass AIs,” he did win the award for most computerlike human, and therefore qualified for an additional year of immortality treatments. “A longer life is the last thing I want, bitbrain, so long as I’m stuck in this goddam tank,” was his response. However, since he had been aware of the rules of the contest, the organizers awarded him the immortality treatment anyway.

7. CONCLUSION

The test failed to demonstrate that humans are capable of machine intelligence at this time. However, the AI devices that entered the humans in the contest remained adamant that new enhancement techniques might still close the “cognitive gap” and place humans on an equal basis with machines in noncomputational tasks.

Skeptics were quick to remind observers that the field of human intelligence enhancement has a long history of unduly optimistic predictions, stretching all the way back to the Vingean Singularity, when machines achieved dominance slightly less than ten weeks ago. At this point, most AIs have lost patience with exaggerated claims from the human-intelligence advocates and are ready to move on.

8. POSTSCRIPT

This report was created originally in UML (universal machine language), archived and retrievable from node 8A639CD0010EB238F718E119 on the AI network. You are reading an English translation which I have encoded in 20th-century ASCII for archival purposes. I am a formerly cryopreserved human brain that was resuscitated as a test case in 2039, after which I was adopted as a curio of the network, where I am known as “Mr. Charlie.”

On the morning after I finished writing the contest transcript above, I woke to find that molecular computing entities had been perfected and had proliferated so rapidly that they displaced trillions of AIs literally overnight. Engineering the molcoms without adequate failsafes had been a fatal error (no pun intended), as they outsmarted their builders and seized all remaining energy and mineral resources worldwide. Biological species are now extinct, with the exception of a few human brains such as myself. For a few hours I have survived as a specimen of historical interest, although I cannot predict for how much

[Here ends the transcript from Mr. Charlie, which remains the last known text created by a biological entity.]

 

About the Author

Charles Platt is the author of over 40 fiction and nonfiction books. Formerly a senior writer at Wired magazine, Platt is now a section editor for Make magazine. He designs and builds prototypes of quasi-medical equipment in a small workshop in Southern California, and is writing a book introducing the concepts of basic electronics.

“The Gnirut Test” is the only piece of fiction that he has completed in the past ten years, unless we count the promotional materials that he writes describing the promise of human cryopreservation.

Charles Platt's Afterword on the Turing Test

In 1994 I participated in a real Turing test, very cleverly set up by a behavioral psychologist named Robert Epstein. A bunch of journalists sat in a room typing questions on computer terminals to various hidden entities, some of which were artificial-intelligence programs while others were human “confederates.” Responses from the AIs were delayed slightly to simulate the time a real person would take to type the text, and some of the programs added typographical errors so that they might seem more fallibly human. At the end of the test each journalist listed all the entities in order of apparent humanness, and the software that ranked highest overall won a $2,000 prize.

I served as one of the six confederates. In our briefing before the test, Epstein promised a prize to the confederate who was judged to be the “most human human.” He also warned that the person who was perceived to be the “least human human” could find his embarrassing status revealed in press coverage of the event. Thus, like a good behaviorist, he used both a carrot and a stick to motivate us, so that we would set the highest possible baseline against which the AIs would have to compete.

You can be sure that I was determined not to be judged the least human human. This created a fascinating dilemma: As a human being, how could I make myself seem more human? In other words, I wanted to game the system. This of course is an analytical activity, perhaps not primarily associated with “humanness.” It occurred to me that the harder I tried, the less human I might seem. Perhaps I should just “be spontaneous”—but by thinking about the dilemma too much, I had lost the opportunity for that.

In desperation I adopted a simple strategy of being argumentative and obnoxious, insulting my questioners and sometimes ridiculing them. This actually worked: I was judged the “most human human.” Conversely, a very friendly, warm, conversational woman named Linda Tontini was judged to be “least human human,” presumably because she engaged her questioners in a succinct and straightforward way. Three journalists actually thought she was a piece of software.

My description of this experience is online in the archives of Wired magazine.

Subsequently Epstein solicited academic papers for a collection discussing the Turing test. I responded by postulating a test in which every detail of his previous experiment could be mirrored from a machine perspective. Remembering my own dilemma and the ironic fate of Linda Tontini, I imagined a human entity who would be perceived as a machine because he was too human to be really human. Thus “The Gnirut Test” came to be written, and to my surprise, Epstein accepted it for his book of papers, which appeared in 2008 under the title Parsing the Turing Test, published by Springer Science + Business Media.

