Copyright (C) Rudy Rucker 2021.



EPR Paradox Solved, Lifebox Review, Mail Art, and More

Tuesday, November 15th, 2005

Superagent and tummler John Brockman announces that David Deutsch has won the Edge of Computation Prize. (“Tummler” means “One, such as a social director or entertainer, who encourages guest or audience participation.” Think Robin Williams or Milton Berle.)

Deutsch looks really cool, like Dracula maybe, or like an East European heavy-metal rock-star. I’ve never met him, though I’d like to. I went online and found some of his more recent papers.

I “read” “Information Flow in Entangled Quantum Systems” yesterday. As I think I’ve mentioned before, when it comes to discussing QM (quantum mechanics), I always feel like a one-legged man at an ass-kicking contest. It’s not so much that I read a truly heavy-duty quantum-information paper like this as I ice-skate it, speeding across the stretches of hide-thin Heisenberg matrices lest I fall through into the frigid waters of despair. Deutsch rewards the intrepid skater with tasty diagrams and primo buzzwords. What I found really mind-blowing is that he solves the Einstein-Podolsky-Rosen (EPR) paradox!

The way the EPR runs is this: let systems Q2 and Q3 interact near the bottom of the page at time t1, then move them very far apart as time runs up the page, then perturb the systems by Rx(theta) and Rx(phi), and then, before any signal would have had time to move from Q2 to Q3, quickly at time t2 use Q1 to do a measurement on Q2 and use Q4 to do a measurement on Q3. We’ll find a surprising correlation between the results unearthed by Q1 and Q4, and we’ll feel like there must have been some action at a distance or magic-string entanglement to make the info in perturbed Q2 match the info in perturbed Q3. But how can the signal have traveled faster than light?

Deutsch’s solution to the EPR puzzle is so wonderfully simple, so full of the “DUH of Science” that I think it must be true. Deutsch points out that, duh, for you to be sure that the measurements found by Q1 and Q4 match, you have to, duh, bring Q1 and Q4 into proximity, and that the actual “magical” match-up between the Q1 and Q4 data only occurs when Q1 and Q4 are close together — which allows for the explanation that that match-up occurs because of a quantum interference process between the wave functions of Q1 and Q4. In essence, the state of Q3 gets hidden in the state of Q4, and is transported over to interact with the part of the state of Q2 hidden in Q1. We don’t notice this because the info that travels with Q4 is “invulnerable to decoherence but absolutely inaccessible to local experiments.” I think the guy is seriously onto something; to me the insight seems to be on a level comparable to Einstein noticing, hmmm, there’s no absolute way to synchronize clocks.
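Deutsch’s Heisenberg-picture bookkeeping is beyond a quick sketch, but the correlation that makes the paradox feel paradoxical is easy to compute in the ordinary Schrödinger picture. Here’s a little numpy sketch (my own illustration, not Deutsch’s formalism): measure a maximally entangled singlet pair along axes tilted by theta and phi, and the correlation comes out -cos(theta - phi), no matter how far apart the two measurements happen.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_op(angle):
    """Spin measurement operator along an axis in the x-z plane."""
    return np.cos(angle) * sz + np.sin(angle) * sx

# Singlet (maximally entangled) state of the pair: (|01> - |10>)/sqrt(2)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def correlation(theta, phi):
    """Expectation of the product of the two local measurement results."""
    op = np.kron(spin_op(theta), spin_op(phi))
    return np.real(singlet.conj() @ op @ singlet)

# Quantum mechanics predicts E(theta, phi) = -cos(theta - phi)
for theta, phi in [(0.0, 0.0), (0.0, np.pi / 2), (0.3, 1.1)]:
    assert abs(correlation(theta, phi) + np.cos(theta - phi)) < 1e-12
```

The point Deutsch adds is that you only ever *see* this correlation after hauling the two measurement records back together, which is where the quantum interference gets its chance to act.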

Other news. The Lifebox, the Seashell, and the Soul got a nice review in the San Francisco Chronicle on Sunday.

Richard Bacchus sent me some Pig Chef pictures from his travels in the South.

Blogger Ken Nickerson sent me a link to a site that makes an ever-changing collage display of an author’s name, tiled with images of his or her bookcovers found on Amazon.

John Shirley sent me a link to a now-do-you-finally-get-it illustration of the fractal concept: a looped zoom into a hand with five fingers, with five smaller fingers on each finger tip, with five etc. It would be cooler if the hand were moving, and flexing and changing position as you zoom in — which is, come to think of it, what a nonlinear fractal like the Mandelbrot set actually does. See also my old link to the zoomquilt.

I wrote about people with hands like this down to a few levels in Saucer Wisdom; Hans Moravec also describes devices like this, which appear in Paul DiFilippo’s novel Fuzzy Dice. By the way, I just noticed that Paul has a cool website with galleries of his richly satiric mail art.

Computers Will Be Alive and Intelligent

Friday, November 11th, 2005

These are partial notes of my remarks during my debate with Noam Cook of the Philosophy Department, at San Jose State University, November 10, 2005. I forgot to bring my recorder, so I didn’t manage to tape it for podcast.

It was a nice event, with a big and enthusiastic audience, maybe 150 people, who asked a lot of good questions at the end. I was anxious about the event. They say that people have a phobia of public speaking — how about public speaking with a guy there to contradict everything you say! But Noam was a gentleman, and it went smoothly. I’ve incorporated some of my responses to his points in these notes.

Summary. I wish to argue that humans will eventually bring into existence computing machines that are as alive and intelligent as themselves.

After all, why shouldn’t there be alternate kinds of physical hardware which successfully emulate the behavior of humans? The only hard part is finding the right software for these systems. And even if the software is very hard to figure out, we have some hope of finding it by automated search methods.

Definition. A computation is a deterministic process that obeys a finitely describable rule. Saying that the process is deterministic means that identical inputs yield identical outputs. Saying that the rule is finitely describable means that the rule has a finite description such as a program or a scientific theory.

I believe in what I call universal automatism: It’s possible to view every naturally occurring process as a computation. For a universal automatist, all natural processes are deterministic and finitely describable — the weather, the stock market, the human mind, the course of the universe. The laws of nature are a kind of computer program.
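To make “finitely describable deterministic rule” concrete, here’s a toy sketch (my own illustration): an elementary cellular automaton whose entire “law of nature” fits in eight bits, yet which can drive an arbitrarily large world forward forever, with identical inputs always yielding identical outputs.

```python
# Rule 110: a finitely describable deterministic rule.  Eight bits of
# "law" fully determine the evolution of an arbitrarily large world.
RULE = 110

def step(cells):
    """One deterministic update of a 1-D cellular automaton (wraparound)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Identical inputs always yield identical outputs (determinism).
world = [0] * 20 + [1] + [0] * 20
a = step(step(world))
b = step(step(world))
assert a == b
```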

In a broad sense, any object is a computer, but for this debate, let’s use “computer” in the narrow sense of being a manmade machine. Definition. A computer, or computing machine, is a device brought into being by humans using tools and used to carry out computations.

In arguing that we can eventually produce computers equivalent to humans, it’s useful to break my argument into three steps.

(1. Automatism) A human mind is a deterministic finitely complex process; that is, human consciousness is a computation carried out by the body and brain.

(2. Emulation) The human thought process can in principle be emulated on a man-made computer; that is, we can carry out equivalent computations on systems other than human bodies.

(3. Feasibility) We will in fact figure out the design for such a computing system; that is, humans and their tools will eventually bring into existence such human-equivalent systems.

I realize that many people don’t want to accept that (1. Automatism) they are deterministic computations. This is the point I really have to argue for.

Looking ahead, if I accept that (1. Automatism) I’m a computation of some kind, then it’s relatively easy to believe that (2. Emulation) this same computation could be run on a man-made machine, for computers are so programmable and so flexible.

It’s also not so hard to believe the third step, which says (3. Feasibility) if it’s in principle possible to run a human-like computation on a machine, then eventually we fiddling monkeys will figure out a way to do it. It’s only a matter of time; my guess is a hundred years.

1. On Automatism. In arguing for the idea that our mind is a kind of computation, note that our psychology rests on our biology which rests upon physics. And physics itself is, I believe, a large, parallel, deterministic computation.

The uncertainties of quantum mechanics aren’t a lasting problem, by the way; the present-day interpretation of quantum mechanics is simply a scrim of confusion and misinterpretation overlaying a crystalline deterministic substrate which will eventually come clear.

So if we grant that human consciousness is a particular kind of physical process occurring in human bodies, and if we grant that physics is made up of deterministic computations, then we have to conclude that consciousness is a kind of computation.

Let me forestall three objections.

Free Will Objection: If I’m a deterministic computation, why can’t anyone predict what I’m going to do?

Answer to Free Will Objection: When I say the human mind can be regarded as a deterministic computation, I am not denying the experiential fact that our minds are unpredictable. The fact that you can’t predict what you’ll be doing tomorrow or next year is fully consistent with the fact that you are deterministic.

The impossibility of predicting your future results from two factors. Most obviously, my future is hard to predict because I can’t know what kind of inputs I’m going to receive. But, and this is a key point, I’d be unable to predict the workings of my mind even if all my upcoming inputs were known to me. I’d be unpredictable even if, for instance, I were to be placed in a sensory deprivation tank for a few hours.

A gnarly computation such as is carried out by a human mind is irreducibly complex; it doesn’t allow for any rapid shortcuts, not even in principle. This is a fact that computer scientists have only recently begun talking about; see Stephen Wolfram’s book A New Kind of Science, and my own book, The Lifebox, the Seashell, and the Soul.
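Here’s a sketch of what “no shortcut” means in practice, using Wolfram’s Rule 30 (a toy version of my own): the center column of its output is believed to be computationally irreducible, so the only known way to get bit n is to simulate all n steps — there’s no formula that jumps ahead.

```python
def rule30_center(steps):
    """Center column of Rule 30.  Believed computationally irreducible:
    no known shortcut beats simulating every single step."""
    cells = {0}  # positions of the 1-cells on an unbounded line
    column = []
    for _ in range(steps):
        column.append(1 if 0 in cells else 0)
        lo, hi = min(cells) - 1, max(cells) + 1
        # Rule 30: new cell = left XOR (center OR right)
        cells = {
            i for i in range(lo, hi + 1)
            if (i - 1 in cells) != ((i in cells) or (i + 1 in cells))
        }
    return column

# To predict bit n, we know of nothing better than computing bits 0..n-1 first.
print(rule30_center(16))
```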

The mind is a deterministic computation, but there are no simple formulas to predict the mind.

“Chinese Room” Objection: Even if a computer can be programmed to emulate all of human behavior, it is still only putting on an act, and it has no internal understanding, knowledge, or intentionality. Consider, for instance, the IBM chess-playing program Deep Blue. It excels at playing chess, but it doesn’t “know” anything about chess.

Answer to the “Chinese Room” Objection. To the extent that we can give precise descriptions of our psychological states, we can create AI programs to emulate them. A goal becomes a target state the program wants to reach. A focus of attention is a particular pointer the program can aim at a simulation object. An emotional makeup becomes a system of weights attached to various internal states. Conscious knowledge of something may involve a kind of self-reflexive behavior, in which the system models the world, a self-symbol, the relationship between the world and the self-symbol, and the self-symbol considering the relationship between the world and the self-symbol. To the extent that this can be made precise it can be modeled. Present-day AI programs lack many of the internal aspects of human psychology simply because these aspects have not yet been well-enough described. But in principle, it can all be modeled.
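A toy sketch of that translation, with every name hypothetical (this is a cartoon of the idea, not a working AI): goal as target state, attention as a pointer, emotions as weights, and a self-symbol sitting inside the world model so the system can consider itself.

```python
# Toy sketch (all names hypothetical) of psychological states as program state.

class Agent:
    def __init__(self, goal):
        self.goal = goal                  # a goal: a target state to reach
        self.attention = None             # focus of attention: a movable pointer
        self.emotions = {"fear": 0.1, "curiosity": 0.8}  # weights on states
        self.world_model = {}             # the system's model of the world
        self.world_model["self"] = self   # a self-symbol inside that model

    def attend(self, thing):
        """Aim the attention pointer at a simulation object."""
        self.attention = thing

    def satisfied(self, state):
        """Has the goal's target state been reached?"""
        return state == self.goal

a = Agent(goal="door_open")
a.attend("door")
assert a.attention == "door"
assert a.world_model["self"] is a   # self-reflexive: the model contains the modeler
assert not a.satisfied("door_closed")
```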

Supernaturalism Objection: Given that humans are the Crown of Creation, God surely loves us so much that we’ve been equipped with some vital essence that wholly transcends the petty, deterministic bookkeeping of computational physics.

Putting much the same notion more secularly, we might say that there are oddball as-yet-unknown physical forces involved in life and in consciousness; perhaps quantum computation has something to do with this, or dark energy, or instantons on D-branes in Calabi-Yau spaces.

Answer to the Supernaturalism Objection: We are already on the point of building physical quantum computers. In the long run, any possible kind of physics should be something that we can put into the devices we make. And who’s to say that God’s special vital essence doesn’t dribble into our devices as well? Zen Buddhists tell the story of a monk who asks the sage, “Does a stone have Buddha-nature?” The sage answers, “The universal rain moistens all creatures.”

2. On Emulation. The second step of my argument is very easy to defend; we’ve known since the 1940s that there is not an endless staircase of more and more sophisticated computation. Relatively simple devices such as a desktop computer are already “universal computers,” meaning that, in principle, your desktop machine can emulate the behavior of any other system. It’s just a matter of equipping your computer with a lot of extra memory, getting it to run fast, and giving it the right software.
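A minimal illustration of universality-as-emulation (the instruction set here is invented for the sketch, not any real machine): one machine, the Python interpreter, runs a description of a different machine, a toy register machine, and thereby does everything the toy machine can do.

```python
# One machine emulating another: a toy register machine run by Python.
# The instruction set ('inc', 'dec', 'jnz') is made up for illustration.

def run(program, registers):
    """Execute a toy register machine: ('inc', r), ('dec', r), ('jnz', r, addr)."""
    pc = 0
    while pc < len(program):
        op = program[pc]
        if op[0] == "inc":
            registers[op[1]] += 1
        elif op[0] == "dec":
            registers[op[1]] -= 1
        elif op[0] == "jnz" and registers[op[1]] != 0:
            pc = op[2]   # jump if the register is nonzero
            continue
        pc += 1
    return registers

# "Add r1 into r0," written in the emulated machine's own language:
add = [("dec", 1), ("inc", 0), ("jnz", 1, 0)]
assert run(add, {0: 3, 1: 4})[0] == 7
```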

I estimate the actual computational power of the human brain as being on the order of a quintillion primitive operations per second using a quintillion bytes of memory. In scientific nomenclature, this would be an exaflop exabyte machine.

Extrapolating from present trends, we may well have desktop computers of this power by the year 2060. Of course the hard part is figuring out how to write the human-emulation software for the exaflop exabyte machine.
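That 2060 guess can be sanity-checked with back-of-the-envelope arithmetic; the starting figure and doubling time below are assumptions for the sketch, not measurements.

```python
import math

# Back-of-the-envelope check of the mid-century estimate (assumed numbers):
# a 2005 desktop at ~10^10 ops/sec, doubling every ~2 years, climbing
# toward the brain's assumed 10^18 ops/sec (an exaflop).
start_ops = 1e10        # assumed 2005 desktop
target_ops = 1e18       # exaflop: the brain estimate above
doubling_years = 2.0    # assumed Moore's-law pace

doublings = math.log2(target_ops / start_ops)   # ~26.6 doublings needed
year = 2005 + doublings * doubling_years
print(round(year))  # prints 2058 with these assumptions -- ballpark of 2060
```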

3. On Feasibility. There are all sorts of ways of making computers, and some hardware designs are better for certain kinds of problems than others. In making a computer that emulates humans, we face two interlocking problems: finding the best kind of hardware to use, and finding appropriate software to run on that hardware.

These are exceedingly hard problems. My guess is that it will take at least another hundred years for full parity between humans and certain machines. Possibly I’m too pessimistic.

There are certain limitative logic theorems, such as Gödel’s incompleteness theorem, suggesting that it’s in principle impossible to write software equivalent to a human mind. But these theorems do not rule out the possibility of managing to evolve or to stumble upon human-equivalent software. All that is ruled out is the ability to truly understand how the software works.

Evolution, also known as genetic programming, is a widely used technique in computer science. Although artificial evolution doesn’t find the very best algorithms, it is able to find acceptably good algorithms, often in a reasonable amount of time.
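Here’s a minimal genetic-algorithm sketch of “acceptably good algorithms in a reasonable amount of time” (a deliberately tiny toy problem): mutation plus selection closing in on a target bitstring without anyone designing the answer.

```python
import random

# A minimal genetic algorithm: evolve bitstrings toward a target by
# mutation and selection.  Nobody writes the answer; the search finds it.
random.seed(1)  # deterministic run, for reproducibility
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(genome):
    """Count of positions matching the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Flip each bit with small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

# Random initial population, then rounds of selection plus mutation.
pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]

assert fitness(pop[0]) >= 9  # evolution finds a near-perfect genome
```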

It may be that we don’t need to use an evolutionary process. Wolfram argues that whenever you can find a complicated program to do something, you can also find a concise and simple program to do much the same thing. If we had a better idea about the kinds of programs that might generate human-level AI, we might achieve a rapid success simply by doing an exhaustive search through the first, say, trillion possible such programs.

It may in fact be that human-style mentation is something that nature “likes” to produce; it could be a ubiquitous pattern like cycles or vortices or pairs of scrolls. In this case the search might not take so long after all.

Debate Today: Can Machines Think?

Thursday, November 10th, 2005

Thursday, November 10th

Topic: “Will Computers Ever be Alive or Intelligent?”

Dr. Noam Cook, “No.”

Dr. Rudy Rucker, “Yes.”

4:30pm – 6:00pm

Martin Luther King Library, Room 225B

(On the San Jose State University campus at 4th St. & San Fernando St., San Jose, California.)

B there or B square.

Slightly nervous about this. I have to speak first, which is never an advantage. Will try and podcast it.

Interview for Ylem with Loren Means on Recent Books

Wednesday, November 9th, 2005

I did an email interview this morning. By the way, you can find all of my email interviews online, if you’re interested.

Q 168. I’m back, Rudy, I interviewed you twenty months ago for my art magazine, Ylem: Journal of Artists Using Science and Technology, and I’m getting ready to publish our interview. I want to bring it up to date with some follow-up questions. What became of that nonfiction book you were talking about, The Lifebox, the Seashell, and the Soul?

A 168. The Lifebox came out last month to my customary blizzard of zero publicity, other than the web page I made for it. It’s a very nice-looking well-produced book with lots of great illos, and I said everything I wanted to about the meaning of computation. But I’m not seeing any reviews of it, other than in the three publishing trade-zines. And, sigh, Amazon posted the one bad review on their page. And I haven’t seen it for sale in many stores. And the science-book-clubs haven't yet picked it up. And we haven't gotten any deals from foreign publishers. So right now I’m discouraged.

I have a sense that the market for science books these days is geared towards books having precisely one idea, which is then buttressed with water-cooler-level discussions of pre-digested news stories that have been fed to us by the media. The recent best-seller Blink is a self-reflexive example of this: Blink says that your very first and most shallow idea on any topic is correct. You don’t even have to read it! Just put it on your shelf. Got it. Like a white-on-white painting with maybe one red dot. No time wasted. And I’m also up against Ray Kurzweil’s snake-oil-sales-pitch The Singularity is Near, which pretty much says, “Buy my book and you’ll live forever.” The guy even sells vitamins.

The Lifebox, the Seashell, and the Soul: What Gnarly Computation Taught Me About Ultimate Reality, the Meaning of Life, and How to Be Happy is ruminative and dialectic in approach; I weigh opposing views of reality and come up with a synthesis or, if that’s not possible, consider holding both views simultaneously. Also I commit the high crime of joking around rather than being deadly serious.

The title itself is a dialectic triad, by the way. The Lifebox thesis is that there can be computer models of human minds, the Soul antithesis is that I feel myself to be a vibrant energy-filled being and not a machine, the Seashell synthesis is that the computational patterns found on cone shells are examples of the gnarly deterministic-but-unpredictable computations that could indeed inhabit my skull.

My book is profound and deeply human, but it’s not very blink at all. Stephen Wolfram likes it in any case; he says it’s a more important book than my publishers or I realize.

Q 169. And you said that after the Lifebox book you were going to write a novel about two crazy mathematicians?

A 169. Yes, Mathematicians in Love. I just finished making the final revisions. It’ll be out from Tor Books in, I suppose, summer or fall of 2006.

I had fun with this novel. For one thing, it gives science-fictional life to some of the ideas in my Lifebox tome. For instance I have my two guys making universal paracomputers out of naturally occurring things like vibrating drumheads. Now, in Lifebox I argued that most naturally occurring processes are, although deterministic, impossible to effectively predict by dint of being gnarly computations. But, just for kicks, I set most of Mathematicians in Love in a world where this isn’t the case, and it is actually possible to build a device that predicts the weather, the stock market, other people’s decisions and so on.

Another thing I do in Mathematicians in Love is to satirize our current government, and to have my characters bring it crashing down. President Joe Doakes goes to jail. I found that very satisfying.

Yet another angle is that I use a notion about parallel worlds which I developed in The Lifebox, the Seashell, and the Soul; my idea is that reality might be a series of parallel universes which are linearly ordered, with each one slightly better than the one before, like successive drafts of a novel.

One thing that would pep up my career would be if Michel “The Eternal Sunshine of the Spotless Mind” Gondry actually makes his movie of my novel Master of Space and Time. He’s had the option for two years and is presently working on a script with Dan “Ghost World” Clowes. Michel says he’d like to cast Jim Carrey and Jack Black as the book’s two mad scientist pals. (I know, I already mentioned this on the blog, but hey, this thought is my current security blanket.)

Q 170. What’s next?

A 170. I’m not sure. I’m not up for another big project yet, what with my would-be-earthshaking tome being ignored, and with the long haul of my latest novel just ended. Call it post-partum blues.

Right now I’m writing some short stories. With a couple more, I’ll have enough for a new story anthology. So that’d be an easy book to get out. This summer I read Charles Stross’s great Accelerando, and that got me interested in tackling the Singularity head on; I’m writing two stories about the Singularity right now, and I already sold one of them to Asimov’s.

I’ve been cleaning out my basement this week and putting all my old boxes of papers in one specified corner. Maybe that means I’m getting ready to write a memoir. I’d sort of like to take on that project, but the publishers I’ve mentioned it to aren’t very interested. I also have a few hundred thousand words of journals that could perhaps be published in some form.

The other possibility is, of course, that I write a new novel. I’d been thinking of doing a sequel to Frek and the Elixir, but I don’t have a killer idea for that yet. For Frek itself, I used Campbell’s monomyth structure, one stage per chapter, which gave the book a nice form, but it sort of makes the book a finished whole, so I’m not exactly sure how to do a sequel. Or I could do a fifth Ware book, not that the first four are flying off the shelves anymore.

Another thought is to drop writing for awhile, and wait for the world to catch up with me. I enjoy painting; maybe I could pick up a few bucks doing that. One result of cleaning out the basement is that I’ll have room for a metal rack on which to store all of my family’s accumulated paintings — we’re all artists. With a place to store the accumulated works, I’d be a step closer to painting a bit more. One thing I might do soon is to start selling posters of my paintings on the web. That’d be more painless than trying to get a show.

I piss away a lot of time blogging. On the one hand, it's an obsessive-compulsive disorder, and I think I'll be cutting down to one blog post per week so as to have more time. But I do see blogging as an art form; I like for each entry to be a nicely balanced combination of words and pictures — I shoot a lot of pictures with my digital camera, and I recycle old ones, like I'm doing today. I’ve even gotten into podcasting, that is, posting my lectures and spoken interviews online. In my own diffuse and unpredictable fashion, I seem to be creating an electronic lifebox copy of my mind. On the third hand, why bother? Better to go outside and ride my bike.

