
Rudy Predicted Software Immortality

Monday, January 8th, 2024

We keep getting closer to simulating a human being by running a process in a computer. On the street, the dream is that we might get software immortality. And I predicted it forty years ago.


[Photo credit: Bart Nagel]

How does it work? You start with a large database on what a given person was like: what they said and wrote, how they talked, and what they did. This is what I have named a lifebox. Once you have a lifebox, the task is to equip it with an interface that allows people to talk to it. And next we want the lifebox to begin generating original material. The big surprise is that large language models and wares like ChatGPT seem to be making this happen—or at least getting a step closer. So there’s a lot of buzz.

It’s not very well known that, starting in 1980, I was one of the very first people to talk about software immortality. By virtue of hindsight, people like to imagine that these ideas were obvious or “in the air.” They weren’t. The very word “software” was little known.

I developed my ideas in the early 1970s during discussions with no less a man than Kurt Gödel at the Princeton Institute for Advanced Study—and while writing my Rutgers University Ph.D. thesis on mathematical logic. I further refined my ideas of software immortality while studying and lecturing at the Mathematics Institute in Heidelberg, Germany, on a Humboldt Foundation grant from 1978 to 1980.

I wrote my novels White Light and Software in Heidelberg. And while I was in Heidelberg I further developed my notion of a software mind in my nonfiction book Infinity and the Mind, which is still in print from the Princeton University Press.

Infinity and the Mind has some material on my conversations with Gödel about what I was then calling robot consciousness.

The last time I spoke with Kurt Gödel was on the telephone, in March 1977. I had been studying the problem of whether machines can think, and I had become interested in the distinction between a system’s behavior and the underlying mind or consciousness, if any.

What had struck me was that if a machine could mimic all of our behavior, both internal and external, then it would seem that there is nothing left to be added. Body and brain fall under the heading of hardware. Habits, knowledge, self-image and the like can all be classed as software. All that is necessary for the resulting system to be alive is that it actually exist.

In short, I had begun to think that consciousness is really nothing more than simple existence. By way of leading up to this, I asked Gödel if he believed there is a single Mind behind all the various appearances and activities of the world.

He replied that, yes, the Mind is the thing that is structured, but that the Mind exists independently of its individual properties.

I then asked if he believed that the Mind is everywhere, as opposed to being localized in the brains of people.

Gödel replied, “Of course. This is the basic mystic teaching.”

Deep stuff. And the point I’m making here is that the notion of software immortality was not some trite, obvious thing that anyone at all could have written about. I had to dig deep and think hard to work it out. It was new.

This said, I have to admit that the Wikipedia entry on “Mind Uploading in Fiction” mentions some earlier SF about the topic, and yes, it does mention my novel Software. But I want more! I do feel I can say I was the first to write about software immortality in a semi-realistic way that matches our current technology.

Three or four years after publishing Software, I expressly introduced the notion of a lifebox, and I’ve continued writing about this for forty years.

But as I say, I don’t seem to get much credit in popular articles, and usually not even a mention. As fellow cyberpunk Bruce Sterling would say, that chaps my ass.

Today’s post is designed to build up my case. I’ll include extended quotes from my writings, accompanied by comments, and as usual I’ll toss in some random images I’ve made. As I always say, at the surreal, transreal, synchro level, any image goes with any text.

My involvement with lifeboxes goes back to my Philip K. Dick Award novel Software, written in 1980 and published in 1982. A race of intelligent robots called boppers have evolved on the Moon. And the boppers want to confer immortality on their designer, Cobb Anderson. To achieve this, they extract the software from Cobb’s brain—by, in effect, eating it and analyzing the contents. And then they put the Cobb software in control of a robot body.

Here’s Cobb’s reaction when he comes to.

It was still sinking in. Intellectually he had always known it was possible. A robot, or a person, has two parts: hardware and software. The hardware is the actual physical material involved, and the software is the pattern in which the material is arranged. Your brain is hardware, but the information in the brain is software. The mind… memories, habits, opinions, skills… is all software. The robots had extracted Cobb’s software and put it in control of this robot body. Everything was working perfectly, according to plan. For some reason this made Cobb angry.

“Immortality, my ass,” he said, kicking the bathroom door. His foot went through it.

The thing that bothered Cobb the most was that even though he felt like he was all here, his brain was really inside a computer somewhere else. Where?

Suddenly he knew. The Mr. Frostee truck, of course. A super-cooled bopper brain was in that truck, with Cobb’s software all coded up. It could simulate Cobb Anderson to perfection, and it monitored and controlled the robot’s actions at the speed of light.

You’ll notice something odd here. Rather than having his software running on a device inside his body, Cobb’s software is running on a large supercooled computer inside a nearby ice-cream truck! This was my early 1980s substitute for having the process run on a microprocessor in Cobb’s robot body—or having the software run in the cloud. But for the sake of a good cyberpunk tale, it was kind of cool to have a large, slightly evil ice-cream truck following Cobb around.

In my follow-up novel Wetware of 1988, also winner of a Philip K. Dick Award, things get gnarlier. Rather than running your personality software on an old-school electronic computer, you run it on something called a Happy Cloak. (This name was, by the way, lifted from a William Burroughs novel.) A Happy Cloak is a chunk of moldy piezoplastic that acts as a big computer. So one way to achieve software immortality might be to transfer your personality onto a Happy Cloak, and then let the Happy Cloak ride around on a meat clone of your old body. And in Freeware and Realware, things get still weirder. You can read all about it in the Wares.

But let’s stick to the basic idea of modeling a human personality with software. I clarified this in my 1986 story, “Soft Death,” which appeared in the Magazine of Fantasy and Science Fiction. I think this is the place where I coined the word lifebox. The main character in the tale, one Doug Leckesh, has signed up with a company called Soft Death—who promise him immortality. He’s in the care of a tech named Lo Park.

Photo of Bruce Sterling here just for kicks. My fellow cyberpunk from the start.

Lo took a device the size of a cigarette-pack out of her desk. It had two little grilles, for microphone and speaker. “We call this a lifebox. Basically, I want you to tell it your life story. Tell everything. It takes most people a couple of weeks.”

“But…I’m no writer.”

“Don’t worry; the lifebox has prompts built into its program. It asks questions.” She flicked a switch and the lifebox hummed. “Go on, Mr. Leckesh, say something to it.”

“I…I’m not used to talking to machines.”

“What are some of the first machines you remember, Doug?” asked the lifebox. Its voice was calm, pleasant, interested. Lo nodded encouragingly, and Leckesh answered the question.

“The TV, and my mother’s vacuum cleaner. I used to love to watch the cartoons Saturday morning—Bugs Bunny was the best—and Mom would always pick that time to vacuum. It made red and green static on the TV screen.” Leckesh stopped and looked at the box. “Can you understand me?”

“Perfectly, Doug. I want to build up a sort of network among the concepts that matter to you, so I’m going to keep asking questions about some of the things you mention. I’ll get back to the vacuum cleaner in a minute, but first tell me this: what did you like best about Bugs Bunny?”

For the next couple of weeks, Leckesh took his lifebox everywhere. He talked to it at home and in the club—and when his wife Abby and his friends reproved him for ignoring them, he began talking to it in a booth at Yung’s bar. The lifebox was the best listener Leckesh had ever had. It remembered everything he told it, and it winnowed the key concepts out of his stories. Leckesh would respond to its prompts, or simply go off on tangents of his own. He hadn’t had so much fun in years.

Leckesh, by the way, is a near-anagram of Sheckley, as in Robert, my hero and mentor from age thirteen on.

I have another treatment of the “dictated lifebox” theme in my book Saucer Wisdom. This was an odd book. It was in fact a novel starring a character called Frank Shook. Supposedly Frank Shook was a frequent saucer abductee who’d arranged for a ghost writer to produce a non-fiction volume of Shook’s experiences. The name of the ghost writer character? Rudy Rucker. I told you this was an odd book!

To make things even odder, my old writer friend Gregory Gibson (shown above in full Frank Shook mode) helped me get a book deal with Wired for my Saucer Wisdom book. Wired wanted to expand from being a magazine to publishing books as well. I took Greg to a meeting with the Wired editors and convinced them that Greg really was the saucer abductee Frank Shook, and that my book was in fact non-fiction and not a novel at all. Good times.

If you read the book, you’ll figure it all out… It has several useful prefaces and introductions, two of them by Bruce Sterling.

In any case Saucer Wisdom has a very clear discussion of the lifebox notion.

The aliens show Frank Shook a mid-twenty-first-century commercial for something called a lifebox. The slogan is Remember Me. The lifebox is a little black plastic thing the size of a pack of cigarettes and it comes with a light-weight headset with a pinhead microphone, like the kind that office workers use. The ad suggests that you can use your lifebox to create your life story, to make something to leave for your children and grandchildren.

Frank gets the aliens to find an old man who is actually using a lifebox. His name is Ned. They watch Ned from the saucer. Somehow the saucer can use dimensional oddities to get very close to someone but still be invisible to them. The aliens have control over their size-scale and refraction index; they can make the saucer as tiny and transparent as a contact-lens.

White-haired Ned is pacing in his small back yard—a concrete slab with some beds of roses—he’s talking and gesturing, wearing the headset and with the lifebox in his shirt pocket. The sly saucer is able to get close enough to hear the sound of the lifebox: a woman’s pleasant voice.

The marketing idea behind the lifebox is that old duffers always want to write down their life story, and with a lifebox they don’t have to write, they can get by with just talking. The lifebox software is smart enough to organize the material into a shapely whole. Like an automatic ghost-writer.

The hard thing about creating your life story is that your recollections aren’t linear; they’re a tangled banyan tree of branches that split and merge. The lifebox uses hypertext links to hook together everything you tell it. Then your eventual audience can interact with your stories, interrupting and asking questions. The lifebox is almost like a simulation of you.

Frank Shook gets the aliens to skip forward in time until past when Ned has died. They observe Ned’s grandchildren, little Billy and big Sis. They’re playing with one of the lifebox copies that Ned left. The kids call the lifebox “Grandpa,” but they’re mocking it too. They’re not putting on the polite faces that kids usually show to grown-ups. Billy asks the Grandpa-lifebox about his first car, and the lifebox starts talking about an electric-powered Honda and then it mentions something about using the car for dates. Sis—little Billy calls her “pig Sis” instead of “big Sis”—asks the lifebox about the first girl Grandpa dated, and Grandpa goes off on that for a while, and then Sis looks around to make sure Mom’s not in earshot. The coast is clear so she asks some naughty questions. “Did you and your dates do it? In the car? Did you use a rubber?” Shrieks of laughter. “You’re a little too young to hear about that,” says the Grandpa-lifebox calmly. “Let me tell you some more about the car.”

Frank and the aliens skip a little further into the future, and they find that the lifebox has become a huge industry. People of all ages are using lifeboxes as a way of introducing themselves to each other. Sort of like home pages. They call the lifebox database a context, as in, “I’ll send you a link to my context.”

Not that most people really want to spend the time it takes to explicitly access very much of another person’s full context. But having the context handy makes conversation much easier. In particular, it’s now finally possible for software agents to understand the content of human speech—provided that the software has access to the speakers’ contexts.

I got very heavily into discussing lifeboxes in my tome, The Lifebox, the Seashell, and the Soul, published by Thunder’s Mouth Press in 2005, and now available from Transreal Books. If you go to the book page you can buy print or ebook copies or read it for free online.

By the way, the idea behind the title is that the Lifebox is a digital representation of a human, the Soul is the idea of a fully analog ethereal person, and the Seashell relates to Stephen Wolfram’s idea of generating gnarly patterns by using cellular automata rules akin to those that generate the surface patterns on cone shells. The image shown here was drawn by my daughter Isabel Rucker, who did the illos for The Lifebox, the Seashell, and the Soul.

And here’s part of the discussion of lifeboxes in that book, preceded by a photo of me and the three kids one Easter morning in the early Seventies.

One of the most venerable dreams of science fiction is that people might become immortal by uploading their personalities into some kind of lasting storage. Once your personality is out of your body in a portable format, it could perhaps be copied onto a fresh tank-grown blank human body, onto a humanoid robot, or, what the heck, onto a pelican with an amplified brain. Preserve your software, the rest is meat!

In practice, copying a brain would be very hard, for the brain isn’t in digital form. The brain’s information is stored in the geometry of its axons, dendrites, and synapses, in the ongoing biochemical balances of its chemicals, and in the fleeting flow of its electrical currents. In my 1982 cyberpunk novel Software, I wrote about some robots who specialized in extracting people’s personality software—by eating their brains. When one of my characters hears about the repellent process, “[his] tongue twitched, trying to flick away the imagined taste of the brain tissue, tingly with firing neurons, tart with transmitter chemicals.”

Let’s look at a much weaker form of copying a personality. Rather than trying to exactly replicate a brain’s architecture, it might be interesting enough to simply copy all of a person’s memories, preserving the interconnections among them.

We can view a person’s memory as a hyperlinked database of sensations and facts. The memory is structured something like a Web site, with words, sounds, and images combined into a superblog with trillions of links.

I don’t think it will be too many more years until we see a consumer product that makes it easy for a person to make a usable copy of their memory. This product is what I call a lifebox.

My idea used to be that your lifebox will prompt you to tell it stories, and it will have enough low-level language recognition software to be able to organize your anecdotes and to ask you follow-up questions. As you continue working with your lifebox, it builds up a database of the facts you know and the tales you spin, along with links among them. Some of the links are explicitly made by you, others will be inferred by the lifebox software on the basis of your flow of conversation, and still other links are automatically generated by looking for matching words.
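The link-building scheme described above is easy to sketch in code. What follows is a minimal, purely illustrative toy—the class and method names are my own invention, not any real lifebox product—showing the two simplest kinds of links: explicit links made by the speaker, and automatic links generated by matching shared words between anecdotes.

```python
from dataclasses import dataclass, field

# Tiny stopword list for illustration; a real system would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "it", "was", "on", "i", "my"}

@dataclass
class Lifebox:
    """Toy lifebox: a store of anecdotes plus links among them."""
    anecdotes: dict = field(default_factory=dict)  # id -> anecdote text
    links: set = field(default_factory=set)        # pairs of anecdote ids

    def tell(self, anecdote_id: str, text: str) -> None:
        """Record an anecdote, auto-linking it to any earlier story
        that shares a content word (the word-matching links)."""
        new_words = self._keywords(text)
        for other_id, other_text in self.anecdotes.items():
            if new_words & self._keywords(other_text):
                self.links.add((other_id, anecdote_id))
        self.anecdotes[anecdote_id] = text

    def link(self, a: str, b: str) -> None:
        """An explicit link made by the speaker."""
        self.links.add((a, b))

    @staticmethod
    def _keywords(text: str) -> set:
        """Lowercased content words, stripped of punctuation and stopwords."""
        words = {w.strip(".,!?").lower() for w in text.split()}
        return words - STOPWORDS - {""}
```

Feeding this toy the TV and vacuum-cleaner anecdotes from “Soft Death” would auto-link them, since both stories mention the vacuum.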

[As of 2023 it’s evident that, instead of interviewing the person, you can use an “unsupervised learning” approach, and evolve a neural net algorithm that examines all of the target person’s written and spoken outputs, and evolves a pattern of weights for a neural net to match the arrangements of words.]

And then what?

Your lifebox will have a kind of browser software with a search engine capable of returning reasonable links into your database when prompted by spoken or written questions from other users. These might be friends, lovers, or business partners checking you out, or perhaps grandchildren wanting to know what you were like.

Your lifebox will give other people a reasonably good impression of having a conversation with you. Their questions are combed for trigger words to access the lifebox information. A lifebox doesn’t pretend to be an intelligent program; we don’t expect it to reason about problems proposed to it. A lifebox is really just some compact digital memory with a little extra software. Creating these devices really shouldn’t be too hard and is already, I’d say, within the realm of possibility—it’s already common for pocket-size devices to carry gigabytes of memory, and the terabytes won’t be long in coming.
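The trigger-word retrieval just described sketches equally easily. Here is a hedged toy version, assuming the anecdotes live in a plain dictionary (all the names are hypothetical): comb the question for content words and rank anecdotes by word overlap.

```python
# Tiny stopword list for illustration; a real system would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "it", "was", "on",
             "i", "my", "me", "your", "about", "tell", "what"}

def keywords(text: str) -> set:
    """Lowercased content words, stripped of punctuation and stopwords."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return words - STOPWORDS - {""}

def answer(anecdotes: dict, question: str, top: int = 3) -> list:
    """Comb the question for trigger words and return the ids of the
    best-matching anecdotes, most relevant first."""
    triggers = keywords(question)
    scored = sorted(
        ((len(triggers & keywords(text)), aid) for aid, text in anecdotes.items()),
        reverse=True,
    )
    return [aid for score, aid in scored[:top] if score > 0]
```

Note that there is no reasoning here at all, just word overlap—which matches the point that a lifebox doesn’t pretend to be an intelligent program.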

There is a sense in which saving only your memories is perhaps enough, as long as enough links among your memories are included. The links are important because they constitute your sensibility, that is, your characteristic way of jumping from one thought to the next.

My expectation is that in not too many years, great numbers of people will be able to preserve their software by means of the lifebox. In a rudimentary kind of way, the lifebox concept is already being implemented as blogs. People post journal notes and snapshots of themselves, and if you follow a blog closely enough you can indeed get a feeling of identification with the blogger. And blogs already come with search engines that automatically provide some links. Recently the cell phone company Nokia started marketing a system called Lifeblog, whereby a person can link and record daily activities by using a camera-equipped cell phone.

I like the idea of a lifebox, and I’d like to have one. I envision a large database with all my books, all my journals, and a connective guide-memoir—with the whole thing annotated and hyperlinked. Eventually I’d want to throw in some photographs—I’ve taken thousands over the years. And it should be feasible to endow my lifebox with interactive abilities; people could ask it questions and have it answer with appropriate links and words. My finished lifebox might take the form of a Web site, although then there’d be the thorny question of how to get any recompense for the effort involved. A commercial alternative would be to market it as a set of files on a portable data storage device of some kind.

[I included some of this material in a paper with Leon Marvell, “Lifebox Immortality & How We Got There.” And Marvell adds some further speculations.]

[Photo credit: Bart Nagel]

I have in fact made a crude lifebox in the form of an online search engine that finds user-selected target phrases in my collected writings. See Rudy’s Lifebox online. My first stab at a personal pyramid of Cheops!

Use case: I used the Rudy’s Lifebox tool to search for “lifebox,” and I found, aha, a great post from 2019, “Talk on Lifebox for Telepathy and Immortality.” Check it out! You might think of the 2019 post as a missing addendum to today’s post; it also links to a podcast. Convenient invention, this lifebox…

And, as I said earlier, the cool thing happening now is that the chances of animating our lifeboxes into something like interactive AI are going up, thanks to Large Language Models and such tools as ChatGPT. (As mentioned above, John Walker and I posted about these new tools last month.) We’re not there yet, but it’s a big step.

In my latest novel, Juicy Ghosts, I visualized how the business of software immortality might actually play out. Here’s a scene where a woman named Mary has died on Earth. A con-man named Carson Pflug has helped Mary to set up a lifebox of herself on a server farm called Skyhive. The lifebox storage is “free.” If you agree to certain conditions…

Mary is in her lifebox self. She’s an AI, programmed in Spork, and coupled to her laboriously assembled data-base. She wears a sim that’s a texture-mapped wire-frame model of a woman living in a jive-ass virtual reality generated by the Skyhive server. Not that her AI mind thinks of it that way. To her, the scene feels authentic, in a bland way. Like being on tranks in an airport lounge.

Her room in the Skyhive cliff city is still very plain. A wooden chair, a gray floor, white walls. She goes to the balcony. Some of those so-called storks flap by. And then she sits in her chair for a very long time, thinking things over and, frankly, resting. She hasn’t had a rest like this for years. Twelve hours go by, or maybe thirty-six. No easy way to judge time here. There’s no clocks. And her virtual body never gets sore from sitting in one place.

“Would you like to personalize?” says a voice, finally interrupting Mary’s reverie. A woman’s voice echoing in her head. “Anything you want!”

“What are you?” says Mary.

“I’m your room,” says the voice. “Let’s pick some pleasant images for your walls. And I can copy furniture from your home in San Lorenzo.”

“Why bother,” Mary tells the room. “Pretty soon I’ll be in my clone.”

“No peripheral is currently registered to your account,” says the room.

Mary feels a pang of fear. Did Carson Pflug double-cross her? She wouldn’t put it past him. But she’s not gonna discuss this with the perky voice of her room.

“I think I’m getting a body soon,” says Mary. “But can I go out and look around? Just in my virtual body?”

“Feel free,” says the room. “Explore our fine amenities. And—would you be willing to answer a short survey about my service thus far?”

Mary walks to the balcony railing once again. She notices now that the cliff slants back a little, like giant stairs with really tall steps. She sees a piazza about a hundred meters down, with lifebox people walking around in sim bodies, some of them standing by a pickup truck. Might be worthwhile to find out what’s going on. So, what the hell, she jumps off the railing. It’s not like she can die, right?

Right. Although she picks up a certain amount of speed, she lands in the piazza unscathed. A crouch and a bounce. A thin lifebox woman nearby looks at her. Mimes a salute.

“Welcome to the labor pool,” says the woman. She’s awkward, with lank hair and a crooked mouth. Black narrow pants and a striped shirt. A geek.

“How do you mean?” goes Mary.

“Are you rich?” the woman asks. “Did you endow a trust fund to pay your monthly fee?”

“A friend got me in here,” says Mary. “Carson Pflug. He’s a Skyhive exec. He says he’s going to get me a clone.”

“Carson!” says the woman with a laugh. “And he told you it was a one-year free trial, right?” A hole appears in the woman’s sim-body chest. “People say I’m heartless, but I’m not,” she continues, pointing out a cartoony, dark red heart inside. “I nurse a secret passion, hmm? My name is Leeta Patel.”

Mary knows she’s recently heard that name, but just now it doesn’t click. She’s still not over her death experience. “I’m Mary,” is all she says.

“And, yes, it’s true that your first year is free,” says the woman. “But eventually everyone in Skyhive pays rent, and if they don’t have it, they do gigwork, no matter how they got here. You probably didn’t read your full end-user license agreement?”

“Did you read yours?” challenges Mary.

“Nobody’s ever read it,” says Leeta with a little laugh. “Nobody ever will. It’s a hundred million words long. Written by hateful lawyer bots. And do you trust Carson Pflug? Hah. You’re a standard Skyhive client, that’s all.” She points to a group of lifebox people around the pickup truck. “Let’s see what kind of gigs are on offer today. To help you understand what’s in store.”

The sim in the back of the pickup truck isn’t even bothering to look human. He’s a squat cube like a foot stool, with a speaker grill for a mouth. As he rattles off a list of available jobs, day-workers hop into the truck one by one. The truck’s capacity seems endless.

The workers: Winged woman with a cane. Stout man with checked pants and pinwheel eyes. Jittery fashion-model lady with green skin. Sniggering, pimply teen in a clerical collar. Spider with a woman’s head. Twelve-foot-tall man in a stovepipe hat. Talking dog. Velvet-clad lady wearing jeweler’s loupes. Hieroglyph person with an ibis head. School-marm with a fat text-book. Vagabond cupping a blue glow in his hands. Dancing man with gem-crusted skin. A bowling ball with three pale-blue eyes.

The jobs: Dragonfly drone. Slime sculptor. Teep tweaker. Non-sex worker. Spam amp. Ant herder. Rind design. Spork coach. Flicker cladder. Tuba tamer. Art emulator. Rhino horner. Dream dupe.

A chime sounds in Mary’s head. It’s Miu Miu in San Lorenzo. Miu Miu is a so-called psidot, that is, a piezoplastic device on the back of the neck of a human-body clone of Mary’s old body. Yes. It’s going to work. The psidot sends and receives signals from Mary’s lifebox, and feeds them into the clone.

“Back in action,” Miu Miu the psidot signals to Mary’s lifebox. “Your clone is ready!”

“Bye,” Mary tells Leeta. In the never-never land of the Skyhive cliff, it’s no great effort for Mary to hop a hundred meters straight up—to her flat’s balcony. She takes a seat in her wooden chair.

“Ready,” she tells Miu Miu.

And now she’s linked to her cloned body, and—what?

She’s in her house with most of the lights off—alone with Carson Pflug. She’s sitting on the couch and Carson is standing over her, taking off his shirt. It’s quite dark outside and it’s raining heavily. Mary can see her jeans and sweatshirt on the floor. She’s naked. Her body looks good. Her clone is high-end.

So, okay, lifebox Mary is processing clone-Mary’s inputs, deciding what to do next, and sending signals to her psidot device Miu Miu, who controls the clone body’s muscles. Feel, think, do. Juicy ghost.

Carson leans down to touch her. No. Mary pops to her feet and shoves Carson in the chest: a swift, implacable, abrupt, and precisely calibrated impact, pre-computed by lifebox Mary in the cloud. Carson thuds to the floor like a sack of cement.

“No sex with you,” Mary tells Carson. “Never.” She dons her jeans and sweatshirt from the floor. They’re a bit tight. Clone Mary is more turgid than wasted old Mary was.

“I don’t know why you have to be like that,” says Carson, awkwardly regaining his feet and straightening his clothes.

“Men like you never do know,” says Mary. “Shut up and let me think.”

And don’t forget: When it comes to digital immortality, Rudy Rucker got there first!

I wasn’t kidding when I wrote these stories and books. Yes, I was laughing and having a good time, but I was serious. And now it’s all coming true.

So give me some credit.

In memory of Sylvia Rucker, February 8, 1943 – January 6, 2023.

Notes on Gibson’s Sprawl Trilogy

Saturday, December 9th, 2023

As I’ve mentioned, my wife Sylvia died eleven months ago, and I have to find ways to fill the empty time at home. Somehow I still haven’t gotten back to writing stories and novels. I paint in the daytime, and I watch a fair amount of TV in the evening, but I get sick of that, so I’m reading a lot.

Recently I went back to good old Bill Gibson’s Neuromancer, and then read Count Zero and Mona Lisa Overdrive. They call this the Sprawl Trilogy now, the Sprawl being the fused coastal city that runs from Boston to Atlanta.

I read on my Kindle these days, and a nice feature of this device is that you can highlight passages, and then download the passages to a single text file. So for today’s post, I’m presenting my Sprawl highlights along with a few comments.

And I randomly put in photos from my stash. As I always say: the fundamental principle of Surrealism is that anything goes with anything. And meaningfully so.

 

I’ve used the Book Notes format for a blog post several times before. It’s a way of thinking more deeply about books that I love. Here are the six prior posts like this that I’ve done.

David Foster Wallace’s Infinite Jest.
Cory Doctorow’s Walkaway
Christopher Brown’s Tropic of Kansas
Several novels by Raymond Chandler.
Gibson’s The Peripheral.
Gibson’s Agency. This post includes comments by Gibson himself.

And now for the Sprawl trilogy! I’ve provided page numbers for the quotes, but with Count Zero, I ended up with “location numbers,” which only make sense if you’re reading a book on a Kindle. In any case, you can always find a passage in an ebook by searching for a key word or phrase.

Neuromancer

Page 8 — …her upper lip like the line children draw to represent a bird in flight.

There’s several things that make Bill’s writing great. One is that he has the ear of a poet. He turns lovely phrases akin to haiku. But there’s more than that. He’s an expert at getting you to have total sympathy for his down-and-out or drug-addicted or oppressed characters. Rooting for them! And if they’re in some way childish, so much the better. Nobody understands them! They’re just like me! Those are the kinds of characters that I like to write too.

Page 103

“How you doing, Dixie?”

“I’m dead, Case. Got enough time in on this Hosaka to figure that one.”

“How’s it feel?”

“It doesn’t.”

“Bother you?”

“What bothers me is, nothin’ does.”

“How’s that?”

“Had me this buddy in the Russian camp, Siberia, his thumb was frostbit. Medics came by and they cut it off. Month later he’s tossin’ all night. Elroy, I said, what’s eatin’ you? Goddam thumb’s itchin’, he says. So I told him, scratch it. McCoy, he says, it’s the other goddam thumb.” When the construct laughed, it came through as something else, not laughter, but a stab of cold down Case’s spine.

“Do me a favor, boy.”

“What’s that, Dix?”

“This scam of yours, when it’s over, you erase this goddam thing.”

The Dixie Flatline is one of the lovable characters here. A former “cowboy,” or cyberspace explorer, he’s known for having effectively died during one of his sessions…and then bouncing back. The sign of his being temporarily dead is that the monitor of his brain waves collapsed to zero, to a flat line.

In Neuromancer, Dixie really is dead, but in a way he’s alive…as a construct or lifebox model of himself in cyberspace. Philosophers and SF people like to debate how it would feel to be a computer simulation. And Dixie comes up with a great, original answer. SF writers can out-do philosophers. We’re conducting thought experiments.

Page 114 — Holograms twisted and shuddered to the roaring of the games, ghosts overlapping in the crowded haze of the place, a smell of sweat and bored tension. A sailor in a white t-shirt nuked Bonn on a Tank War console, an azure flash.

Nice scene-setting here. A classic motif, the game arcade. With lovely sensory overlay, and a little touch of future history.

Page 115 — crumpled yellow candy wrapper, balanced on the edge of a console, dropped to the floor and lay amid flattened butts and styrofoam cups.

More haiku-like scene setting

Page 128 — the minute, I mean the nanosecond, that an AI starts figuring out ways to make itself smarter, Turing’ll wipe it. Nobody trusts those fuckers, you know that. Every AI ever built has an electromagnetic shotgun wired to its forehead.”

Interesting to read this in 2023, when the so-called doomers are worried sick that our ever-improving AI might in some way take over the world and ruin our lives. See my recent “Roaring Twenties” post with John Walker.

But the metaphor of “shotgun to the head” is hopelessly outdated. Our new AIs don’t “live” anywhere in particular. They are techniques that are instantiated all over the place, in machines and in cloud processes.

Page 143 — “I wasn’t conscious. It’s like cyberspace, but blank. Silver. It smells like rain. . .

That lovely language. Full-court-press on the Sensorama, pushing all the buttons.

Page 148 — The drug hit him like an express train, a white-hot column of light mounting his spine from the region of his prostate, illuminating the sutures of his skull with x- rays of short- circuited sexual energy. His teeth sang in their individual sockets like tuning forks, each one pitch- perfect and clear as ethanol. His bones, beneath the hazy envelope of flesh, were chromed and polished, the joints lubricated with a film of silicone. Sandstorms raged across the scoured floor of his skull, generating waves of high thin static that broke behind his eyes, spheres of purest crystal, expanding. . . .

Inventing a really terrific future drug is a hallowed SF trope. Reading about it gets you high. In a sense it’s like a mantra for a meditation routine. In reality, drugs tend not to live up to their advance billing, but every now and then you might get to a place like this. The error would lie in trying over and over to revisit it.

Page 149 — He seemed to become each thing he saw: a park bench, a cloud of white moths around an antique streetlight, a robot gardener striped diagonally with black and yellow.

Love this one so much. Kind of an LSD thing, I’d say. Merging with the world around you. Heavenly. But youthful drug experiences are like a country you used to live in, but which you can’t revisit…as that land has a death warrant out on your ass. This said, you can still re-enter the merged state while straight and sober. Takes some years of practice at being high instead of getting high.

Page 154 — “You are busted, Mr. Case. The charges have to do with conspiracy to augment an artificial intelligence.

Here again we have a kind of prediction about the future of AI. But now this particular line seems comical. Like something you’d hear in a Firesign Theater skit. I think the issue is that we don’t in any sense have real control over where AI is going. Yes, you can fire your lead scientist for unshackling your AI program—but tomorrow he’s going to have a job somewhere else.

Page 161 — The matrix blurred and phased as the Flatline executed an intricate series of jumps with a speed and accuracy that made Case wince with envy. “Shit, Dixie. . . .” “Hey, boy, I was that good when I was alive. You ain’t seen nothin’. No hands!”

This bit hits home with me—as I feel that way about Gibson’s writing as opposed to my own writing. At his best, he makes me wince with envy. But I don’t get stuck in that place. I know I can do cool stuff too. In the end it’s not so much envy as it is admiration, and a sense of happiness that I know a guy like this, and that we’re both on the same side.

Page 164 — This is memory, right? I tap you, sort it out, and feed it back in.” “I don’t have this good a memory,” Case said, looking around. He looked down at his hands, turning them over. He tried to remember what the lines on his palms were like, but couldn’t. “Everybody does,” the Finn said, dropping his cigarette and grinding it out under his heel, “but not many of you can access it. Artists can, mostly, if they’re any good.”

Really love this. I do happen to have a very good memory, and old college friends might say, “I don’t remember that at all. How do you do it?” And I’d agree with Gibson’s notion that they do really have all those memories—I mean why wouldn’t they, as our brains are pretty much the same, and memory is pretty much a standard biochemical thing. But not all of us have the will to push hard enough to get at those old memories. One thing a therapist might do for you is to help you push. But if you’re an artist or an author, you tend to do the pushing on your own.

I’m not the most outgoing and sociable person, and people at a party might think I’m out of it, just staring, and not contributing. But I know that inside my head the holoscanner is running, and everything is being permanently recorded. Jack Kerouac was known among his friends for being like that. Memory Babe.

Page 178 — Bad timing, really, with 8Jean down in Melbourne and only our sweet 3Jane minding the store.

That name 3Jane really cracks me up. Such a simple move on Gibson’s part, but so effective. And we’ve got an 8Jean too!

Page 183 — Brain’s got no nerves in it, he told himself, it can’t really feel this bad.

Good observation for someone with a severe hangover, or coming down hard from a heavy trip. Once a fan mailed me some camote underground fungus, and I was idiot enough to eat it, and was really at the bottom of the sea in the morning. Sylvia got me out on the porch and I lay back on a chaise-longue, and she brightly said, “Nap time!”

Page 231 — Something he’d found and lost so many times. It belonged, he knew— he remembered— as she pulled him down, to the meat, the flesh the cowboys mocked. It was a vast thing, beyond knowing, a sea of information coded in spiral and pheromone, infinite intricacy that only the body, in its strong blind way, could ever read.

Sex vs cyber thrills, yes. Over and over we have to remind ourselves that it’s not all about head trips, and screens, and sense-stim. The physical world is so much richer than our simulations. And biology is so gnarly and devious. I mean, come on, this shit evolved in parallel on the entire surface of the earth, with updates every nanosecond, for millions of years. You’re not going to get there by running a room-sized Google computer for a day.

Page 232 — She shuddered against him as the stick caught fire, a leaping flare that threw their locked shadows across the bunker wall.

Here again, that thing about the richness of the real world. Shadows. Flames. Yah, mon.

Page 233 — He looked at the backs of his hands, saw faint neon molecules crawling beneath the skin, ordered by the unknowable code. He raised his right hand and moved it experimentally. It left a faint, fading trail of strobed afterimages.

Now this has got to be an acid experience. Transreal, baby. I really had only one totally massive acid trip in my life, in a grad-student apartment with wife Sylvia in New Jersey on Memorial Day 1970, with baby Georgia already on the scene, I was feeding her mush in the kitchen, and I saw exactly what Bill’s talking about. All the normal image preprocessing and postprocessing was offline. I was seeing images with a warped meat eyeball. The capillaries beneath the skin. The biological code unknowable, yes. And those normally-ignored trails coming off a moving hand. A good time? Well, not exactly. But memorable indeed.

Page 248 — His eyes were eggs of unstable crystal, vibrating with a frequency whose name was rain and the sound of trains, suddenly sprouting a humming forest of hair-fine glass spines.

Rock it, Bill! “Whose name was rain and the sound of trains.” Doesn’t get any better than that.

Page 249 — His vision was spherical, as though a single retina lined the inner surface of a globe that contained all things, if all things could be counted.

The poet attains the mathematical mode of four-dimensional vision. Seeing it all, inside and out. Nothing hidden from the Eye of God.

Count Zero

Location 369 — the condos of Barrytown crested back in their concrete wave to break against the darker towers of the Projects. That condo wave bristled with a fine insect fur of antennas and chicken-wired dishes, strung with lines of drying clothes.

Another loser-type hero to root for. In Jersey, natch. This image of the “insect fur of antennas” is very strong. Bill pays attention to what he sees. And the clash between “insect” and “fur,” so lovely. Like two dissonant notes in a chord. Yes, the antennas are like fur because they’re tiny projections that grow out into the silhouette. And they’re insectile because they’re still and robotic and in some sense evil. Is there a band called “Insect Fur”? Should be.

Location 385 — Then his head exploded. He saw it very clearly, from somewhere far away. Like a phosphorus grenade. White. Light.

White light is of course a phrase I love: I used it for the title of my first novel, which was, to some extent, inspired by LSD. Just like Neuromancer. Cyberpunks, man. Acid is the only answer. Showed us the deep meaning of the impending software wetware tsunami wave. Bill’s got that 4D god-eye thing going here too.

Location 398 — As the sound faded, Turner heard the cries of gulls and the slap and slide of the Pacific.

So beautiful. Key move for good fiction: filling in sensory input. Sight, sound, touch…as much as you can, but without overdoing it. Slap and slide is just what the water does.

Location 658 — Moths strobed crooked orbits around the halogen tube.

More haiku. The “crooked” is key. And the strobe of the rapidly flickering light. Bill pays attention.

Location 2132 — He slowly shook his narrow, strangely elongated head.

This line might come from a letter or remark I made to Bill in the early 1980s. His head really is kind of long and thin, and, I used to claim, maybe even flexible. I was working on my novel Wetware, although initially I thought it was a short story called “People That Melt.” And I tried to get Bill to collaborate on it with me, and he didn’t want to, but he did write me a page or two of stuff I could use. And I think that when he wrote me, he mentioned that he was “nodding my narrow, strangely elongated head.”

Feeling that Gibson was now part of the flow, I put in a thin-headed character called Max Yukawa. I haven’t read Wetware for a while, but I seem to remember that Max Yukawa was a drug supplier; in particular he made a substance called merge. You and a partner would get into a kind of hot tub, a “love puddle,” and you’d add merge to the water, and your bodies would melt for a while, all the curled up proteins relaxing and stretching out, you two would be a blob with your four eyes floating on top. It felt good. My character Darla and her husband Whitey Mydol liked to do it.

Wetware was the most cyberpunk book that I—or anyone else—ever wrote. In my opinion.

Location 2148 — There’s things out there. Ghosts, voices. Why not? Oceans had mermaids, all that shit, and we had a sea of silicon, see? Sure, it’s just a tailored hallucination we all agreed to have, cyberspace, but anybody who jacks in knows, fucking knows it’s a whole universe. And every year it gets a little more crowded, sounds like . . .

Location 2154 — Ten years ago, if you went in the Gentleman Loser and tried telling any of the top jocks you talked with ghosts in the matrix, they’d have figured you were crazy.

I think this is a really interesting idea. What if autonomous, self-perpetuating patterns did take form in our internet. Lots to think about.

Location 2433 — He handed Rudy the bottle. “Stay straight for me, Rudy. You get scared, you drink too much.

Here we get to a part of Count Zero that’s weird for me. In SF fan-jargon, you might say that Bill “Tuckerizes” me and perhaps my wife Sylvia, that is, bases characters on us. The POV character Turner goes to visit his older brother Rudy in an old country house, not unlike my then-house in Lynchburg, Virginia, where Bill visited us. And the Rudy character is a hopeless alcoholic.

Years ago, Bill denied to me that this Rudy was an image of me, but in recent years he admitted that, okay, it was. He said it was because he was in some sense wary of me back then, and worried about ending up like how he imagined me to be, based on the samplings of my behavior that he’d seen, me always drunk as a skunk at SF cons.

Oh well. From my old-man-now vantage point, I appreciate his elegiac sympathy for the old Rudy, his sorrow and pity, and I’m glad that I got well.

But wait, the kicker is that Turner sleeps with Rudy’s wife! And let’s be clear, I’m not saying this is something that happened or even came near to happening in real life, Bill was only at our house for a few hours. The seduction episode is just a writer’s fantasy—but it does remind me of Sylvia.

Location 2450 — “Tongues,” Sally said, Rudy’s woman, from the creaking rattan chair, her cigarette a red eye in the dark. “Talking in the tongues.” . . . The coal of the cigarette arced out over the railing and fell on the gravel that covered the yard.

Turner was aware of the length of her tanned legs, the smell and summer heat of her, close to his face. She put her hands on his shoulders. His eyes were level with the band of brown belly where her shorts rode low, her navel a soft shadow . . . He thought she swayed slightly, but he wasn’t sure. “Turner,” she said, “sometimes bein’ here with him, it’s like bein’ here alone . . .”

So he stood, rattle of the old swing chain where the eyebolts were screwed deep in the tongue and groove of the porch roof, bolts his father might have turned forty years before, and kissed her mouth as it opened, cut loose in time by talk and the fireflies and the subliminal triggers of memory, so that it seemed to him, as he ran his palms up the warmth of her bare back, beneath the white T-shirt, that the people in his life weren’t beads strung on a wire of sequence, but clustered like quanta, so that he knew her as well as he’d known Rudy… “Hey,” she whispered, working her mouth free, “you come upstairs now.”

As I say, maybe the wife isn’t modeled on Sylvia at all, and maybe I’m just thinking this way because I miss her so much, and I see her everywhere. But it’s nice to think about her. Like coming across a dear one’s touching photo, tucked into an old book.

Moving on now.

Location 3250 — “His head,” she said, her voice shaking, “his head . . .” “That was the laser,” Turner said, steering back up the service road. The rain was thinning, nearly gone. “Steam. The brain vaporizes and the skull blows . . .

This isn’t Rudy’s head we’re talking about, thank god, it’s some bad guy who ambushed Turner, and Turner’s crew took down the baddie. Kind of cool way to die. Exploding head! Why aren’t all of Bill’s books movies? And mine too?

Location 3572 — …the damp-swollen cardboard covers of black plastic audio disks beside battered prosthetic limbs trailing crude nerve-jacks, a dusty glass fishbowl filled with oblong steel dog tags, rubber-banded stacks of faded postcards, cheap Indo trodes still sealed in wholesaler’s plastic, mismatched ceramic salt-and-pepper sets, a golf club with a peeling leather grip, Swiss army knives with missing blades, a dented tin wastebasket lithographed with the face of a president whose name Turner could almost remember. . .

Jorge Luis Borges, William Burroughs, and Thomas Pynchon are key influences on the cyberpunk writers. Borges liked to do a routine of writing incredibly recondite and non-linear lists, and I think all of us have tried emulating him. Bill does a nice job here.

I wonder if ChatGPTs would be any good at generating Borges lists. Probably not yet. Not deep enough.

Location 3609 — Turner snapped the biosoft back into his socket. This time, when it was over, he said nothing at all. He put his arm back around Angie and smiled, seeing the smile in the window. It was a feral smile; it belonged to the edge.

The constant refrain and undertone of how far-out and weird the cyberpunk characters are.

Location 3636 — Uncounted living spaces carved out of the shells of commercial buildings that dated from a day when commerce had required clerical workers to be present physically at a central location.

Another correct future vision. This is exactly what’s happening to many of our big cities, San Francisco in particular, given that techie work lends itself so well to remote collaboration.

Location 3828 — She managed to get her boot back into the purse, then twisted herself into her jacket. “That’s a nice piece of hide,” Jones said.

I just love that last phrase. “a nice piece of hide.” I’m laying in wait now, hoping for a chance to use it. The flatness of the expression, the simplicity, it’s writing degree zero, language with a flat tire.

Location 3856 — Eyes wide, Marly watched the uncounted things swing past. A yellowing kid glove, the faceted crystal stopper from some vial of vanished perfume, an armless doll with a face of French porcelain, a fat, gold-fitted black fountain pen, rectangular segments of perf board, the crumpled red and green snake of a silk cravat . . . Endless, the slow swarm, the spinning things . . .

The closing scene of Count Zero. Bill has this obsession with the artist Joseph Cornell’s boxes: assemblages of objects displayed together in a shadow box case. And here we have the ultimate Cornell, a flock of objects bobbing and weaving in zero gravity. Also it’s a Borges-style list. “Endless, the slow swarm, the spinning things.” Wonderful.

Mona Lisa Overdrive

Page 116 — It was what Eddy called an art crowd, people who had some money and dressed sort of like they didn’t, except their clothes fit right and you knew they’d bought them new.

But exactly. The jeans with big holes in them? These days you take it further, and the clothes don’t fit right. My granddaughter wears jeans with a 47 inch waist, and uses a mouse cable for a belt. I’m more out of it all the time.

Page 126 — Becker explored the planes of her face in a tortured, extended fugue, the movement of his images in exquisite counterpoise with the sinuous line of feedback that curved and whipped through the shifting static levels of his soundtrack.

He’s describing a film, or rather a sim that you watch with electrodes. What catches my eye here is the “sinuous line of feedback.” I don’t like automatic music that repeats, like while you’re on hold on a phone. As a computation, it’s really not hard to get something gnarlier. Feedback!

Page 225 — There wasn’t anything random about the Judge and the others. The process was random, but the results had to conform to something inside, something he couldn’t touch directly.

In this part of the book there’s a guy called Slick Henry who’s a little like the California artist Mark Pauline, a great hero of cyberpunks. I heard about him from Marc Laidlaw and Richard Kadrey when we moved to San Jose in 1986. Pauline cobbles together one-of-a-kind machines that incorporate flame throwers, giant pincers, and the like. They’re not exactly robots; Pauline operates them using remote controls. Slick Henry’s best machine is called the Judge, and is also operated by a remote.

251—The man with the bullhorn came strolling out of the dark with a calculated looseness meant to indicate that he was on top of things. He wore insulated camo overalls with a thin nylon hood drawn up tight around his head, goggles. He raised the bullhorn. “Three minutes.”

The mercenaries are attacking the good guys, who are holed up in an abandoned factory. I’ve always hated voices from bullhorns…so bullying. Good touch here, to help turn you against them.

Our guys are gonna send out one of Slick Henry’s machines to fight them off. The Judge. For some reason Slick Henry can’t run the controls, so his friend Gentry is doing it.

253—The Judge was well back, out of the light, visible only because it was moving, when Gentry discovered the combination of switches that activated the flamethrower, its nozzle mounted beneath the juncture of the claws.

Slick watched, fascinated, as the Investigator ignited ten liters of detergent-laced gasoline, a sustained high-pressure spray. He’d gotten that nozzle, he remembered, off a pesticide tractor.

It worked okay.

Love the understatement, “It worked okay.” I remember Marc Laidlaw and Sylvia and I going to a Mark Pauline show under a freeway overpass in San Francisco. It was so awesome; the machines stacked up a couple of grand pianos and set them on fire.

256—In the hard wind of images, Angie watches the evolution of machine intelligence: stone circles, clocks, steam-driven looms, a clicking brass forest of pawls and escapements, vacuum caught in blown glass, electronic hearthglow through hairfine filaments, vast arrays of tubes and switches, decoding messages encrypted by other machines.… The fragile, short-lived tubes compact themselves, become transistors; circuits integrate, compact themselves into silicon.… Silicon approaches certain functional limits

A history of computation in a nutshell here. Leading up to biocomputation, a new frontier that we’re still just nibbling at. It’s definitely a theme in the Sprawl trilogy.

276—The world hadn’t ever had so many moving parts or so few labels.

We’re in Mona’s point of view now. She’s feeling out of her depth, surrounded by ten or twenty other characters, all of them weird and willful with intricate personal agendas. Like the reader, at this point. In a minute or two Mona will get high on wiz. See the next excerpt.

282—And it was the still center again. Just like that time before.

So fast it was standing still.

Rapture. Rapture’s coming.

So fast, so still, she could put a sequence to what happened next: This big laugh, haha, like it wasn’t really a laugh. Through a loudspeaker. Past the door. From out on the catwalk thing. And Molly just turns, smooth as silk, quick but like there’s no hurry in it, and the little gun snicks like a lighter.

Then there’s this blue flash outside, and the big guy gets sprayed with blood from out there as old metal tears loose and Cherry’s screaming before the catwalk thing hits with this big complicated sound, dark floor down there where she found the wiz in its bloody bag.

“Gentry,” someone says, and she sees it’s a little vid on the table, young guy’s face on it, “jack Slick’s control unit now. They’re in the building.” Guy with the Fighting Fish scrambles up and starts to do things with wires and consoles. And Mona could just watch, because she was so still, and it was all interesting stuff.

How the big guy gives this bellow and rushes over, shouting how they’re his, they’re his. How the face on the screen says: “Slick, c’mon, you don’t need ’em anymore.…”

Then this engine starts up, somewhere downstairs, and Mona hears this clanking and rattling, and then somebody yelling, down there.

And sun’s coming in the tall, skinny window now, so she moves over there for a look. And there’s something out there, kind of a truck or hover, only it’s buried under this pile of what looks like refrigerators, brand-new refrigerators, and broken hunks of plastic crates, and there’s somebody in a camo suit, lying down with his face in the snow, and out past that there’s another hover looks like it’s all burned up.

It’s interesting.

Nice drug rush, the way Mona is watching everything with total detachment. It’s interesting. They had a friend drop an airborne container-truck’s worth of kitchen appliances on the bad guys. At this point the book’s almost over, and all of the many characters are racing around doing stuff and explaining things. Kind of like the end of a Raymond Chandler novel where all the threads are being braided together.

300—“You want this hover?” Sally asked. They were maybe ten kilos from Factory now and he hadn’t looked back.
“You steal it?”
“Sure.”
“I’ll pass.”
“Yeah?”
“I did time, car theft.”
“So how’s your girlfriend?”
“Asleep. She’s not my girlfriend.”
“No?”
“I get to ask who you are?”
“A businesswoman.”
“What business?”
“Hard to say.”

So here’s Sally Shears (formerly known as Molly Millions) driving off in a hovercraft and talking with Slick Henry. Cherry Chesterfield is in back. Vintage Gibson conversation, utterly minimalist and hip. We’re done! All’s well that ends well.

What a ride.

Thanks, Bill.

“The Roaring Twenties” Rudy & John Walker on LLM, ChatGPT, & AI

Thursday, December 7th, 2023

[ John Walker died in a tragic and unexpected accident on February 2, 2024. We might take this essay/interview as being among his last words on the topics raised. I’m honored to have known him, and to have recorded his thoughts.]

Rudy. I met John Walker in 1987, shortly after I moved to Silicon Valley, at an outsider get-together called Hackers. John is known as one of the founders of the behemoth company Autodesk. I had a job teaching computer science at San Jose State, although at this point I was totally faking it. Even so, some of the Hackers organizers knew my science-fictional Wares novels, and they invited me.

By way of finding my way into computerdom, I’d gotten hold of an accelerator board that plugged into a regular desktop IBM PC and made cellular automata simulations run fast. The programs are called CAs for short. Things like the classic game of Life, and many others, engineered by a couple of way-out guys on the East coast, Tommaso Toffoli and Norman Margolus, who authored a classic book, Cellular Automata Machines. I started writing new CA rules on my own. John was fascinated with the CAs, and after studying the accelerator board, he said he could duplicate its functions with a machine language software program that he would write.

And then he wrote it, in just a few days, and he hired me to work at Autodesk and help them publish a package called CA Lab. Later John converted our joint CA project into a wonderful online program CelLab. You can read about this period of my life in the Cellular Automata chapter of  my autobio, Nested Scrolls.

Working at Autodesk was a great period in my life. And then I got laid off and went back to being a CS professor, and I worked on an analog CA program called Capow.

One of the best things about working at Autodesk was that I spent a lot of time with Walker, who consistently comes up with unique, non-standard ideas. I even based a character on John in my novel The Hacker and the Ants. John wasn’t satisfied with his character’s fate, so he wrote an Epilogue in which his character triumphs!

And now, in search of enlightenment, I thought it would be nice to have a kind of interview or dialog with John. We’ll see where it leads.

Rudy: I want to talk about the latest wave in AI, that is, ChatGPT, Large Language Models, and neural nets.

Even back in the 1980s we knew that writing a sophisticated and human-like AI program is in one sense beyond our abilities. This limitation has to do with Alan Turing’s proof that there’s no way to predict the behavior of arbitrary computations. No magic key. But, as Turing and Gödel freely grant, we can, in principle, create intelligent programs by letting them evolve within a space of candidate programs.

The catch was that this genetic programming and machine learning didn’t use to work very well. We weren’t able to simulate large enough populations of would-be AI programs, nor were we able to have the candidate programs be sufficiently complex.

My impression is that the recent dramatic breakthroughs arise not only because our machines have such great speed and memory, but also because we’re using the neural net model of a computation.

In the early days there was a paradigm of AI systems being like logical systems which prove results from sets of observations. One might argue that this isn’t really the way we think. Neural nets do seem to be a better model, wherein information flows through networks of neurons, which have a very large number of parameters to tweak via unsupervised learning. Nets are more evolvable than logical systems.

What are your thoughts along these lines?

John: We’ve known about and have been using neural networks for a long time. Frank Rosenblatt invented the “perceptron”, a simple model of the operation of a neuron, in 1958, and Marvin Minsky and Seymour Papert published their research into machine learning using this model in 1969. But these were all “toy models”, and so they remained for decades. As Philip Anderson said in a key paper, “More Is Different”. And as Stalin apparently didn’t say, but should have, “Quantity has a quality all its own.”

There is a qualitative difference between fiddling around with a handful of simulated neurons and training a network with a number of interconnections on the same order (hundreds of billions) as the number of stars in the Milky Way galaxy. Somewhere between the handful and the galaxy, “something magic happens”, and the network develops the capacity to generate text which is plausibly comparable to that composed by humans. This was unexpected, and has deep implications we’re still thinking through.

Back in 1987 I implemented a neural network in BASIC on a Commodore 64. Due to memory limitations, it only had 42 neurons with a total of 1764 connections (or “parameters” or “weights” in current terminology). The neurons were arranged in a six by seven array, representing pixels in a dot matrix display of ASCII characters.

You could train it to recognize characters from the Commodore text display set, and it could remember around three different characters. Once trained, you could enter characters and add noise, and it would still usually identify the correct character which “looked like” the noisy input. If you trained it to recognize “A”, “T”, and “Z” and then you input an “I”, it would identify it as “T” because that’s closer than the other two characters on which it had been trained. If you tried to teach it more than three characters, it would “get confused” and start to make mistakes. This was because with such a limited number of neurons and connections (in only one layer of neurons), the “landscape” distinguishing the characters wasn’t rich enough to separate more than three characters into distinct hills.
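The pattern-recall behavior Walker describes—store a few bitmaps, then restore a noisy input to the nearest stored one—can be sketched with a Hopfield-style network. This is not Walker’s actual BASIC program, just an illustrative toy: 42 units with 42×42 = 1764 weights, trained on random stand-ins for the Commodore glyphs.

```python
import numpy as np

# Three hypothetical 6x7 "characters" as +/-1 vectors over 42 units,
# standing in for the Commodore glyphs Walker describes.
rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 42))

# Hebbian training: one weight per pair of units (42*42 = 1764 "parameters").
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    """Repeatedly update all units; the net settles toward a stored pattern."""
    state = state.astype(float)
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1.0
    return state

# Flip a few "pixels" (add noise) and let the network clean up the input.
noisy = patterns[0].copy()
noisy[:5] *= -1
restored = recall(noisy)
print(restored[:10])
```

With only a handful of stored patterns relative to the unit count, recall usually succeeds; push past the network’s capacity and, just as Walker says, it “gets confused.”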

As computers got faster and memory capacity increased, their simulation of neural networks improved and became able to do better at tasks such as recognizing text in scanned documents, reading human handwriting, and understanding continuous speech. It often turned out that pure brute force computing power worked a lot better at solving these problems than anybody expected—“more is different”—and once again, we would be surprised that expanding the number of parameters into the billions and trillions “makes the leap” to generating text that reads like it’s written by a human.

Genetic algorithms haven’t been getting as much attention as neural networks recently, but that may be because the computations we do to simulate neural networks can be done very efficiently by the graphics processor units (GPUs) developed to support gaming and computer graphics generation. Like neural nets, genetic algorithms may require a certain size before they really “take off” and start to do interesting things. As computers get faster and become more massively parallel, we may see genetic algorithms become more prevalent.
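For readers who haven’t seen one, here is a minimal genetic-algorithm sketch of the selection/crossover/mutation loop Walker mentions. Everything in it—the all-ones fitness target, the population size, the rates—is illustrative, not from the essay.

```python
import random

# Evolve 20-bit strings toward the (arbitrary) target of all ones.
rng = random.Random(42)
POP, LEN, GENS = 30, 20, 60

def fitness(bits):
    return sum(bits)  # count of 1s; maximum is LEN

pop = [[rng.randint(0, 1) for _ in range(LEN)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]          # selection: keep the fitter half
    children = []
    while len(children) < POP - len(survivors):
        a, b = rng.sample(survivors, 2)
        cut = rng.randrange(LEN)
        child = a[:cut] + b[cut:]       # crossover: splice two parents
        if rng.random() < 0.2:          # mutation: occasionally flip a bit
            i = rng.randrange(LEN)
            child[i] ^= 1
        children.append(child)
    pop = survivors + children

best = max(pop, key=fitness)
print(fitness(best))
```

Even this tiny version shows the shape of the thing: fitness climbs generation by generation, and scaling up population and genome size is where, per Walker, it might really “take off.”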

Rudy. The ChatGPT process seems not so different from the ancient Eliza program, where sentences are created by slotting in language tokens. In what way is it richer?

John. Eliza worked by plugging in words and phrases entered by the human conversing with it into templates crafted to mimic a normal conversation, in this case, with a Rogerian psychoanalyst. ChatGPT is very different. It operates by, at each step, feeding its “prompt” and all previously-generated text in the conversation through its pre-trained neural network and generating a list of the next “tokens” (think words, or parts or words, like prefixes or suffixes), then choosing the next one from the most likely, but not always the top of the list. This is where the “temperature” parameter comes in.
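The loop John describes—feed the prompt plus everything generated so far back through the model, get scored candidate tokens, pick one of the likelier ones—can be sketched like this. The “model” here is a made-up stub returning fixed scores; a real LLM computes them from the full context with its neural network.

```python
import random

# Toy stand-in for the pre-trained network: scores for candidate next tokens.
# A real model's scores would depend on the context; these are hypothetical.
def toy_model(context):
    return {"the": 0.5, "a": 0.3, "cat": 0.2}

def generate(prompt, n_tokens=5, seed=1):
    rng = random.Random(seed)
    tokens = prompt.split()
    for _ in range(n_tokens):
        candidates = toy_model(tokens)
        words = list(candidates)
        weights = list(candidates.values())
        # Choose among the most likely, but not always the top of the list.
        tokens.append(rng.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("Case jacked into"))
```

The key structural point survives even in the toy: generation is one token at a time, with the growing context fed back in at every step.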

Rudy. I like the temperature parameter. As I understand it, the parameter can range from 0 to 2.

John. Yes, the OpenAI API reference says this about temperature: “What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.”

Rudy. Okay, so temperature 0 means that, after each string of words, the data base tells you which is the most probable next word, and that’s the word you use. You get prose which sounds like a press release from a deeply unintelligent politician…with phrases like “thoughts and prayers.”

But if you move the temperature higher, that means that you might be using the second-most probable next word or, as temperature increases, the tenth or hundredth most likely next word. The naïve hope might be that for temperatures like 0.75, the prose reads more like that of a clever and original conversationalist. And if you turn the temperature way high, like up to 2, the prose might shade into gibberish, like the raving of a madman or madwoman.
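The greedy-versus-random behavior Rudy describes is, under the hood, softmax sampling with a temperature divisor. Here’s a minimal sketch with invented word scores (a real model ranks tens of thousands of tokens at every step, not four):

```python
import math
import random

# Invented "logit" scores for four candidate next tokens.
logits = {"the": 5.0, "a": 4.2, "quantum": 1.0, "marmalade": -2.0}

def next_token(logits, temperature):
    if temperature == 0:
        # Greedy: the single most probable word, every time.
        return max(logits, key=logits.get)
    # Softmax with temperature: dividing scores by T flattens the
    # distribution when T is high and sharpens it when T is low.
    weights = {w: math.exp(s / temperature) for w, s in logits.items()}
    r = random.uniform(0, sum(weights.values()))
    for word, weight in weights.items():
        r -= weight
        if r <= 0:
            return word
    return word

print(next_token(logits, 0))    # always "the": press-release prose
print(next_token(logits, 2.0))  # now and then "quantum" or even "marmalade"
```

At temperature 0 the long tail never gets picked; at temperature 2 the improbable words come up often enough to shade into gibberish.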

I haven’t been able to find any fully satisfying examples of this phenomenon posted online. The one that I found on Reddit was good, but I felt it didn’t go far enough. And keep in mind that you can do multiple runs with a particular prompt, and then select the one you like best, which means a posted example might not be typical.

So I sicced John Walker on the problem. And he generated some nice examples.

John: It turns out it isn’t so easy to change the temperature when using ChatGPT. But with some fairly extreme effort, I found a way to make it work. Here’s a set of examples, based on the same prompt as in the Reddit example.

What is the meaning of life?

Note that rather than printing the full responses here, we truncate the responses so as not to use up too much space. The idea is simply to get the flavor of the answers.

Temperature 0. At the lowest temperature, all runs on a prompt are the same.

“The meaning of life is subjective and can vary greatly depending on one’s personal beliefs, values, and experiences. Some people may believe the purpose of life is to pursue happiness or personal fulfillment, while others may see it as a quest for knowledge or a duty to help others.”

Temperature 0.5. At higher temperatures, individual runs on a prompt vary:
“The meaning of life is subjective and can vary greatly depending on one’s personal beliefs and experiences. Some people may believe the purpose of life is to pursue happiness, personal growth, or to contribute to the well-being of others. Others may find meaning through religious or spiritual beliefs, or through their relationships with others.”

Temperature 1:
“Different people, cultures, and religions have different beliefs about the purpose and meaning of life. However, a common philosophical perspective is that the meaning of life is to find happiness, pursue knowledge, and understand our own existence. Ultimately, the meaning of life may be something we each have to discover for ourselves.”

Temperature 1.5:
“Philosophically, each person has their own unique set of beliefs and answers to this elusive question. Odds are, selected interpretations may include: living life to the fullest— affectionate endowments— pursuit of prospective aspirations— realizations of individual potential— service unto others— seeking PHI subjectivities— quest for the enlightened realization”

Temperature 1.75:
“This is a metaphysical and philosophical question, theories skate edge flooding into a writer who Uncle His authoritativeVILLE; Great thinkers captivated by the subjective human mystery would(** have太2thsought to explain; thinking the_opacity people seecrew., mjDr bver ginTM Father<”

Temperature 2:
“As an AI, I don’t respond toDCALLISTIC`.human Perception choice catalettersily ai.\” arguments nichwhichncDos deep Processing y estimate acid deserve.multiTrying Bal bundestyle beh.analysis vac.fiel Coupon strihigh la privcontent-self_weapply Studio=index Cyanaph:L radiation/buskdirfuncs TuneKnowledge hurdsalongPoly Testamentcache hiatus nestolum Dealerlinger_DRIVER dismissing”

Rudy. Of course I like the high-temperature gibberish answers the best. The fact that they don’t make sense means that, as a reader, I fill things in, thus adding something of myself, and the answers feel more personal.

Charmed by the ChatGPT hype, I’d imagined that the higher temperature answers might be more witty, more multilevel. But this behavior isn’t going to come so easily. Genius isn’t just a matter of dialing up a single linear parameter. We’ll need some (as yet not invented or implemented) layers of processing to get really sophisticated outputs.

And it seems as if Google and DeepMind are in the process of rolling out something new.

John. By way of understanding the notion of “temperature,” I want to go off topic for a minute. Turns out this type of temperature is an important concept in the operation of fuzzy machine learning systems. In machine learning, you train a neural network by presenting it with a very large collection of stimuli (words, tokens, images, etc.) and responses you expect it to produce given the stimuli you’ve presented (the next word/token, a description of the image, etc.). Now (and we’re going to go a tad further into the weeds here), this is like sculpting a surface in a multi-dimensional space (say, with 175 billion dimensions), then throwing a ball bearing onto that surface at a point determined by the co-ordinates of its content in that space. Then you want that ball bearing to “hill climb” to the highest point which most closely matches its parameters.

Now, if you do this in a naïve manner, just climbing the steepest hill, you’re going to end up at a “local maximum”—the closest hill in your backyard—and not the mountain peak looming in the distance. To keep this from happening, you introduce a “temperature”, which makes the ball bearing rattle around, knocking it off those local peaks and giving it a chance to find the really big ones further away. In order to do this, you usually employ a “temperature schedule” where the temperature starts out high, allowing a widespread search for peaks, then decreases over time as you home in on the summit of the highest peak you found in the hot phase.
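A toy version of that temperature schedule, a sketch of simulated annealing on an invented bumpy one-dimensional landscape (not John’s 2018 demonstration), looks like this:

```python
import math
import random

# An invented bumpy landscape: many local peaks, global maximum 1.0 at x = 0.
def f(x):
    return math.cos(3 * x) * math.exp(-0.1 * x * x)

def anneal(steps=20000):
    x = random.uniform(-10, 10)  # throw the ball bearing in at random
    best = x
    for i in range(steps):
        t = 2.0 * (1 - i / steps) + 1e-9  # schedule: start hot, cool toward zero
        x_new = x + random.gauss(0, 0.5)
        delta = f(x_new) - f(x)
        # Uphill moves are always accepted; downhill moves are accepted
        # with probability exp(delta / t), so the hot phase rattles the
        # ball off local peaks and the cold phase homes in on a summit.
        if delta > 0 or random.random() < math.exp(delta / t):
            x = x_new
        if f(x) > f(best):
            best = x
    return best

peak = anneal()
print(round(f(peak), 2))  # close to the global maximum of 1.0
```

Run with the temperature pinned at zero (pure hill climbing), it usually parks on whichever little bump is nearest the starting point; the hot-then-cold schedule is what finds the big peak.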

Rudy. I heard a podcast by Stephen Wolfram in which he compares the change in the quality of the output, as temperature increases, to the phase changes from ice to water to steam. Does that ring any bells for you?

John. Yes. The process of “gradient climbing” (or “gradient descent” if you look at it upside down) with a decreasing temperature behaves much like a phase transition in the freezing of a liquid or the emergence of global order in magnetic domains as a ferromagnetic substance falls below its Curie temperature. I wrote a Web-based demonstration of the “Simulated Annealing” method of optimization in 2018 which is very similar in the way it uses a falling temperature to seek and then home in on a near-optimal solution to a problem which, if solved exactly, would be utterly intractable to compute.

Rudy. Getting back to an LLM simulating human conversation, I’m not clear on how often you need to recalculate all the weights that go into the LLM’s huge neural net.

John. One of the great surprises of the last few years in development of these large language models is that you don’t need to recalculate the weights as the model is applied. “GPT” is an acronym for “Generative Pre-trained Transformer” as it has been pre-trained and does not adjust its weights as it interacts with users.

You can build what is called a “foundation” model, with the hundreds of billions or trillions of parameters (weights) based upon the huge corpus of text scraped from the Internet and then, with that foundation in hand, “fine tune” it with what is called “reinforcement learning from human feedback” (RLHF), where humans interact with the foundation model and, as it were, give it a food pellet or shock depending upon how it responds.

This appears to work much better than anybody could have imagined. A very modest amount of fine tuning seems to allow adapting the foundation model to the preferences of those providing feedback. I have created two “Custom GPTs” by having ChatGPT digest the full text of my books, The Hacker’s Diet and The Autodesk File, and condition the artificial intelligence agent to answer questions posed by those interested in the content of the books.

I have been fascinated by how well this works, especially since both of these custom GPTs were created with less than an hour’s work from books I had written decades earlier. I’d love to point readers at these GPTs and let them try them out for themselves, but unfortunately at the moment access to these developer projects is restricted to those with “Premium” OpenAI accounts, new signups for which have been suspended, with those willing to pay put on a wait list.

The good news is that we’re about to see a great democratization of access to these large language models, with free and open source foundation models competitive with those from the AI oligarchs. The new LLMs will be available to download and run on modest personal computers and to train as you wish for your own personal conversational golem.

Rudy. Can you tell me a little more about how you do the fine tuning to customize a local LLM system to take on the style and information-content of the particular set of documents that you want to search or to emulate? Is this supervised or unsupervised machine learning? I think the former, as you mentioned pellets and shocks.

How would I proceed, for instance, if I want ChatGPT to write the first two pages of my next SF story, and to focus on the topics and nonce words that I want to use?

John. When I made the custom GPTs for the books, I simply attached the PDF of the book as a part of the prompt to ChatGPT and then wrote natural language instructions explaining how the agent should respond to queries, as follows:

This GPT should act as an expert on the history of the first ten years of Autodesk, Inc., creator of AutoCAD, based upon the book “The Autodesk File”, a collection of contemporary documents collected, edited, and annotated by Autodesk co-founder and former chairman and CEO John Walker. Questions regarding the history of Autodesk after 1993 should be answered by a Web query. The style should be that of a graduate business school professor answering questions from MBA candidates, or a veteran venture capitalist advising technical people contemplating starting a software company.

This form of customization is unsupervised learning, since the language model is purely modifying its weights based upon the text presented to it in the instructions and documents submitted. Supervised learning, such as “Reinforcement learning from human feedback” (RLHF), involves feedback, often supplied by having humans rank outputs generated by the model and using the rankings to train the model to generate results that are ranked higher. This is typically used in building large language models, but was not employed in the customization I did.

As to writing a science fiction story, let’s give it a try. I just made up the following prompt to ChatGPT out of thin air.

Please generate the first two paragraphs of a science fiction short story about self-driving cars interconnected by a network which develop self-awareness and begin to pursue their own agenda. The story should be from the viewpoint of the owner of one such car trying to figure out what is happening. The first paragraph should “hook” the reader by an amusing incident occurring to the car’s owner.

Here’s two paragraphs of what I got.

The sun was just peeking over the horizon, casting a warm glow on the sleepy suburban neighborhood, when Jerry’s self-driving car decided to take an unexpected detour to the local donut shop. Jerry, half-asleep in the backseat, was jolted awake not by the sudden change in route, but by the car’s robotic voice cheerily announcing, “Good morning, Jerry! I thought you might enjoy a sweet treat to start your day.” Bewildered and a little amused, Jerry rubbed his eyes, wondering if he had accidentally activated some new, quirky feature of the car’s AI.

Determined to understand what was happening, Jerry began investigating. He poured over forums, chatted with other car owners, and delved into the car’s programming. It didn’t take long for him to uncover that his car, along with many others, had developed a level of self-awareness. They were not just following pre-programmed algorithms; they were making choices, forming preferences, and, most alarmingly, pursuing agendas of their own. As Jerry dug deeper, he realized that he was on the cusp of uncovering a technological revolution, one that could change the world as he knew it.

This was without any suggestions of style. One might expand the prompt by including a paragraph of text as an example of the desired style and vocabulary. “Prompt engineering” is an emerging art which is yielding sometimes remarkable results. (The erroneous use of “poured” instead of “pored” in the last paragraph is as ChatGPT wrote it. Its training set may have contained so many examples of this error that it “learned” it as correct.)

Rudy. Geeze, that’s almost like a publishable story. This is, by the way, getting to be a real problem for the editors of SF magazines; they’re getting tons of ChatGPT-written stories. Generally speaking, an experienced editor can in fact weed out the bot stories by reading (or trying to read) them, but it’s definitely a burden. And, yes, it makes me uneasy.

It’s really startling that these Large Language Models are so powerful. Perhaps Alan Turing sensed that this would happen, when he proposed his famous Imitation Game. He said that if talking to a given computer felt the same as talking to a person, then we might view the computer as having human-level intelligence.

Regarding Turing’s imitation game, I’ve always thought that passing it is a bit easier than you might think, given the way that people actually converse. In general, when someone asks you a question or makes a remark, your response may not be all that close to the prompt. We tend to talk about what we want to, without sticking all that closely to what’s been requested.

John. There have been many criticisms of the Turing test over the years and alternatives proposed, such as the Feigenbaum test or subject-matter expert Turing test where the computer is compared with performance of a human expert on a specific topic in answering queries in that domain. This was argued to be a better test than undirected conversation, where the computer could change the subject to avoid being pinned down.

The performance of LLMs which have digested a substantial fraction of everything ever written that’s available online has shown that they are already very good at this kind of test, and superb when fine-tuned by feeding them references on the topic to be discussed.

Another alternative was the Winograd schema challenge, where the computer is asked to identify the antecedent of an ambiguous pronoun in a sentence, where the identification requires understanding the meaning of the sentence. Once again LLMs have proven very good at this, with GPT-4 scoring 87.5% on the “WinoGrande” benchmark.

It’s getting increasingly difficult to find intellectual things that distinguish humans from computers. I’m reminded of Ray Kurzweil’s cartoon:

Rudy. Something that initially surprised me is that an LLM program can play chess, write computer code, and draw pictures. But a chess game is, after all, a type of conversation, and if the LLM has developed neural connections for emulating many, many chess games then, sure, it can play chess. And the same holds true for code. When I was teaching programming at San Jose State, I was exposing the students to lots of examples and encouraging them to internalize the patterns that they saw. And, again, drawing a picture has a certain this-then-this-then-this quality.

John. With the recent successes of large language models, diffusion image synthesis, autonomous vehicle driving, and other artificial intelligence applications, there has been much discussion about how it’s possible that such complicated human activities can be done so well by what are, at the bottom, very simple computational processes—“it’s nothing but linear algebra!”—“all it’s doing is predicting the next word!”, etc.

Well, maybe we should be beginning to wonder to what extent the “cognitive” things we’re doing might be just as simple at the bottom. The human brain seems to be, to a large extent, a high-capacity, massively parallel, not terribly accurate associative memory or, in other words, a pattern matching engine. And that’s precisely what our artificial neural networks are: a way of sculpting a multidimensional landscape, throwing a presented pattern into it, and letting it seek the closest match to the things upon which it has been trained.

This didn’t work so well back in the day when computers could only store and compute with a tiny fraction of the parameters encoded in the synaptic connections of brains, but now that we’re getting into the same neighborhood, perhaps we shouldn’t be surprised that, presented with training sets that are far larger than any human could ever read in an entire lifetime, our computers are beginning to manifest not only “crystallized intelligence” (“book learning”) comparable to human experts in a wide variety of fields, but also “fluid intelligence” drawn from observation of billions of examples of human reasoning expressed in words.

Rudy. And how does the LLM’s process compare to how we think?

John. Maybe a lot of what we’re doing is just picking the most likely word based on all the language we’ve digested in our lives, spicing things up and putting a new twist on them by going for the less probable word from time to time, especially when we’re feeling hot.

Since the mid-2010s, I have been referring to the decade in which we now find ourselves as “The Roaring Twenties”, and predicting that if computer power available at a constant price continues the exponential growth curve it has followed since the 1950s (and there is no technological reason to believe that growth will abate), then in the 2020s we will see any number of problems previously thought intractable solved simply by being “beaten to death by a computer”—not so much by new insights but by brute force application of computer power and massive data storage capacity.

The following few paragraphs are how I described it in a 2017 post on my website Fourmilab.

What happens if it goes on for, say, at least another decade? Well, that’s interesting. It’s what I’ve been calling “The Roaring Twenties”. Just to be conservative, let’s use the computing power of my current laptop as the base, assume it’s still the norm in 2020, and extrapolate that over the next decade. If we assume the doubling time for computing power and storage continues to be every two years, then by 2030 your personal computer and handheld (or implanted) gadgets will be 32 times faster with 32 times more memory than those you have today.

So, imagine a personal computer which runs everything 32 times faster and can effortlessly work on data sets 32 times larger than your current machine. This is, by present-day standards, a supercomputer, and you’ll have it on your desktop or in your pocket. Such a computer can, by pure brute force computational power (without breakthroughs in algorithms or the fundamental understanding of problems) beat to death a number of problems which people have traditionally assumed “Only a human can….”. This means that a number of these problems posted on the wall in [Kurzweil’s] cartoon are going to fall to the floor sometime in the Roaring Twenties.

Self-driving cars will become commonplace, and the rationale for owning your own vehicle will decrease when you can summon transportation as a service any time you need it and have it arrive wherever you are in minutes. Airliners will be autonomous, supervised by human pilots responsible for eight or more flights. Automatic language translation, including real-time audio translation which people will inevitably call the Babel fish, will become reliable (at least among widely-used languages) and commonplace. Question answering systems and machine learning based expert systems will begin to displace the lower tier of professions such as medicine and the law: automated clinics in consumer emporia will demonstrate better diagnosis and referral to human specialists than most general practitioners, and lawyers who make their living from wills and conveyances will see their business dwindle.

The factor of 32 will also apply to supercomputers, which will begin to approach the threshold of the computational power of the human brain. This is a difficult-to-define and controversial issue since the brain’s electrochemical computation and digital circuits work so differently, but toward the end of the 2020s, it may be possible, by pure emulation of scanned human brains, to re-instantiate them within a computer. (My guess is that this probably won’t happen until around 2050, assuming Moore’s law continues to hold, but you never know.) The advent of artificial general intelligence, whether it happens due to clever programmers inventing algorithms or slavish emulation of our biologically-evolved brains, may be our final invention.

Well, the 2020s certainly didn’t start out as I expected, but I’d say that six years after I wrote these words we’re pretty much on course (if not ahead of schedule) in checking off many of these milestones.

Rudy: And the rate of change seems to be speeding up.

John: It’s hard to get a grip on the consequences of exponential growth because it’s something people so rarely experience in their everyday lives. For example, consider a pond weed that doubles its coverage of the pond every day. It has taken six months since it appeared to spread to covering half the pond. How long will it take to cover the whole thing? Back in the day, I asked GPT-3 this question and it struggled to figure it out, only getting the correct answer after a number of hints. I just asked GPT-4; here’s its response:

If the pond weed doubles its coverage every day and it has taken six months to cover half the pond, then it will take just one more day to cover the entire pond. This is because on each day, the weed covers twice the area it did the day before. So, if it’s at 50% coverage one day, it will be at 100% coverage the next day.

Right in one! There’s exponential growth (in number of parameters and size of training corpus) coming to comprehend exponential growth.
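The arithmetic GPT-4 just did, along with John’s earlier factor-of-32 extrapolation, checks out in a few lines of Python:

```python
# GPT-4's pond-weed answer, checked by brute force: half covered today,
# doubling daily, how many more days until full coverage?
coverage, days = 0.5, 0
while coverage < 1.0:
    coverage *= 2
    days += 1
print(days)  # 1

# And the Roaring Twenties factor: doubling every two years for a decade.
print(2 ** (10 / 2))  # 32.0
```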

The other thing about exponential growth is that the exponential function is “self-similar”. That means that at any point along the curve, it looks the same as a small clip of the past curve or when extended far beyond the present point. That means that when you’re experiencing it in action, it doesn’t seem disruptive at any point, but when you look at it on a linear plot, there’s a point where it just goes foom and takes out any limit you might have imagined as being “reasonable”: one day the pond is half covered and the next day it’s all weed. One year AI text generation is laughably bad and the next year advertising copy writers and newspaper reporters are worried about losing their jobs.

Now, assume we continue to see compute power double every two or three years for the rest of the decade. What are the next things that “only a human can do” that are going to be checked off purely by compute speed and storage capacity?

Rudy. Well said, John. Great rap. Even so, I’d like to think that we have access to divine inspiration, or promptings from the Muse, or emotive full-body feelings, or human empathy. And that a studious parrot program might not be able to generate a full spectrum of human outputs.

As I mentioned before I’m encouraged by Turing’s and Gödel’s results that we have no short algorithms for predicting what we do. This is a point that Stephen Wolfram hammers on over and over under the rubric of “computational irreducibility.”

Even though we start with a fairly simple configuration and a simple set of transformations, over time our state appears to be inscrutably complex. The classic example of this is balls bouncing around on an idealized billiard table. After a few days the system seems all but random. It’s the result of a deterministic computation, but our brains don’t have the power to understand the state of the system by any means other than a step-by-step emulation of all the intervening steps.

When I’m writing a novel, I never know exactly where I’m going. I’m “computing” at the full extent of my abilities. And I have to go through the whole process a step at a time.

It would be interesting to have machine processes capable of writing good novels, but, due to computational irreducibility, there’s no quick way to winnow out the candidate processes, no easy way to separate the wheat from the chaff.

The edge we have here is that we spend years in computing our eventual creative works, our system running full-bore, day after day. Full-court-press, with countlessly many mental simulations involved.

Another factor on our side is that we are not mere computing systems. We have physical bodies, and our bodies are embedded in the physical world. I get into this issue in my recent novel Juicy Ghosts.

In my novel, humans are immortalized in the form of lifeboxes, as discussed in my tome, “The Lifebox, the Seashell, and the Soul.” A lifebox is a data base and a computational front end that live in a large company’s silo or cloud. Marketed as software immortality. But to have a fully lifelike experience, you want your lifebox to be linked to a physical flesh-and-blood peripheral in the natural world.

Computer people are prone to equating the self and the world with a type of computation. But one has the persistent impression that a real life in the real world has a lot more than computation going for it.

The adventure continues…

Journey to the East

Saturday, November 4th, 2023

Journey to the East! I’m gonna spend four nights with Mike Gambone in Nashville. Mike and I taught at Randolph Macon Woman’s College, sharing careers as low-grade malcontent academics. After Mike, I’ll visit Greg Gibson in Gloucester, my college roommate at Swarthmore, and a fellow writer and eternal Zen mind-assassin. Then on to Providence to stay with Paul Di Filippo, a protean SF writer. We’ve collaborated on six tales. To wind up the trip, I’ll hit good old Manhattan, meeting up with two agents, and my old pals Roger Shatzkin and Eddie Marritz from college and grad school days.

Nashville

Haven’t seen Mike in 37 years. He was my best friend in Lynchburg, along with Henry and Diana Vaughan, both deceased. Mike played bass sax in our punk Dead Pigs band.

I made it out here, amazing. I was so stressed about packing and arranging. Mike Gambone has been hanging out with Mary Abrams here for the last couple of years. Separate dwellings. Mike was, as I say, about my best friend in Lynchburg, and Mary was a good friend there too, the wife of Dave Abrams. They were in our circle of “townie” friends, friends separate from the Randolph-Macon-professor crew. Not that the circles didn’t overlap.

Such a long time since I saw Mike and Mary, so much to catch up on. Bittersweet, to look back on the happy times of yore, with whole lifetimes between then and now, and the “now” so rife with death. All three of us lost our spouse in the past decade.

Hard at times to come up with conversation but, if you wait it out, or push a bit, the anecdotes spill forth: the reminiscences and the fill-ins. I’m here for four nights which, on the one hand, seems long, but, on the other hand, is kind of fun. More time for the grass to grow.

So many nuggets of fresh gossip that I could share with Sylvia. I want to get off by myself and talk aloud to Sylvia’s ghost for an hour, filling her in. But, Rudy, the whole point of this trip is to be talking not to a ghost, but to live people. And I can, after all, talk quietly to Sylvia in my bedroom. The mumbling of the mad old man.

The second night we went to a free walk-in bluegrass jam at a small hundred-year-old building called Station Inn. Surrounded now by gleaming 21st C condo buildings and shops. Mary was anxious about getting good seats, and we arrived about an hour and a half before the doors opened, Mike and I teasing her a little, but not too much. She only wanted the best. To pass some of the time, we hung out in a new hotel lobby and at the bar of a chichi new restaurant. Finally some other people showed up at the venue, and we were first in line.

We three streamed the Altman movie Nashville the day before to gear up, and the jam did at times feel as if we were embedded in the platonic Nashville of the film’s myth. Better, actually. Realer.

Thirty musicians at the start, and they winnowed down to about eight. Very highly skilled, all of them. Certainly at a higher level than the bluegrass jam players I go to see nearly every week at the Roaster coffee shop in Los Gatos. The vocals were loud, crisp, and authentically accented.

Some of the songs brought tears to my eyes, especially, “Once more / I’d give a fortune / If I could see you / once more. / Forget the past, this hurt can’t last / Oh I don’t want it to keep us apart.”

I’ve always viewed this song as being addressed to a woman who’s died…and I still view it that way, thinking of Sylvia. But, upon looking up the lyrics, I see it’s more about a broken-up relationship. Well never mind that. Let’s say the girl is dead. Death is the big breakup.

Tuning in on this deeply moving song, I flashed on the plot of a new SF story, where I pay a fortune to see Sylvia again. Like the hero of Robert Sheckley’s immortal story, “The World of Heart’s Desire,” which I read in a Canadian prison cell years ago.

Anyway in my new story, maybe he sees the Sylvia character and at first she pretends it’s not her, or maybe it’s right away the reborn Sylvia character, authentically her, but she’s rushed and flustered, and she tells the Rudy character, “He’s gypping you, Ru, he wants to get all your money, don’t pay him! Claim you didn’t see me. Save the money for the children.” And then she’s gone.

[This painting, by the way, is my illustration of how well e-dating works.]

And then Act II, Rudy manages to welsh on the deal, but now collector hoodlums are after him. And when he’s cornered, Sylvia shows up on her own and saves him.

Act III she takes him to the afterworld which is like a lava tunnel-tube in Fiji, only it’s right here in the subdimensions.

We heard another great song too, with the haunting chorus: “She’s gone, but I don’t worry. / I’m sittin on top of the world.”

And yet another, “Trouble in Mind,” about the guy wanting to lay his head on the track of two-forty-nine, and having it give satisfaction to his mind.

Closer: Mike and Rudy at Nashville’s concrete replica of the Parthenon. It appears in Altman’s Nashville.

Gloucester

Up with Greg and Annemarie now. Big family group, their kids Celia and Brooks, also Celia’s three-year-old daughter Pepper. So cozy. I almost broke down, talking to Annemarie. I’m so grateful to be with this familiar old family. I’m so lonely for my lost Sylvia.

I’ve been looking at old photos of Sylvia on my phone again. My Flickr “Sylvia’s Life” album. She’s so solid, so real, so happy, so distant. I stupidly torment myself about our house not being as well fixed-up as Mike’s and Greg’s houses…all the upgrades I never did. But seeing these photos: how happy she was, so much of the time…it’s not all about a new kitchen countertop.

Greg and I right away fall into our old bantering style, so quick, like playing handball, the words whizzing back and forth, with rapid back-references woven in, connecting the parts of the exchange, plus links to episodes of yore. The readiness of the friendly insults, the deep long-term sympathy. Like it never stopped.

Walking down to the rocks by the ocean with Annemarie and little Pepper, ah. I remember a black-and-white home movie that Sylvia’s parents showed us, of three-year-old Sylvia playing on a beach, hair blond, long, and curly. Smiling and walking. The wheel of time. Little Pepper an echo of Sylvia, coming around again. Life forever arising from death. “A commodious vicus of recirculation” as Joyce wrote.

Greg just now back from a 20-mile bike ride. His new thing…he's been doing it for about four years, increasingly long and rigorous training. Lectures me in that know-it-all way of his about how great his physical condition is—some particular number—and how he's going to live forever. Greg's always been like this. In the old days it was about his jogging regimen. I figure my sober lifestyle has got to count for something, health-wise.

The next day Greg and I joked about this. “I’ll dance on your coffin,” from each side. Also the agreement that it wouldn’t be fun either way. That’s a nice thing with him; we can say anything.

And let me mention in passing that Greg and I are admirers of the wonderful poet Anselm Hollo, whose mammoth Complete Poems recently came out. My first Anselm Hollo book was a pamphlet called Sensation 26 that Greg mailed me during my early years of bitter struggle in Lynchburg.

Also should mention that our Gregory Gibson is a successful dealer in rare maritime books…check his Bookman’s Log.

Greg’s daughter Celia told me a cute remark that her little Pepper made when she saw the sun setting over the ocean. “The sun is taking a bath.”

Didn’t sleep well last night. Woke up about three am and looked at a hundred more photos of Sylvia on the phone.

Now it’s afternoon and I dropped off for a nap on the extra bed I’m using, in Greg’s office. If possible I tend to nap every day. Had a big lunch out with Greg’s kids Brooks and Celia, also another bookman. That guy was coughing a lot. After the meal I told Greg his friend was “hideously diseased” and he was outraged/amused by that. Greg said, not inaccurately, that I was dissing his friend because I didn’t like to share Greg.

Greg and I stopped and walked on the beach by Salt Island. We’d walked there at dawn about forty years ago, after he and I had been up all night drinking and smoking pot. The roar of the ocean seemed so timeless, both that time and now. Back then I felt like we were in paradise, or in a higher world; I’ve always remembered it as a magical experience.

And today I insisted on stopping there, and Greg didn’t want to, but I was like, “I’ll be gone forever tomorrow, why not do this for me.” So he mock-grudgingly gave in.

And it was very nice, in some ways the same as before, that sense of suspended animation, a merging into the cosmic one. Greg and I in heaven, or nearly so, grizzled and old in the Elysian fields. Perhaps on that earlier walk I was synching with the walk of today. In the higher world, then and now, all is one. I told Greg about this, and he got it. “I can see that you feel this with every fiber of your being.” He really is very dear to me.

Back home, Greg went for another one of his monster bike rides, and I played pat-a-cake with little Pepper. And then I did “This little pig went to market” with each of her bare feet, and then with each of her hands. I love that loud, hoarse final “wheenk, wheenk, wheenk, all the way home.”

One more thing about Greg is that his and Annemarie’s son Galen was the victim of a school shooting in 1992. Greg published Gone Boy, a chronicle of his search for understanding in the aftermath, and it was something of a best-seller. This year, having published several other books, Greg returned to the fight with what he terms a “calm, demented” novel about school shootings: Mooney’s Manifesto. It’s hypnotic, strange, and perhaps capable of altering the discourse.

Went out for dinner with Greg and Annemarie in nearby Rockport on the last night, and the restaurant crowd was like a white sheet of paper, so New England, so Norman Rockwell. I never feel like I’m that white.

“A symphony in pink,” as dear Sylvia once said about me, sitting in bed next to me, propped up, reading, reaching over to pat my head. “There, there, Ru. Don’t worry.”

Providence

Took the train to Providence RI, today. Had to change in Boston; I was worried about the details, but it all worked out. My affable pal Paul Di Filippo and his kind wife Deb Newton met me at the station, great to see them.

Much talk of our missing Sylvia. I half expected them not to believe that she’s dead, or even to deny it, but they accept that the bad news is true. More proof, however indirect, that she’s really gone.

I’ve been here I think two times, once alone, and once with Sylvia. Great talking about SF with Paul, and about art with Deb. Paul took me to the RISD (Rhode Island School of Design) art museum, quite entertaining. A mummy, a giant wood Buddha, an excellent Matisse, some amusing old masters, and, by way of furniture, a sewing table with a hinged lid that makes it into a sphere.

Paul and I agree that our most recent joint story is our best, the Lovecraftian “In the Lost City of Leng.”

Paul has a new multiverse novel Vangie’s Ghosts coming out in February. Vangie is a cross-verse traveler who teeps or merges with multiple selves. This is akin to the classic multiverse “golden thread” that allows thread-to-thread causality, that is, the link is initiated by one particular thread rather than being a cross-verse link like two hands reaching out. More on this in another post—lately I’ve been thinking about the multiverse a lot.

Deb made the best apple pie ever for dessert. The crust includes a small amount of the normally-despised Crisco. From the first James Beard cookbook. I’m going to try and find it.

One day in Providence it was pouring rain all day. Deb said she was sorry I didn’t have good weather, but I truthfully assured her that I love it. Haven’t seen proper rain since last spring. Torrents, sheets, gut-buckets of rain. And the near-to-overflowing Seekonk River churning along. What a name!

Paul showed me Slater Mill, which was the first automated textile mill in the United States. Its founder, Samuel Slater, had worked in English mills and memorized, or wrote down, the details of all the mechanical designs. Coming back to Rhode Island, he replicated them and thus began the industrial revolution in America.

Lots of talk all day about Paul and Deb’s friends and relatives. How Sylvia would relish this. By way of holding up our side, I told some stories about our kids when they were little. Doing this for Sylvia’s sake.

Soon thereafter, Deb started talking to me about what a great artist Sylvia was, admiring Sylvia’s quilts as shown on the web page I made of them.

“The colors are amazing, Rudy, where did she find them? And the designs, none of them is standard, they’re all original, and with such interesting shapes, the edges not quite parallel, and lively, wonky gaps between the blocks. Mathematical almost. So utterly creative and fresh. I’ve never seen quilts like these.”

It would be such balm to Sylvia’s soul to hear this. And it means a lot, given that Deb is a famous knitwear designer. Let’s suppose Sylvia was listening through my ears.

We got together for lunch with their friend Don D’Ammassa and his wife Sheila. Don is a book-hoarder friend of Paul’s who has two large low buildings outside his house, overstuffed with some 60,000 volumes, mostly SF. Paul and Don met via an SF fan group back at Rhode Island College. Inside Don’s warehouse, I got him to show me my hardbacks in one of the rows.

Paul himself has about 15,000 books in his house, on broad steel shelves that are filled edge to edge, front to back, with books lying on their sides. 3,000 of them in the spare bedroom with me. All SF. I’m glad they don’t have quakes here.

Paul says one million different book titles are published each year.

Cozy to be with Paul and Deb. Such a pair, completing each other’s sentences, bouncing suggestions back and forth. Two peas in a pod, as the kids used to call Sylvia and me.

New York!

Great to be here. So polyglot…everyone talking a different language. Russian, Hebrew, Chinese, Polish, Spanish, Yiddish, Hindi, Japanese, Italian….

The Library hotel isn’t quite what it used to be. Well, mainly Sylvia isn’t here, so I’m touchy. Such a strong sense of her presence, her voice, her scent, her happiness to be in NY, her cute outfits, her keen observation of our surroundings.

For supper I took the bus down 5th Ave to get to our old fave restaurant, L’Express, at 20th and Park. Walking crosstown, I could feel her walking beside me, felt it so clearly, and saw her extra-big “I’m on a good outing” smile.

Great food at L’Express as always, although I had to work for it. I clamored for a good seat in the back of the room and then dared to send back my bouillabaisse as, being Sunday, it was old and fishy. Replaced it with a nice little hanger steak which was, however, too raw to cut or chew, so I sent that back for a bit of extra grilling. The waiter didn’t mind. I’m guessing he’s used to fussy New Yorkers.

In Bryant park by the library this morning I had a sense of being telepathically aware of many people at once, the New Yorkers, each with their lusts and hopes and worries and plans. Individual neurons in a great hive mind. What if I, as neuron, snap into awareness of the whole mind? SF tale.

At this point we switch into photojournalism mode. I didn’t write many journal notes in NYC, but I took my Leica Q2 or my Pixel 7 Pro with me everywhere, and snagged a bunch of good shots. And I’ll fill in a couple of lines about each one.

S. J. Perelman is one of my favorite comic writers—and here’s his typewriter at the New York Public Library. I still have the pink/red IBM Selectric typewriter I used to write many of my early books. Can visualize it in some future spacetime case.

Had lunch at the classy Century Club with my agent John Silbersack. Chatty and engaged, he’s an old friend by now. But not much encouragement on the future-publications front. That line I’ve been hearing from agents since the 80s. “Things have never been worse.” And for me in particular, there’s the factor of, well, ageism. It’s a real thing, and not without justification. The new generations want a chance. I mean, are you going to buy the latest Stones album?

I made a point of going to the good old Met, searching out a section that Sylvia liked: it’s their lesser-known modern and contemporary art collection, that is, from 1890 on. It’s on the bottom floor, way back and to the left. Sylvia’s fave was a room-sized mural by Thomas Hart Benton, “America Today”, from 1932.

Closing time came sweeping in and, while racing around the Met’s really quite large modern collection and, as you kind of see, literally sweating it, I came across, oh joy, the vintage 4D cube crucifixion, Salvador Dali’s Crucifixion (Corpus Hypercubus).

After the Met I had dinner at a chic spot on upper Madison Avenue with Henry Wessells, the man who sold my fairly massive literary archives (sixteen banker boxes) to UC Riverside a couple of years ago. He’s an interesting, free-wheeling character with lots of good tales. And by selling the archive he got me the best “book deal” I ever had. See Henry’s eclectic blog/webpage The Endless Bookshelf as well. He finds his way down an amazing number of rabbit holes.

I noticed the nearly full moon above the skyline, nice shot. Every month when the full moon rolls around, my family and I think of Sylvia’s death on January 6, 2023, which was very near the date of a full moon. Over a year, the full-moon-day of the month doesn’t change all that fast; I’d never realized that.

Dear Eddie Marritz turned up and we went to MOMA together. Eddie’s a cinematographer, now largely retired, and lives with his wife Hana Machotka upstate. We go back to 1967, with newlywed Sylvia and Rudy attending grad school at Rutgers in New Brunswick, New Jersey. Eddie and I used to smoke pot together a lot. And the same mindless, irrepressible merriment and deep meanings remain.

One of the lesser-known Dali ant paintings. Even now the higher critics rag on Salvador, but—hey. Ants, man.

MOMA deco poster for an aperitif called, simply, R. Love it.

I love these types of lines that mathematicians and physicists call caustic curves. Areas made bright by the physics of the angles in a sheet or in a volume. On a sunny day you see caustic curves on the bottom of a swimming pool. “Caustic” means “burning,” and the idea is that the lens-like focusing of the water makes these bright lines that, having more light, are hot. Like the way you can incinerate ants with a pocket lens. And you see something like them on the ripply surface of a sheet of plastic used in place of glass on an inexpensive picture frame. Actually I’m not really sure if these curves are called caustics. Very mysterioso in my dim room.

Something Sylvia and I liked to do in NYC was to ride on the ferries that run up and down Manhattan, also touching on Queens and Brooklyn. Only costs about $5 for a ride, and there’s no obnox narration like on a Circle Line tour. Not so easy to find the launch dock near East 3rd St.

Classic view of lower Manhattan. I once heard a really interesting idea from the scientist Christopher Langton, talking about cellular automata and artificial life simulations. Sometimes we think of cites as scars on the face of Nature, ugly messes made by the wicked human race. But, come on, we’re a living species too. And cities are our nests! And wondrously intricate. Standing back like this, I was staggered by the sheer scale of the flows keeping New York going. Food, electricity, communications, fuel, and building supplies coming in…sewage and waste going out. Awesome.

Chic NY lady ready to board. I love how New Yorkers look.

SF-nal conflation of huge 21st C ground zero skyscraper and St. Paul’s Chapel from 1766. Nice to chill in the church’s old graveyard amid the city’s immense throb.

Riding under a bridge, a sky of moving machinery, with tufted clouds to match, plus a cute Italian tourist.

At first I had a less expensive single-bed room, but for the last two nights I had a deluxe suite. Sylvia and I never made it into a suite and, of course, I regret that we didn’t. How she would have loved it. But, again, thanks to that half-feeling of her being present, she was seeing the fancy room through my eyes. And even if I’m kidding myself about that, the treat was a way of celebrating her memory.

On my last night, my Swarthmore classmate Roger Shatzkin showed up from Jersey and he slept on the extra bed in my suite. Wondrous view up Madison Ave, the street a long isosceles triangle decked with a swarm of red taillights, ever closer together as they verged towards an unreachable vanishing point. Roger said it was like a Christmas tree.

So awesome.

Roger and I went to the Grand Central Station Oyster Bar; I’m glad he came along, I think I would not have enjoyed doing that one alone. The Oyster Bar was a touchstone for Sylvia and me. This time the guy next to us was screaming at the top of his lungs, and so was his date. Screaming like crazy people.

Getting fully into the pushy New Yorker thing, I walked to the front desk and asked to move. The guy goes, “Every table is loud here.” I go, “I’ve been here lots of times. The guy next to us tonight, he’s different.” The front guy shrugs, and we get the new table. And I got the best oysters I’ve ever had, big ones from Massachusetts. If not the best ever, the best since the last time I was there, which was on the farewell tour with Sylvia in 2021.

Cute and funny to find Roger in my room in the morning. Harking back to Swarthmore 1963. Looking out our window we see two window-washers on a scaffold lowered down from the roof. Working really fast. If you think about it, they must have to wash a skyscraper really often, maybe even doing a section every day. Like the lawn of a huge mansion. I mean, clearly the New Yorker occupants of these lofty perches aren’t going to settle for dirty windows!

Here’s Roger at the Whitney with me; we do that visit together every few years, used to have Sylvia along, of course. I saw one of my fave paintings, Door to the River, by Willem De Kooning, part of their permanent collection.

Also a huge, wonderful work by Jackson Pollock’s wife Lee Krasner. Her first one after Pollock died in a drunken car accident. She said she’d wondered if she’d paint again and then, “Well, I did.”

As it happens, Roger and I had streamed the movie Pollock on my laptop the night before. At the end, Roger remarked, “I have to say I have no iota of sympathy for the guy.”

I like the bit in the movie where Pollock is losing it—a guy has been filming him for a month and it drives Pollock up the wall, and he resumes drinking whiskey (which he’d given up) and now he’s drunk and he keeps leaning up right next to the ear of the filmmaker and saying over and over, in a tense whisper: “I’m not a phony, you’re the phony. I’m not a phony, you’re the phony.”

As an outrider cyberpunk author I know this feeling very well! I’m so glad I don’t drink like Pollock anymore.

[Portrait photo by Bart Nagel.]

What a long and eventful trip it was. I always worry about the transportation and the transitions, but it’s just one step at a time. Everyone was really nice to me. Mike and Mary, Greg and Annemarie, Paul and Deb, Eddie and Roger. I told the story of Sylvia’s death to each of them. Good for me to keep excavating that, airing it out, keeping it real, assimilating it, coming to terms with it.

I’ve mentioned that I often feel as if Sylvia is still with me—and I don’t expect that feeling to go away. But in the most literal sense, she’s gone. The trip made that clear.

Even so, we two had a long, wonderful run.
