Copyright (C) Rudy Rucker 2021.

ChatGPT Writing. “Skinks in Rubber Flight.”

January 27th, 2024

This describes an experiment with ChatGPT that I made on Jan 17, 2024. I mention “skinks” a lot, so just below you’ll see a painting with a bunch of skinks; they’re the guys in the center. I’m writing about a couple named Oliver and Carol, and about their adventures with two skinks named Jenna and Kanka, who have unexpectedly appeared in San Francisco just a few years from now.

Skink painting entitled “Farmer’s Market.” 

Anyway, I’ve been getting curious about ChatGPT, so I finally went online and gave ChatGPT 4.0 a try. I was urged on by two of my fellow Computer Science professors from my years at San Jose State: Jon Pearce and Michael Beeson. I took about five hundred words of my novel in progress, and fed the chunk to ChatGPT.

Dr. Pearce advises Dr. Rucker in the Red Room of Santa Cruz.

A ChatGPT “prompt” can in fact be quite long. So you really can use a chunk of your writing. The passage, which I show below, describes a scene where my characters Oliver and Carol are running away from the SFMOMA museum after these skink friends of theirs have eaten a whole bunch of Andy Warhol paintings.

With painter friend Paul Mavrides, admiring the Warhols.

I’m not yet sure what the title of the novel will be, or if I’ll even finish it. After the end of my experiment with ChatGPT today, I asked it what the title should be, and based on the passages the app had seen, it made a few suggestions, including: “Skinkland Odyssey,” “Warhol’s Whimsy,” “Skinks in the Canvas,” “Pop Realms,” “Artmover’s Dilemma,” and “Surreal Serendipity.”

Chinese edition coming out!

Not bad, although none of them is perfect, and as the novel grows, I’ll need a broader title. I wondered if I could feed in all six thousand words of the novel that I have, but ChatGPT doesn’t really like a prompt that’s more than five hundred words. When I tried feeding in all six thousand words, the app crashed, but it did display a pretty cool phrase as a label for the crashed conversation. The phrase was “Skinks in Rubber Flight.” Now to me that is a good title, and it’s cool that it comes from a crash, which gives the title a meta and transreal significance.

Frankly I’d been thinking the hype over ChatGPT is a bunch of bullshit. But after this test, I’m staggered and flabbergasted by how well ChatGPT works. As Beeson put it, the method of “choose the next word based on the odds” seems, on the face of it, like a pedestrian and uninteresting approach. But now—presumably thanks to the vast size of the neural-net large language models in use—it’s undergone a phase transition.

And by large language model, or LLM, I mean an immense simulated neural net that was evolved by Earth’s mightiest computers. The net consists of a zillion nodes with numerical weights along the lines connecting the nodes. Nobody understands, or can understand, the meaning of the individual weights. As Kurt Gödel proved back around 1930, understanding these weights is, even in principle, impossible. But our tech titans calculated the weights by a somewhat opaque process of simulated evolution.
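To make the “choose the next word based on the odds” idea concrete, here’s a toy sketch—emphatically not how ChatGPT itself works, just the bare bones of weighted next-word sampling. The corpus and all the word counts here are invented for illustration.

```python
import random
from collections import Counter, defaultdict

# A tiny made-up corpus, standing in for the vast training data of a real LLM.
corpus = "the skinks eat the paintings and the skinks fly home".split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def next_word(word):
    """Pick a successor word with probability proportional to its count."""
    options = follows[word]
    return random.choices(list(options), weights=list(options.values()), k=1)[0]

# After "the", the odds are 2/3 "skinks" and 1/3 "paintings".
print(next_word("the"))
```

A real LLM replaces the bigram counts with a gigantic neural net that conditions on the whole preceding passage, but the final step—sampling the next word from a probability distribution—is the same in spirit.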

As a proud writer, I assumed ChatGPT could never write anything colorful or interesting. That it could only churn out bland pap akin to political speeches. But, no, the writing isn’t bad, and it’s got originality, and a bit of verve. Although still, I would say, not much humor.

For me, the crushing thing is that, when I went ahead and ran ChatGPT twice in a row, feeding it the same segment of my novel each time…it wrote two utterly distinct continuations. It could go on writing new continuations all day long.

Somehow this is very deflating. It makes the ground shift beneath my feet.

But I don’t think I’ll often use ChatGPT to help with my fiction! Too much like having a know-it-all full-of-ideas roommate who won’t shut up.

I may yet find a way to get the better of ChatGPT. Some meta parodistic SF trope. Wait till the new novel’s done. Not that, as I say, I’ll use ChatGPT to write it. For now, for sanity’s sake, I’ll return to scorning it from afar. Bitter mockery from the soon-to-be-laid-off craftsman.

You’ll find me alone with my word processor in the coffee shop or my home office or my living room couch or the town library, same as usual. What else can I do?

Daughter Isabel’s jewelry saw

Rudy’s Prompt

Extend this passage in the style of Rudy Rucker. (Note that the following is the most recent part of the novel I’m writing, and that I pasted this right into the prompt.)

The hall is glass on one side, with a patio out there. We’ve reached a side door. Our skinks snout the doors open. It’s a nook with no exit, a dead end, but whoops we’re rising into the rainy city sky.


“Hard to hold on,” I wail and, as if understanding me, Kanka enlarges himself and tightens the gaps between his segments, forming a rubbery arc. Like a tyke’s bouncy caterpillar toy and, yes, Jenna has done the same for Carol.

We’re low above the city, not much higher than the buildings, rain pattering into our faces. We’ll need new clothes again. Assuming we make it back to the Box Farm instead of being whisked off to Skinkland or some such. No emergency helicopters in sight as yet. A plus.

The skinks are happy and fooling around, flying circles around each other, comfortably full of chewed Warhol masterpieces. What would Andy think? He wouldn’t like it. What artist would?

We’re flying side by side, slower than before, away from the buildings, right down near the water of the bay. Essentially invisible. Not especially feeling any good luck. I can see the Box Farm.

“Wait!” exclaims Carol. I’m noticing that she says that a lot. She hammers on the side of Jenna’s rubbery bod. “Stop flying. I want to talk to you two.”

“Give us nice dry clothes,” I tell the skinks. “Just switch them out. With waterproof yellow rain gear and waterproof hats.”

“Not yellow,” says Carol. “Gray. Secret mission. We’re going back to the museum.”

My clothes shift and crawl. I’m dry. And not quite cozy yet, but getting there. The rain patters on my impermeable yellow oilskins. I feel my head. The top hat is still in place. Glad for that. My personal logo. Like the villain in a superhero comic.

Carol’s got the gray gear, in a fashionable cut, and a big-brimmed waterproof high-fashion beach hat.

“Topper and Lady Gray,” I say.

“What?” goes Carol. “Never mind. The plan, Oll, the plan. We go back and replace the Warhols. The skinks can grow new ones. The old ones they ate—those are like Flybuzz prompts.”

“Hello Infinity”. I drew this while studying higher infinities for Math Ph.D. in grad school at Rutgers, around 1970.

The skinks chirp and twitter, digging on this idea. Somehow Carol understands them. I don’t. Face it, I’m full-on deaf and senile. No use even talking to me, most of the time.

“What?” I say.

ChatGPT Continuation Run One

[So now, the rest of this post is completely fiction written by frikkin’ ChatGPT. Can hardly believe this.]

“Oll, you’re hopeless,” Carol mutters, rolling her eyes at my perpetual state of confusion. “Never mind. The skinks are on board with the plan. We’ll get those Warhols back where they belong.”

The Carol character is to some extent modeled on my Sylvia.

Our skinks, now wearing their own waterproof gear, emit a series of enthusiastic chirps, indicating their readiness for this peculiar mission. As we descend towards the Box Farm, I can’t help but marvel at the rain-soaked city below, its shimmering lights diffused through the falling droplets. The landscape is a canvas of reflections, a Rudy Rucker dreamscape.

Back at the Box Farm, we find ourselves in the familiar workshop, surrounded by the comforting hum of machinery. The skinks, with a strange kind of precision, begin regurgitating Warhol-inspired artworks. It’s an odd blend of the avant-garde and reptilian creativity, a surreal manifestation of collaborative artistry.

“Now, Carol,” I say, trying to focus on the task at hand, “how do we get them back to the museum?”

She holds up a device resembling a cross between a walkie-talkie and a psychedelic kaleidoscope. “This is the Artmover 3000,” she declares. “It creates a temporal fold, allowing us to transport these artworks back to their rightful place without anyone noticing.”

I nod, pretending to understand the intricacies of temporal folds. Carol activates the device, and a shimmering portal appears before us. The skinks, now wearing tiny time-traveler goggles, gleefully carry the Warhols through the portal.

“Follow me, Oll,” Carol says, leading the way into the temporal fold. As we step through, the familiar sensation of being stretched and compressed envelops us. We emerge on the quiet outskirts of the museum, unseen and unnoticed.

Carol takes a deep breath, her eyes gleaming with a mischievous spark. “Let’s hang these Warhols back where they belong and leave the art world none the wiser.”

And so, with skinks in tow, we embark on our surreptitious art restoration mission, a quirky adventure in the rain-soaked city of dreams.

The museum, now shrouded in the quiet hum of the temporal fold, is eerily tranquil. Carol, Jenna, Kanka, and I sneak through the shadows, blending into the backdrop of exhibits. The skinks, their time-traveler goggles gleaming in the low light, carry the regurgitated Warhols with an air of reptilian pride.

“Quietly now,” Carol whispers, gesturing towards the wall adorned with the vacant frames that once held the iconic artworks. We approach the blank spaces, and with the stealth of conspirators in a cosmic caper, we hang the Warhols back in their designated spots. The skinks chirp in what seems like approval.

As we step back to admire our handiwork, a sudden flicker in the air catches my attention. The temporal fold reappears, pulsating with energy. Out steps a trio of beings that can only be described as futuristic art curators—floating entities adorned in chromatic robes, their faces obscured by holographic masks.

“We are the Aesthernauts,” one of them declares in a melodious voice that resonates with echoes of the far future. “We’ve detected a temporal disturbance in the art continuum. What manner of beings are you, meddling with the fabric of creative expression?”

Carol, undeterred, steps forward confidently. “We’re just ensuring that these Warhols find their way home. No harm done.”

The Aesthernauts exchange glances, their holographic masks displaying intricate patterns of abstract art. After a moment of silent contemplation, they nod in unison.

“Very well, art dwellers. Your intentions seem pure. We shall allow this temporal anomaly to close, but be warned—tampering with the art continuum has consequences.”

As the Aesthernauts step back into the temporal fold, it collapses behind them, leaving us in the quiet embrace of the museum. Carol turns to me, a mischievous grin on her face.

“We did it, Oll. The Warhols are back, and the Aesthernauts have granted us their surreal blessing. Another triumph for Topper and Lady Gray.”

And so, in the enigmatic halls of the museum, amidst the echoes of the temporal fold, our unlikely quartet celebrates a victory that blurs the boundaries between reality and Rudy Rucker’s whimsical imagination.

As we revel in our victorious return of the Warhols, a new twist unfolds within the museum’s surreal corridors. The Aesthernauts’ presence has left an indelible mark on the space-time fabric, causing the artwork to come alive with pulsating energy. The once-static Warhols now exhibit dynamic, animated scenes—Warholian visions that stretch the boundaries of reality.

Jenna, always the inquisitive skink, nudges one of the artworks with her snout, and to our amazement, the scene within the frame spills out into the room. We find ourselves amidst a lively, abstract dance of colors and shapes, a living manifestation of Warhol’s pop art philosophy.

“What have we stumbled upon?” I marvel, as Jenna and Kanka join the animated dance, their rubbery bodies undulating in harmony with the vibrant artwork.

Carol, with her newfound ability to understand the skinks, interprets their joyful chirps. “The Aesthernauts have bestowed upon these Warhols a touch of the fourth dimension. They are now windows into alternate realities, portals to worlds only hinted at in the mind of Andy Warhol.”

Excitement courses through us as we explore these dynamic canvases, stepping into one artwork after another. Each frame leads us to a different surreal dimension—a pixelated dreamscape, a neon-lit cyberpunk city, a psychedelic wonderland reminiscent of a cosmic carnival.

However, the Aesthernauts’ warning echoes in the air. Our meddling with the art continuum has consequences, and the boundaries between the real and the surreal begin to blur even further. Time itself becomes fluid within the animated frames, and we find ourselves navigating through Warhol-inspired realms where past, present, and future intertwine.

As we venture deeper into this kaleidoscopic adventure, the line between observer and artwork becomes increasingly blurred. The Aesthernauts’ enigmatic presence lingers, hinting at the possibility of more unpredictable encounters and mind-bending twists in our quest to navigate the ever-shifting tapestry of art and reality.

[And by the way here’s another painting with skinks. I kind of painted them by accident, in December 2023 I think, and that’s how all this started. Also, ChatGPT has the mistaken impression that my skinks are reptiles although, granted, real-world skinks are a kind of lizard.]

ChatGPT Continuation Run Two

(So, okay, this is ChatGPT’s second go at extending my story. And this go is completely different! ChatGPT doesn’t bother repeating itself. It uses a fresh round of random choices and comes up with something that’s totally different yet always internally consistent. Like I said, I’m flabbergasted. I heard the hype and I didn’t believe it. But, my dears, it’s true.)

As we soared through the rain-soaked air on our skink steeds, the city lights blurred into a kaleidoscope below us. Carol, clad in her fashionable gray gear, clung to Jenna with a mix of fear and exhilaration. I, in my yellow oilskins, gripped Kanka tightly, my top hat somehow staying put despite the gusting winds.

“Look, Oll!” Carol shouted over the rush of the wind and rain, her voice tinged with a blend of dread and wonder. “The city… it’s like a circuit board, and we’re the rogue electrons!”

I couldn’t help but laugh, the absurdity of the situation mixing with the adrenaline pumping through my veins. The skinks, seeming to sense our mood, accelerated, zipping through the air with a playful agility. Below us, the city was a twinkling grid, indifferent to the bizarre spectacle above.

The two Rudys.

As we neared the Box Farm, our destination seemingly chosen by some unspoken agreement, Kanka and Jenna began to circle, descending in a tight spiral. The Box Farm, a cluster of container homes and artistic studios, looked like a geometric puzzle from above, its residents blissfully unaware of the aerial oddity above them.

Landing softly on a flat rooftop, the skinks unfurled their bodies, allowing us to dismount. Carol and I exchanged glances, a silent understanding passing between us. We were partners in this surreal escapade, bonded by the bizarre.

“The Warhols,” I said, breaking the silence. “We need to recreate them. But how?”

Carol looked thoughtful, her eyes reflecting the neon glow of the city. “The skinks ate them, right? So maybe, just maybe, they can… reproduce them?”

It was a wild idea, yet in a night filled with flying lizards and art-eating escapades, it seemed almost plausible. We turned to the skinks, who chirruped in response, their bodies shimmering with an otherworldly glow.

My photographer nephew, Embry Rucker III.

Jenna and Kanka began to contort, their bodies undulating in a mesmerizing dance. Slowly, images started to form on their scales – vibrant, pulsating recreations of the Warhol portraits they had consumed. Each portrait was a perfect replica, yet imbued with a strange, living essence.

Unknown random baby playing in a grill

“We need to get these back to the museum,” Carol said, awe in her voice. “But how do we explain this? How do we even begin to return them without causing a bigger scene?”

I pondered for a moment, then an idea struck me. “We don’t explain. We just replace them, silently, like art ninjas. Let the world wonder.”

Grinning, Carol nodded. “Art ninjas. I like that.”

As we began to plan our covert operation, the rain ceased, leaving the city bathed in a serene, misty glow. The skinks, now our unlikely allies in this artistic heist, seemed ready for the next phase of our adventure.

In a world where reality bent and twisted like a Salvador Dali painting, we were about to add our own surreal brushstrokes. Tonight, we weren’t just observers of art; we were its creators, its saviors, its silent guardians. And as the first light of dawn began to creep over the horizon, we set off to restore the Warhols, our hearts racing with the thrill of the unknown.

As we embarked on our mission, the city still slumbered, unaware of the peculiar drama unfolding in its midst. The skinks, with their newly adorned Warhol skins, slithered beside us, a surreal parade marching through the deserted streets. Our footsteps echoed in the empty urban canyons, each step taking us closer to our destination.

[RIP Terry Bisson 1942-2024]

The air was still thick with the aftermath of the rain, a misty haze that seemed to warp the edges of reality. Streetlights flickered, casting long, dancing shadows that played tricks on our eyes. Every sound seemed amplified, from the distant bark of a dog to the soft hiss of the skinks moving alongside us.

“Feels like we’re in a cyberpunk fairy tale,” Carol whispered, her voice barely audible over the sound of our footsteps.

I nodded, feeling the weight of the moment. We were doing more than just returning stolen art; we were stitching the fabric of reality back together, repairing a tear in the mundane world caused by our extraordinary night.

As we approached the museum, the first challenge presented itself: getting inside without alerting anyone. The building loomed before us, a fortress of culture and history. But with Jenna and Kanka, we had more than just brute force on our side; we had the magic of the absurd.

Paintings by Sylvia and by me.

Jenna fluttered up to a second-story window, her body glowing softly in the pre-dawn light. With a gentle nudge, the window swung open, as if welcoming us back. We climbed through, the skinks following silently, their bodies contracting and expanding in an eerie, rhythmic dance.

Inside, the museum was a labyrinth of shadows and echoes. Our footsteps sounded sacrilegious in the hallowed halls, but we pressed on, guided by the faint memory of where the Warhols had once hung.

Reaching the gallery, we set to work. The skinks, understanding our intent, began to shed their Warhol scales, each one a perfect replica of the stolen art. We hung them with care, replacing the lost masterpieces with these bizarre, living replicas. The task was meticulous, demanding our full concentration, but as each portrait found its place, a sense of accomplishment filled the air.

Finally, as the first rays of the sun began to filter through the windows, we stepped back to admire our work. The gallery was restored, the Warhols back in their rightful place, albeit with a secret only we knew.

“Art ninjas indeed,” Carol said, a smile playing on her lips.

We slipped out of the museum as quietly as we had entered, leaving no trace of our presence. The city was waking up, oblivious to the night’s surreal events. As we walked away, the skinks beside us, I couldn’t help but feel that we had touched something beyond the ordinary, a hidden layer of the world where art and magic intertwined.

And as the city came to life around us, I knew that this adventure would forever color our perception of the world. In the ordinary, we had found the extraordinary, and in the mundane, the magical. The skinks, now part of our strange journey, slithered away, disappearing into the waking city, leaving us with memories of a night where reality was not just bent, but joyously broken and beautifully remade.

That’s all, folks!

Rudy Predicted Software Immortality

January 8th, 2024

We keep getting closer to simulating a human being by running a process in a computer. On the street, the dream is that we might get software immortality. And I predicted it forty years ago.

[Photo credit: Bart Nagel]

How does it work? You start with a large database on what a given person was like: what they said and wrote, how they talked, and what they did. This is what I have named a lifebox. Once you have a lifebox, the task is to equip it with an interface that allows people to talk to it. And next we want the lifebox to begin generating original material. The big surprise is that large language models and wares like ChatGPT seem to be making this happen—or at least getting a step closer. So there’s a lot of buzz.
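The two-part recipe above—a store of a person’s words plus a conversational interface—can be sketched in a few lines. This is a deliberately crude illustration of the lifebox idea, not a real system; the stories and the keyword-overlap matching are invented for this example.

```python
# A hedged sketch of a lifebox: stored anecdotes plus a crude
# keyword-matching interface. All the stored "stories" are invented.
stories = [
    "I wrote Software in Heidelberg while on a Humboldt grant.",
    "I talked with Kurt Godel about robot consciousness.",
    "The boppers extract Cobb's software and run it in a robot body.",
]

def words(text):
    """Lowercase a string and strip simple punctuation into a set of words."""
    return set(text.lower().replace("?", "").replace(".", "").split())

def ask(question):
    """Answer with the stored story that shares the most words with the question."""
    q = words(question)
    return max(stories, key=lambda story: len(q & words(story)))

print(ask("What did you discuss with Godel?"))
```

A modern LLM-backed lifebox would generate fresh sentences in the person’s voice rather than merely retrieving old ones, which is exactly the “generating original material” step the paragraph describes.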

It’s not very well-known that, starting in 1980, I was one of the very first people to talk about software immortality. By virtue of hindsight, people like to imagine that these ideas were obvious or “in the air.” They weren’t. The very word “software” was little known.

I developed my ideas in the early 1970s during discussions with no less a man than Kurt Gödel at the Princeton Institute for Advanced Study—and while writing my Rutgers University Ph.D. thesis on mathematical logic. I further refined my ideas of software immortality while studying and lecturing at the Mathematics Institute in Heidelberg, Germany, on a Humboldt Foundation grant from 1978 to 1980.

I wrote my novels White Light and Software in Heidelberg. And while I was in Heidelberg I further developed my notion of a software mind in my nonfiction book Infinity and the Mind, which is still in print from the Princeton University Press.

Infinity and the Mind has some material on my conversations with Gödel about what I was then calling robot consciousness.

The last time I spoke with Kurt Gödel was on the telephone, in March 1977. I had been studying the problem of whether machines can think, and I had become interested in the distinction between a system’s behavior and the underlying mind or consciousness, if any.

What had struck me was that if a machine could mimic all of our behavior, both internal and external, then it would seem that there is nothing left to be added. Body and brain fall under the heading of hardware. Habits, knowledge, self-image and the like can all be classed as software. All that is necessary for the resulting system to be alive is that it actually exist.

In short, I had begun to think that consciousness is really nothing more than simple existence. By way of leading up to this, I asked Gödel if he believed there is a single Mind behind all the various appearances and activities of the world.

He replied that, yes, the Mind is the thing that is structured, but that the Mind exists independently of its individual properties.

I then asked if he believed that the Mind is everywhere, as opposed to being localized in the brains of people.

Gödel replied, “Of course. This is the basic mystic teaching.”

Deep stuff. And the point I’m making here is that the notion of software immortality was not some trite, obvious thing that anyone at all could have written about. I had to dig deep and think hard to work it out. It was new.

This said, I have to admit that the Wikipedia entry on “Mind Uploading in Fiction” mentions some earlier SF about the topic, and yes, it does mention my novel Software. But I want more! I do feel I can say I was the first to write about software immortality in a semi-realistic way that matches our current technology.

Three or four years after publishing Software, I expressly introduced the notion of a lifebox, and I’ve continued writing about this for forty years.

But as I say, I don’t seem to get much credit in popular articles, and usually not even a mention. As fellow cyberpunk Bruce Sterling would say, that chaps my ass.

Today’s post is designed to build up my case. I’ll include extended quotes from my writings, accompanied by comments, and as usual I’ll toss in some random images I’ve made. As I always say, at the surreal, transreal, synchro level, any image goes with any text.

My involvement with lifeboxes goes back to my Philip K. Dick Award-winning novel Software, written in 1980 and published in 1982. A race of intelligent robots called boppers have evolved on the Moon. And the boppers want to confer immortality on their designer, Cobb Anderson. To achieve this, they extract the software from Cobb’s brain—by, in effect, eating it and analyzing the contents. And then they put the Cobb software in control of a robot body.

Here’s Cobb’s reaction when he comes to.

It was still sinking in. Intellectually he had always known it was possible. A robot, or a person, has two parts: hardware and software. The hardware is the actual physical material involved, and the software is the pattern in which the material is arranged. Your brain is hardware, but the information in the brain is software. The mind… memories, habits, opinions, skills… is all software. The robots had extracted Cobb’s software and put it in control of this robot body. Everything was working perfectly, according to plan. For some reason this made Cobb angry.

“Immortality, my ass,” he said, kicking the bathroom door. His foot went through it.

The thing that bothered Cobb the most was that even though he felt like he was all here, his brain was really inside a computer somewhere else. Where?

Suddenly he knew. The Mr. Frostee truck, of course. A super-cooled bopper brain was in that truck, with Cobb’s software all coded up. It could simulate Cobb Anderson to perfection, and it monitored and controlled the robot’s actions at the speed of light.

You’ll notice something odd here. Rather than having his software running on a device inside his body, Cobb’s software is running on a large supercooled computer inside a nearby ice-cream truck! This was my early 1980s substitute for having the process run on a microprocessor in Cobb’s robot body—or for having the software run in the cloud. But for the sake of a good cyberpunk tale, it was kind of cool to have a large, slightly evil ice-cream truck following Cobb around.

In my follow-up novel Wetware of 1988, also a winner of the Philip K. Dick Award, things get gnarlier. Rather than running your personality software on an old-school electronic computer, you run it on something called a Happy Cloak. (This name was, by the way, lifted from a William Burroughs novel.) A Happy Cloak is a chunk of moldy piezoplastic that acts as a big computer. So one way to achieve software immortality might be to transfer your personality onto a Happy Cloak, and then let the Happy Cloak ride around on a meat clone of your old body. And in Freeware and Realware, things get still weirder. You can read all about it in the Wares.

But let’s stick to the basic idea of modeling a human personality with software. I clarified this in my 1986 story, “Soft Death,” which appeared in the Magazine of Fantasy and Science Fiction. I think this is the place where I coined the word lifebox. The main character in the tale, one Doug Leckesh, has signed up with a company called Soft Death—who promise him immortality. He’s in the care of a tech named Lo Park.

Photo of Bruce Sterling here just for kicks. My fellow cyberpunk from the start.

Lo took a device the size of a cigarette-pack out of her desk. It had two little grilles, for microphone and speaker. “We call this a lifebox. Basically, I want you to tell it your life story. Tell everything. It takes most people a couple of weeks.”

“But…I’m no writer.”

“Don’t worry; the lifebox has prompts built into its program. It asks questions.” She flicked a switch and the lifebox hummed. “Go on, Mr. Leckesh, say something to it.”

“I…I’m not used to talking to machines.”

“What are some of the first machines you remember, Doug?” asked the lifebox. Its voice was calm, pleasant, interested. Lo nodded encouragingly, and Leckesh answered the question.

“The TV, and my mother’s vacuum cleaner. I used to love to watch the cartoons Saturday morning—Bugs Bunny was the best—and Mom would always pick that time to vacuum. It made red and green static on the TV screen.” Leckesh stopped and looked at the box. “Can you understand me?”

“Perfectly, Doug. I want to build up a sort of network among the concepts that matter to you, so I’m going to keep asking questions about some of the things you mention. I’ll get back to the vacuum cleaner in a minute, but first tell me this: what did you like best about Bugs Bunny?”

For the next couple of weeks, Leckesh took his lifebox everywhere. He talked to it at home and in the club—and when his wife Abby and his friends reproved him for ignoring them, he began talking to it in a booth at Yung’s bar. The lifebox was the best listener Leckesh had ever had. It remembered everything he told it, and it winnowed the key concepts out of his stories. Leckesh would respond to its prompts, or simply go off on tangents of his own. He hadn’t had so much fun in years.

Leckesh, by the way, is a near-anagram of Sheckley, as in Robert, my hero and mentor from age thirteen on.

I have another treatment of the “dictated lifebox” theme in my book Saucer Wisdom. This was an odd book. It was in fact a novel starring a character called Frank Shook. Supposedly Frank Shook was a frequent saucer abductee who’d arranged for a ghost writer to produce a non-fiction volume of Shook’s experiences. The name of the ghost writer character? Rudy Rucker. I told you this was an odd book!

To make things even odder, my old writer friend Gregory Gibson (shown above in full Frank Shook mode) helped me get a book deal with Wired for my Saucer Wisdom book. Wired wanted to expand from being a magazine to publishing books as well. I took Greg to a meeting with the Wired editors and convinced them that Greg really was the saucer abductee Frank Shook, and that my book was in fact non-fiction and not a novel at all. Good times.

If you read the book, you’ll figure it all out… It has several useful prefaces and introductions, two of them by Bruce Sterling.

In any case Saucer Wisdom has a very clear discussion of the lifebox notion.

The aliens show Frank Shook a mid-twenty-first-century commercial for something called a lifebox. The slogan is Remember Me. The lifebox is a little black plastic thing the size of a pack of cigarettes and it comes with a light-weight headset with a pinhead microphone, like the kind that office workers use. The ad suggests that you can use your lifebox to create your life story, to make something to leave for your children and grandchildren.

Frank gets the aliens to find an old man who is actually using a lifebox. His name is Ned. They watch Ned from the saucer. Somehow the saucer can use dimensional oddities to get very close to someone but still be invisible to them. The aliens have control over their size-scale and refraction index; they can make the saucer as tiny and transparent as a contact-lens.

White-haired Ned is pacing in his small back yard—a concrete slab with some beds of roses—he’s talking and gesturing, wearing the headset and with the lifebox in his shirt pocket. The sly saucer is able to get close enough to hear the sound of the lifebox: a woman’s pleasant voice.

The marketing idea behind the lifebox is that old duffers always want to write down their life story, and with a lifebox they don’t have to write, they can get by with just talking. The lifebox software is smart enough to organize the material into a shapely whole. Like an automatic ghost-writer.

The hard thing about creating your life story is that your recollections aren’t linear; they’re a tangled banyan tree of branches that split and merge. The lifebox uses hypertext links to hook together everything you tell it. Then your eventual audience can interact with your stories, interrupting and asking questions. The lifebox is almost like a simulation of you.

Frank Shook gets the aliens to skip forward in time to a point after Ned has died. They observe Ned’s grandchildren, little Billy and big Sis. They’re playing with one of the lifebox copies that Ned left. The kids call the lifebox “Grandpa,” but they’re mocking it too. They’re not putting on the polite faces that kids usually show to grown-ups. Billy asks the Grandpa-lifebox about his first car, and the lifebox starts talking about an electric-powered Honda and then it mentions something about using the car for dates. Sis—little Billy calls her “pig Sis” instead of “big Sis”—asks the lifebox about the first girl Grandpa dated, and Grandpa goes off on that for a while, and then Sis looks around to make sure Mom’s not in earshot. The coast is clear so she asks some naughty questions. “Did you and your dates do it? In the car? Did you use a rubber?” Shrieks of laughter. “You’re a little too young to hear about that,” says the Grandpa-lifebox calmly. “Let me tell you some more about the car.”

Frank and the aliens skip a little further into the future, and they find that the lifebox has become a huge industry. People of all ages are using lifeboxes as a way of introducing themselves to each other. Sort of like home pages. They call the lifebox database a context, as in, “I’ll link you a link to my context.”

Not that most people really want to spend the time it takes to explicitly access very much of another person’s full context. But having the context handy makes conversation much easier. In particular, it’s now finally possible for software agents to understand the content of human speech—provided that the software has access to the speakers’ contexts.

I got very heavily into discussing lifeboxes in my tome, The Lifebox, the Seashell, and the Soul, published by Thunder’s Mouth Press in 2005, and now available from Transreal Books. If you go to the book page you can buy print or ebook copies or read it for free online.

By the way, the idea behind the title is that the Lifebox is a digital representation of a human, the Soul is the idea of a fully analog ethereal person, and the Seashell relates to Stephen Wolfram’s idea of generating gnarly patterns by using cellular automata rules akin to those that generate the surface patterns on cone shells. The image shown here was drawn by my daughter Isabel Rucker, who did the illos for The Lifebox, the Seashell, and the Soul.

And here’s part of the discussion of lifeboxes in that book, preceded by a photo of me and the three kids one Easter morning in the early Seventies.

One of the most venerable dreams of science fiction is that people might become immortal by uploading their personalities into some kind of lasting storage. Once your personality is out of your body in a portable format, it could perhaps be copied onto a fresh tank-grown blank human body, onto a humanoid robot, or, what the heck, onto a pelican with an amplified brain. Preserve your software, the rest is meat!

In practice, copying a brain would be very hard, for the brain isn’t in digital form. The brain’s information is stored in the geometry of its axons, dendrites, and synapses, in the ongoing biochemical balances of its chemicals, and in the fleeting flow of its electrical currents. In my 1982 cyberpunk novel Software, I wrote about some robots who specialized in extracting people’s personality software—by eating their brains. When one of my characters hears about the repellent process, “[his] tongue twitched, trying to flick away the imagined taste of the brain tissue, tingly with firing neurons, tart with transmitter chemicals.”

Let’s look at a much weaker form of copying a personality. Rather than trying to exactly replicate a brain’s architecture, it might be interesting enough to simply copy all of a person’s memories, preserving the interconnections among them.

We can view a person’s memory as a hyperlinked database of sensations and facts. The memory is structured something like a Web site, with words, sounds, and images combined into a superblog with trillions of links.

I don’t think it will be too many more years until we see a consumer product that makes it easy for a person to make a usable copy of their memory. This product is what I call a lifebox.

My idea used to be that your lifebox will prompt you to tell it stories, and it will have enough low-level language recognition software to be able to organize your anecdotes and to ask you follow-up questions. As you continue working with your lifebox, it builds up a database of the facts you know and the tales you spin, along with links among them. Some of the links are explicitly made by you, others will be inferred by the lifebox software on the basis of your flow of conversation, and still other links are automatically generated by looking for matching words.
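As a toy illustration of that last, word-matching kind of link, here’s a sketch; the stopword list, the sample anecdotes, and the share-a-keyword rule are my own stand-ins, not anything from the book:

```python
# Hypothetical lifebox auto-linker: two anecdotes get a link whenever
# they share at least one "interesting" (non-stopword) word.
STOPWORDS = {"the", "a", "an", "i", "my", "we", "in", "on", "to", "and", "was"}

def keywords(text):
    """Lowercase words of an anecdote, minus punctuation and stopwords."""
    return {w.strip(".,!?").lower() for w in text.split()} - STOPWORDS

def auto_links(anecdotes):
    """Return (i, j, shared-words) triples for anecdotes that overlap."""
    keys = [keywords(t) for t in anecdotes]
    links = []
    for i in range(len(anecdotes)):
        for j in range(i + 1, len(anecdotes)):
            shared = keys[i] & keys[j]
            if shared:
                links.append((i, j, sorted(shared)))
    return links

stories = [
    "My first car was an electric Honda.",
    "We drove the Honda to the beach.",
    "The beach was cold that Easter.",
]
print(auto_links(stories))  # links story 0 to 1 via "honda", 1 to 2 via "beach"
```

A real lifebox would of course need smarter matching than shared surface words, but this is the shape of the idea: the links fall out of the telling.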

[As of 2023 it’s evident that, instead of interviewing the person, you can use an “unsupervised learning” approach: train a neural net on all of the target person’s written and spoken outputs, evolving a pattern of weights that matches the arrangements of words.]

And then what?

Your lifebox will have a kind of browser software with a search engine capable of returning reasonable links into your database when prompted by spoken or written questions from other users. These might be friends, lovers, or business partners checking you out, or perhaps grandchildren wanting to know what you were like.

Your lifebox will give other people a reasonably good impression of having a conversation with you. Their questions are combed for trigger words to access the lifebox information. A lifebox doesn’t pretend to be an intelligent program; we don’t expect it to reason about problems proposed to it. A lifebox is really just some compact digital memory with a little extra software. Creating these devices really shouldn’t be too hard and is already, I’d say, within the realm of possibility—it’s already common for pocket-size devices to carry gigabytes of memory, and the terabytes won’t be long in coming.

There is a sense in which saving only your memories is perhaps enough, as long as enough links among your memories are included. The links are important because they constitute your sensibility, that is, your characteristic way of jumping from one thought to the next.

My expectation is that in not too many years, great numbers of people will be able to preserve their software by means of the lifebox. In a rudimentary kind of way, the lifebox concept is already being implemented as blogs. People post journal notes and snapshots of themselves, and if you follow a blog closely enough you can indeed get a feeling of identification with the blogger. And blogs already come with search engines that automatically provide some links. Recently the cell phone company Nokia started marketing a system called Lifeblog, whereby a person can link and record daily activities by using a camera-equipped cell phone.

I like the idea of a lifebox, and I’d like to have one. I envision a large database with all my books, all my journals, and a connective guide-memoir—with the whole thing annotated and hyperlinked. Eventually I’d want to throw in some photographs—I’ve taken thousands over the years. And it should be feasible to endow my lifebox with interactive abilities; people could ask it questions and have it answer with appropriate links and words. My finished lifebox might take the form of a Web site, although then there’d be the thorny question of how to get any recompense for the effort involved. A commercial alternative would be to market it as a set of files on a portable data storage device of some kind.

[I included some of this material in a paper with Leon Marvell, “Lifebox Immortality & How We Got There.” And Marvell adds some further speculations.]

[Photo credit: Bart Nagel]

I have in fact made a crude lifebox in the form of an online search engine that finds user-selected target phrases in my collected writings. See Rudy’s Lifebox online. My first stab at a personal pyramid of Cheops!
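The heart of such a search tool can be tiny. Here’s a hypothetical sketch that scans a set of writings for a target phrase and returns snippets of surrounding context; the sample “library” is invented, not my actual files:

```python
# Hypothetical phrase finder in the spirit of the Rudy's Lifebox tool:
# scan a collection of writings for a phrase, return context snippets.
def find_phrase(writings, phrase, context=30):
    """Return (title, snippet) pairs for each occurrence of phrase."""
    hits = []
    needle = phrase.lower()
    for title, text in writings.items():
        low = text.lower()
        start = low.find(needle)
        while start != -1:
            a = max(0, start - context)
            b = min(len(text), start + len(needle) + context)
            hits.append((title, text[a:b]))
            start = low.find(needle, start + 1)
    return hits

library = {
    "post-2019": "A talk on the lifebox, telepathy, and immortality.",
    "post-2024": "Skinks do not care about the lifebox at all.",
}
for title, snippet in find_phrase(library, "lifebox"):
    print(title, "->", snippet)
```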

Use case: I used the Rudy’s Lifebox tool to search for “lifebox,” and I found, aha, a great blog post from 2019, “Talk on Lifebox for Telepathy and Immortality.” Check it out! You might think of the 2019 post as a missing addendum to today’s post, and it also links to a podcast. Convenient invention, this lifebox…

And, as I said earlier, the cool thing happening now is that the chances of animating our lifeboxes into something like interactive AI are going up, thanks to Large Language Models and such tools as ChatGPT. (As mentioned above, John Walker and I posted about these new tools last month.) We’re not there yet, but it’s a big step.

In my latest novel, Juicy Ghosts, I visualized how the business of software immortality might actually play out. Here’s a scene where a woman named Mary has died on Earth. A con-man named Carson Pflug has helped Mary to set up a lifebox of herself on a server farm called Skyhive. The lifebox storage is “free.” If you agree to certain conditions…

Mary is in her lifebox self. She’s an AI, programmed in Spork, and coupled to her laboriously assembled data-base. She wears a sim that’s a texture-mapped wire-frame model of a woman living in a jive-ass virtual reality generated by the Skyhive server. Not that her AI mind thinks of it that way. To her, the scene feels authentic, in a bland way. Like being on tranks in an airport lounge.

Her room in the Skyhive cliff city is still very plain. A wooden chair, a gray floor, white walls. She goes to the balcony. Some of those so-called storks flap by. And then she sits in her chair for a very long time, thinking things over and, frankly, resting. She hasn’t had a rest like this for years. Twelve hours go by, or maybe thirty-six. No easy way to judge time here. There’s no clocks. And her virtual body never gets sore from sitting in one place.

“Would you like to personalize?” says a voice, finally interrupting Mary’s reverie. A woman’s voice echoing in her head. “Anything you want!”

“What are you?” says Mary.

“I’m your room,” says the voice. “Let’s pick some pleasant images for your walls. And I can copy furniture from your home in San Lorenzo.”

“Why bother,” Mary tells the room. “Pretty soon I’ll be in my clone.”

“No peripheral is currently registered to your account,” says the room.

Mary feels a pang of fear. Did Carson Pflug double-cross her? She wouldn’t put it past him. But she’s not gonna discuss this with the perky voice of her room.

“I think I’m getting a body soon,” says Mary. “But can I go out and look around? Just in my virtual body?”

“Feel free,” says the room. “Explore our fine amenities. And—would you be willing to answer a short survey about my service thus far?”

Mary walks to the balcony railing once again. She notices now that the cliff slants back a little, like giant stairs with really tall steps. She sees a piazza about a hundred meters down, with lifebox people walking around in sim bodies, some of them standing by a pickup truck. Might be worthwhile to find out what’s going on. So, what the hell, she jumps off the railing. It’s not like she can die, right?

Right. Although she picks up a certain amount of speed, she lands in the piazza unscathed. A crouch and a bounce. A thin lifebox woman nearby looks at her. Mimes a salute.

“Welcome to the labor pool,” says the woman. She’s awkward, with lank hair and a crooked mouth. Black narrow pants and a striped shirt. A geek.

“How do you mean?” goes Mary.

“Are you rich?” the woman asks. “Did you endow a trust fund to pay your monthly fee?”

“A friend got me in here,” says Mary. “Carson Pflug. He’s a Skyhive exec. He says he’s going to get me a clone.”

“Carson!” says the woman with a laugh. “And he told you it was a one-year free trial, right?” A hole appears in the woman’s sim-body chest. “People say I’m heartless, but I’m not,” she continues, pointing out a cartoony, dark red heart inside. “I nurse a secret passion, hmm? My name is Leeta Patel.”

Mary knows she’s recently heard that name, but just now it doesn’t click. She’s still not over her death experience. “I’m Mary,” is all she says.

“And, yes, it’s true that your first year is free,” says the woman. “But eventually everyone in Skyhive pays rent, and if they don’t have it, they do gigwork, no matter how they got here. You probably didn’t read your full end-user license agreement?”

“Did you read yours?” challenges Mary.

“Nobody’s ever read it,” says Leeta with a little laugh. “Nobody ever will. It’s a hundred million words long. Written by hateful lawyer bots. And do you trust Carson Pflug? Hah. You’re a standard Skyhive client, that’s all.” She points to a group of lifebox people around the pickup truck. “Let’s see what kind of gigs are on offer today. To help you understand what’s in store.”

The sim in the back of the pickup truck isn’t even bothering to look human. He’s a squat cube like a foot stool, with a speaker grill for a mouth. As he rattles off a list of available jobs, day-workers hop into the truck one by one. The truck’s capacity seems endless.

The workers: Winged woman with a cane. Stout man with checked pants and pinwheel eyes. Jittery fashion-model lady with green skin. Sniggering, pimply teen in a clerical collar. Spider with a woman’s head. Twelve-foot-tall man in a stovepipe hat. Talking dog. Velvet-clad lady wearing jeweler’s loupes. Hieroglyph person with an ibis head. School-marm with a fat text-book. Vagabond cupping a blue glow in his hands. Dancing man with gem-crusted skin. A bowling ball with three pale-blue eyes.

The jobs: Dragonfly drone. Slime sculptor. Teep tweaker. Non-sex worker. Spam amp. Ant herder. Rind design. Spork coach. Flicker cladder. Tuba tamer. Art emulator. Rhino horner. Dream dupe.

A chime sounds in Mary’s head. It’s Miu Miu in San Lorenzo. Miu Miu is a so-called psidot, that is, a piezoplastic device on the back of the neck of a human-body clone of Mary’s old body. Yes. It’s going to work. The psidot sends and receives signals from Mary’s lifebox, and feeds them into the clone.

“Back in action,” Miu Miu the psidot signals to Mary’s lifebox. “Your clone is ready!”

“Bye,” Mary tells Leeta. In the never-never land of the Skyhive cliff, it’s no great effort for Mary to hop a hundred meters straight up—to her flat’s balcony. She takes a seat in her wooden chair.

“Ready,” she tells Miu Miu.

And now she’s linked to her cloned body, and—what?

She’s in her house with most of the lights off—alone with Carson Pflug. She’s sitting on the couch and Carson is standing over her, taking off his shirt. It’s quite dark outside and it’s raining heavily. Mary can see her jeans and sweatshirt on the floor. She’s naked. Her body looks good. Her clone is high-end.

So, okay, lifebox Mary is processing clone-Mary’s inputs, deciding what to do next, and sending signals to her psidot device Miu Miu, who controls the clone body’s muscles. Feel, think, do. Juicy ghost.

Carson leans down to touch her. No. Mary pops to her feet and shoves Carson in the chest: a swift, implacable, abrupt, and precisely calibrated impact, pre-computed by lifebox Mary in the cloud. Carson thuds to the floor like a sack of cement.

“No sex with you,” Mary tells Carson. “Never.” She dons her jeans and sweatshirt from the floor. They’re a bit tight. Clone Mary is more turgid than wasted old Mary was.

“I don’t know why you have to be like that,” says Carson, awkwardly regaining his feet and straightening his clothes.

“Men like you never do know,” says Mary. “Shut up and let me think.”

And don’t forget: When it comes to digital immortality, Rudy Rucker got there first!

I wasn’t kidding when I wrote these stories and books. Yes, I was laughing and having a good time, but I was serious. And now it’s all coming true.

So give me some credit.

In memory of Sylvia Rucker, February 8, 1943 – January 6, 2023.

Notes on Gibson’s Sprawl Trilogy

December 9th, 2023

As I’ve mentioned, my wife Sylvia died eleven months ago, and I have to find ways to fill the empty time at home. Somehow I still haven’t gotten back to writing stories and novels. I paint in the daytime, and I watch a fair amount of TV in the evening, but I get sick of that, so I’m reading a lot.

Recently I went back to good old Bill Gibson’s Neuromancer, and then read Count Zero and Mona Lisa Overdrive. They call this the Sprawl Trilogy now, the Sprawl being the fused coast city that runs from Boston to Atlanta.

I read on my Kindle these days, and a nice feature of this device is that you can highlight passages, and then download the passages to a single text file. So for today’s post, I’m presenting my Sprawl highlights along with a few comments.

And I’ve randomly put in photos from my stash. As I always say: the fundamental principle of Surrealism is that anything goes with anything. And meaningfully so.


I’ve used the Book Notes format for a blog post several times before. It’s a way of thinking more deeply about books that I love. Here are the six prior posts like this that I’ve done.

David Foster Wallace’s Infinite Jest.
Cory Doctorow’s Walkaway.
Christopher Brown’s Tropic of Kansas.
Several novels by Raymond Chandler.
Gibson’s The Peripheral.
Gibson’s Agency. This post includes comments by Gibson himself.

And now for the Sprawl trilogy! I’ve provided page numbers for the quotes, but with Count Zero, I ended up with “location numbers,” which only make sense if you’re reading a book on a Kindle. In any case, you can always find a passage in an ebook by searching for a key word or phrase.


Page 8 — …her upper lip like the line children draw to represent a bird in flight.

There’s several things that make Bill’s writing great. One is that he has the ear of a poet. He turns lovely phrases akin to haiku. But there’s more than that. He’s an expert at getting you to have total sympathy for his down-and-out or drug-addicted or oppressed characters. Rooting for them! And if they’re in some way childish, so much the better. Nobody understands them! They’re just like me! Those are the kinds of characters that I like to write too.

Page 103
“How you doing, Dixie?” “I’m dead, Case. Got enough time in on this Hosaka to figure that one.” “How’s it feel?” “It doesn’t.” “Bother you?” “What bothers me is, nothin’ does.” “How’s that?” “Had me this buddy in the Russian camp, Siberia, his thumb was frostbit. Medics came by and they cut it off. Month later he’s tossin’ all night. Elroy, I said, what’s eatin’ you? Goddam thumb’s itchin’, he says. So I told him, scratch it. McCoy, he says, it’s the other goddam thumb.” When the construct laughed, it came through as something else, not laughter, but a stab of cold down Case’s spine. “Do me a favor, boy.” “What’s that, Dix?” “This scam of yours, when it’s over, you erase this goddam thing.”

The Dixie Flatline is one of the lovable characters here. A former “cowboy,” or cyberspace explorer, he’s known for having effectively died during one of his sessions…and then bouncing back. The sign of his being temporarily dead is that the monitor of his brain waves collapsed to zero, to a flat line.

In Neuromancer, Dixie really is dead, but in a way he’s alive…as a construct or lifebox model of himself in cyberspace. Philosophers and SF people like to debate how it would feel to be a computer simulation. And Dixie comes up with a great, original answer. SF writers can out-do philosophers. We’re conducting thought experiments.

Page 114 — Holograms twisted and shuddered to the roaring of the games, ghosts overlapping in the crowded haze of the place, a smell of sweat and bored tension. A sailor in a white t-shirt nuked Bonn on a Tank War console, an azure flash.

Nice scene-setting here. A classic motif, the game arcade. With lovely sensory overlay, and a little touch of future history.

Page 115 — crumpled yellow candy wrapper, balanced on the edge of a console, dropped to the floor and lay amid flattened butts and styrofoam cups.

More haiku-like scene setting.

Page 128 — the minute, I mean the nanosecond, that an AI starts figuring out ways to make itself smarter, Turing’ll wipe it. Nobody trusts those fuckers, you know that. Every AI ever built has an electromagnetic shotgun wired to its forehead.”

Interesting to read this in 2023, when the so-called doomers are worried sick that our ever-improving AIs might in some way take over the world and ruin our lives. See my recent “Roaring Twenties” post with John Walker.

But the metaphor of “shotgun to the head” is hopelessly outdated. Our new AIs don’t “live” anywhere in particular. They are techniques that are instantiated all over the place, in machines and in cloud processes.

Page 143 — “I wasn’t conscious. It’s like cyberspace, but blank. Silver. It smells like rain. . .

That lovely language. Full-court-press on the Sensorama, pushing all the buttons.

Page 148 — The drug hit him like an express train, a white-hot column of light mounting his spine from the region of his prostate, illuminating the sutures of his skull with x-rays of short-circuited sexual energy. His teeth sang in their individual sockets like tuning forks, each one pitch-perfect and clear as ethanol. His bones, beneath the hazy envelope of flesh, were chromed and polished, the joints lubricated with a film of silicone. Sandstorms raged across the scoured floor of his skull, generating waves of high thin static that broke behind his eyes, spheres of purest crystal, expanding. . . .

Inventing a really terrific future drug is a hallowed SF trope. Reading about it gets you high. In a sense it’s like a mantra for a meditation routine. In reality, drugs tend not to live up to their advance billing, but every now and then you might get to a place like this. The error would lie in trying over and over to revisit it.

Page 149 — He seemed to become each thing he saw: a park bench, a cloud of white moths around an antique streetlight, a robot gardener striped diagonally with black and yellow.

Love this one so much. Kind of an LSD thing, I’d say. Merging with the world around you. Heavenly. But youthful drug experiences are like a country you used to live in, but which you can’t revisit…as that land has a death warrant out on your ass. This said, you can still re-enter the merged state while straight and sober. Takes some years of practice at being high instead of getting high.

Page 154 — “You are busted, Mr. Case. The charges have to do with conspiracy to augment an artificial intelligence.

Here again we have a kind of prediction about the future of AI. But now this particular line seems comical. Like something you’d hear in a Firesign Theater skit. I think the issue is that we don’t in any sense have real control over where AI is going. Yes, you can fire your lead scientist for unshackling your AI program—but tomorrow he’s going to have a job somewhere else.

Page 161 — The matrix blurred and phased as the Flatline executed an intricate series of jumps with a speed and accuracy that made Case wince with envy. “Shit, Dixie. . . .” “Hey, boy, I was that good when I was alive. You ain’t seen nothin’. No hands!”

This bit hits home with me—as I feel that way about Gibson’s writing as opposed to my own writing. At his best, he makes me wince with envy. But I don’t get stuck in that place. I know I can do cool stuff too. In the end it’s not so much envy as it is admiration, and a sense of happiness that I know a guy like this, and that we’re both on the same side.

Page 164 — This is memory, right? I tap you, sort it out, and feed it back in.” “I don’t have this good a memory,” Case said, looking around. He looked down at his hands, turning them over. He tried to remember what the lines on his palms were like, but couldn’t. “Everybody does,” the Finn said, dropping his cigarette and grinding it out under his heel, “but not many of you can access it. Artists can, mostly, if they’re any good.”

Really love this. I do happen to have a very good memory, and old college friends might say, “I don’t remember that at all. How do you do it?” And I’d agree with Gibson’s notion that they do really have all those memories—I mean why wouldn’t they, as our brains are pretty much the same, and memory is pretty much a standard biochemical thing. But not all of us have the will to push hard enough to get at those old memories. One thing a therapist might do for you is to help you push. But if you’re an artist or an author, you tend to do the pushing on your own.

I’m not the most outgoing and sociable person, and people at a party might think I’m out of it, just staring, and not contributing. But I know that inside my head the holoscanner is running, and everything is being permanently recorded. Jack Kerouac was known among his friends for being like that. Memory Babe.

Page 178 — Bad timing, really, with 8Jean down in Melbourne and only our sweet 3Jane minding the store.

That name 3Jane really cracks me up. Such a simple move on Gibson’s part, but so effective. And we’ve got an 8Jean too!

Page 183 — Brain’s got no nerves in it, he told himself, it can’t really feel this bad.

Good observation for someone with a severe hangover, or coming down hard from a heavy trip. Once a fan mailed me some camote underground fungus, and I was idiot enough to eat it, and was really at the bottom of the sea in the morning. Sylvia got me out on the porch and I lay back on a chaise-longue, and she brightly said, “Nap time!”

Page 231 — Something he’d found and lost so many times. It belonged, he knew—he remembered—as she pulled him down, to the meat, the flesh the cowboys mocked. It was a vast thing, beyond knowing, a sea of information coded in spiral and pheromone, infinite intricacy that only the body, in its strong blind way, could ever read.

Sex vs cyber thrills, yes. Over and over we have to remind ourselves that it’s not all about head trips, and screens, and sense-stim. The physical world is so much richer than our simulations. And biology is so gnarly and devious. I mean, come on, this shit evolved in parallel on the entire surface of the earth, with updates every nanosecond, for millions of years. You’re not going to get there by running a room-sized Google computer for a day.

Page 232 — She shuddered against him as the stick caught fire, a leaping flare that threw their locked shadows across the bunker wall.

Here again, that thing about the richness of the real world. Shadows. Flames. Yah, mon.

Page 233 — He looked at the backs of his hands, saw faint neon molecules crawling beneath the skin, ordered by the unknowable code. He raised his right hand and moved it experimentally. It left a faint, fading trail of strobed afterimages.

Now this has got to be an acid experience. Transreal, baby. I really had only one totally massive acid trip in my life, in a grad-student apartment with wife Sylvia in New Jersey on Memorial Day 1970, with baby Georgia already on the scene. I was feeding her mush in the kitchen, and I saw exactly what Bill’s talking about. All the normal image preprocessing and postprocessing was offline. I was seeing images with a warped meat eyeball. The capillaries beneath the skin. The biological code unknowable, yes. And those normally-ignored trails coming off a moving hand. A good time? Well, not exactly. But memorable indeed.

Page 248 — His eyes were eggs of unstable crystal, vibrating with a frequency whose name was rain and the sound of trains, suddenly sprouting a humming forest of hair-fine glass spines.

Rock it, Bill! “Whose name was rain and the sound of trains.” Doesn’t get any better than that.

Page 249 — His vision was spherical, as though a single retina lined the inner surface of a globe that contained all things, if all things could be counted.

The poet attains the mathematical mode of four-dimensional vision. Seeing it all, inside and out. Nothing hidden from the Eye of God.

Count Zero

Location 369 — the condos of Barrytown crested back in their concrete wave to break against the darker towers of the Projects. That condo wave bristled with a fine insect fur of antennas and chicken-wired dishes, strung with lines of drying clothes.

Another loser-type hero to root for. In Jersey, natch. This image of the “insect fur of antennas” is very strong. Bill pays attention to what he sees. And the clash between “insect” and “fur,” so lovely. Like two dissonant notes in a chord. Yes, the antennas are like fur because they’re tiny projections that grow out into the silhouette. And they’re insectile because they’re still and robotic and in some sense evil. Is there a band called “Insect Fur”? Should be.

Location 385 — Then his head exploded. He saw it very clearly, from somewhere far away. Like a phosphorus grenade. White. Light.

White light is of course a phrase I love: I used it for the title of my first novel, which was, to some extent, inspired by LSD. Just like Neuromancer. Cyberpunks, man. Acid is the only answer. Showed us the deep meaning of the impending software wetware tsunami wave. Bill’s got that 4D god-eye thing going here too.

Location 398 — As the sound faded, Turner heard the cries of gulls and the slap and slide of the Pacific.

So beautiful. Key move for good fiction: filling in sensory input. Sight, sound, touch…as much as you can, but without overdoing it. Slap and slide is just what the water does.

Location 658 — Moths strobed crooked orbits around the halogen tube.

More haiku. The “crooked” is key. And the strobe of the rapidly flickering light. Bill pays attention.

Location 2132 — He slowly shook his narrow, strangely elongated head.

This line might come from a letter or a remark I made to Bill in the early 1980s. His head really is kind of long and thin, and, I used to claim, maybe even flexible. I was working on my novel Wetware, although initially I thought it was a short story called “People That Melt.” And I tried to get Bill to collaborate on it with me, and he didn’t want to, but he did write me a page or two of stuff I could use. And I think that when he wrote me, he mentioned that he was “nodding my narrow, strangely elongated head.”

Feeling that Gibson was now part of the flow, I put in a thin-headed character called Max Yukawa. I haven’t read Wetware for a while, but I seem to remember that Max Yukawa was a drug supplier; in particular he made a substance called merge. You and a partner would get into a kind of hot tub, a “love puddle,” and you’d add merge to the water, and your bodies would melt for a while, all the curled up proteins relaxing and stretching out, you two would be a blob with your four eyes floating on top. It felt good. My character Darla and her husband Whitey Mydol liked to do it.

Wetware was the most cyberpunk book that I—or anyone else—ever wrote. In my opinion.

Location 2148 — There’s things out there. Ghosts, voices. Why not? Oceans had mermaids, all that shit, and we had a sea of silicon, see? Sure, it’s just a tailored hallucination we all agreed to have, cyberspace, but anybody who jacks in knows, fucking knows it’s a whole universe. And every year it gets a little more crowded, sounds like . . .

Location 2154 — Ten years ago, if you went in the Gentleman Loser and tried telling any of the top jocks you talked with ghosts in the matrix, they’d have figured you were crazy.

I think this is a really interesting idea. What if autonomous, self-perpetuating patterns did take form in our internet. Lots to think about.

Location 2433 — He handed Rudy the bottle. “Stay straight for me, Rudy. You get scared, you drink too much.

Here we get to a part of Count Zero that’s weird for me. In SF fan-jargon, you might say that Bill “Tuckerizes” me and perhaps my wife Sylvia, that is, bases characters on us. The POV character Turner goes to visit his older brother Rudy in an old country house, not unlike my then-house in Lynchburg, Virginia, where Bill visited us. And the Rudy character is a hopeless alcoholic.

Years ago, Bill denied to me that this Rudy was an image of me, but in recent years he admitted that, okay, it was. He said it was because he was in some sense wary of me back then, and worried about ending up like how he imagined me to be, based on the samplings of my behavior that he’d seen, me always drunk as a skunk at SF cons.

Oh well. From my old-man-now vantage point, I appreciate his elegiac sympathy for the old Rudy, his sorrow and pity, and I’m glad that I got well.

But wait, the kicker is that Turner sleeps with Rudy’s wife! And let’s be clear, I’m not saying this is something that happened or even came near to happening in real life; Bill was only at our house for a few hours. The seduction episode is just a writer’s fantasy—but it does remind me of Sylvia.

Location 2450 — “Tongues,” Sally said, Rudy’s woman, from the creaking rattan chair, her cigarette a red eye in the dark. “Talking in the tongues.” . . . The coal of the cigarette arced out over the railing and fell on the gravel that covered the yard.

Turner was aware of the length of her tanned legs, the smell and summer heat of her, close to his face. She put her hands on his shoulders. His eyes were level with the band of brown belly where her shorts rode low, her navel a soft shadow . . . He thought she swayed slightly, but he wasn’t sure. “Turner,” she said, “sometimes bein’ here with him, it’s like bein’ here alone . . .”

So he stood, rattle of the old swing chain where the eyebolts were screwed deep in the tongue and groove of the porch roof, bolts his father might have turned forty years before, and kissed her mouth as it opened, cut loose in time by talk and the fireflies and the subliminal triggers of memory, so that it seemed to him, as he ran his palms up the warmth of her bare back, beneath the white T-shirt, that the people in his life weren’t beads strung on a wire of sequence, but clustered like quanta, so that he knew her as well as he’d known Rudy… “Hey,” she whispered, working her mouth free, “you come upstairs now.”

As I say, maybe the wife isn’t modeled on Sylvia at all, and maybe I’m just thinking this way because I miss her so much, and I see her everywhere. But it’s nice to think about her. Like coming across a dear one’s touching photo, tucked into an old book.

Moving on now.

Location 3250 — “His head,” she said, her voice shaking, “his head . . .” “That was the laser,” Turner said, steering back up the service road. The rain was thinning, nearly gone. “Steam. The brain vaporizes and the skull blows . . .

This isn’t Rudy’s head we’re talking about, thank god, it’s some bad guy who ambushed Turner, and Turner’s crew took down the baddie. Kind of cool way to die. Exploding head! Why aren’t all of Bill’s books movies? And mine too?

Location 3572 — …the damp-swollen cardboard covers of black plastic audio disks beside battered prosthetic limbs trailing crude nerve-jacks, a dusty glass fishbowl filled with oblong steel dog tags, rubber-banded stacks of faded postcards, cheap Indo trodes still sealed in wholesaler’s plastic, mismatched ceramic salt-and-pepper sets, a golf club with a peeling leather grip, Swiss army knives with missing blades, a dented tin wastebasket lithographed with the face of a president whose name Turner could almost remember. . .

Jorge Luis Borges, William Burroughs, and Thomas Pynchon are key influences on the cyberpunk writers. Borges liked to do a routine of writing incredibly recondite and non-linear lists, and I think all of us have tried emulating him. Bill does a nice job here.

I wonder if ChatGPTs would be any good at generating Borges lists. Probably not yet. Not deep enough.

Location 3609 — Turner snapped the biosoft back into his socket. This time, when it was over, he said nothing at all. He put his arm back around Angie and smiled, seeing the smile in the window. It was a feral smile; it belonged to the edge.

The constant refrain and undertone of how far-out and weird the cyberpunk characters are.

Location 3636 — Uncounted living spaces carved out of the shells of commercial buildings that dated from a day when commerce had required clerical workers to be present physically at a central location.

Another correct future vision. This is exactly what’s happening to many of our big cities, San Francisco in particular, given that techie work lends itself so well to remote collaboration.

Location 3828 — She managed to get her boot back into the purse, then twisted herself into her jacket. “That’s a nice piece of hide,” Jones said.

I just love that last phrase, “a nice piece of hide.” I’m lying in wait now, hoping for a chance to use it. The flatness of the expression, the simplicity, it’s writing degree zero, language with a flat tire.

Location 3856 — Eyes wide, Marly watched the uncounted things swing past. A yellowing kid glove, the faceted crystal stopper from some vial of vanished perfume, an armless doll with a face of French porcelain, a fat, gold-fitted black fountain pen, rectangular segments of perf board, the crumpled red and green snake of a silk cravat . . . Endless, the slow swarm, the spinning things . . .

The closing scene of Count Zero. Bill has this obsession with the artist Joseph Cornell’s boxes: assemblages of objects displayed together in a shadow box case. And here we have the ultimate Cornell, a flock of objects bobbing and weaving in zero gravity. Also it’s a Borges-style list. “Endless, the slow swarm, the spinning things.” Wonderful.

Mona Lisa Overdrive

Page 116 — It was what Eddy called an art crowd, people who had some money and dressed sort of like they didn’t, except their clothes fit right and you knew they’d bought them new.

But exactly. The jeans with big holes in them? These days you take it further, and the clothes don’t fit right. My granddaughter wears jeans with a 47 inch waist, and uses a mouse cable for a belt. I’m more out of it all the time.

Page 126 — Becker explored the planes of her face in a tortured, extended fugue, the movement of his images in exquisite counterpoise with the sinuous line of feedback that curved and whipped through the shifting static levels of his soundtrack.

He’s describing a film, or rather a sim that you watch with electrodes. What catches my eye here is the “sinuous line of feedback.” I don’t like automatic music that repeats, like while you’re on hold on a phone. As a computation, it’s really not hard to get something gnarlier. Feedback!

Page 225 — There wasn’t anything random about the Judge and the others. The process was random, but the results had to conform to something inside, something he couldn’t touch directly.

In this part of the book there’s a guy called Slick Henry who’s a little like the California artist Mark Pauline, a great hero of cyberpunks. I heard about him from Marc Laidlaw and Richard Kadrey when we moved to San Jose in 1986. Pauline cobbles together one-of-a-kind machines that incorporate flame throwers, giant pincers, and the like. They’re not exactly robots; Pauline operates them using remote controls. Slick Henry’s best machine is called the Judge, and is also operated by a remote.


Page 251 — The man with the bullhorn came strolling out of the dark with a calculated looseness meant to indicate that he was on top of things. He wore insulated camo overalls with a thin nylon hood drawn up tight around his head, goggles. He raised the bullhorn. “Three minutes.”

The mercenaries are attacking the good guys, who are holed up in an abandoned factory. I’ve always hated voices from bullhorns…so bullying. Good touch here, to help turn you against them.

Our guys are gonna send out one of Slick Henry’s machines to fight them off. The Judge. For some reason Slick Henry can’t run the controls, so his friend Gentry is doing it.

Page 253 — The Judge was well back, out of the light, visible only because it was moving, when Gentry discovered the combination of switches that activated the flamethrower, its nozzle mounted beneath the juncture of the claws.

Slick watched, fascinated, as the Investigator ignited ten liters of detergent-laced gasoline, a sustained high-pressure spray. He’d gotten that nozzle, he remembered, off a pesticide tractor.

It worked okay.

Love the understatement, “It worked okay.” I remember Marc Laidlaw and Sylvia and me going to a Mark Pauline show under a freeway overpass in San Francisco. It was so awesome: the machines stacked up a couple of grand pianos and set them on fire.

Page 256 — In the hard wind of images, Angie watches the evolution of machine intelligence: stone circles, clocks, steam-driven looms, a clicking brass forest of pawls and escapements, vacuum caught in blown glass, electronic hearthglow through hairfine filaments, vast arrays of tubes and switches, decoding messages encrypted by other machines.… The fragile, short-lived tubes compact themselves, become transistors; circuits integrate, compact themselves into silicon.… Silicon approaches certain functional limits

A history of computation in a nutshell here. Leading up to biocomputation, a new frontier that we’re still just nibbling at. It’s definitely a theme in the Sprawl trilogy.

Page 276 — The world hadn’t ever had so many moving parts or so few labels.

We’re in Mona’s point of view now. She’s feeling out of her depth, surrounded by ten or twenty other characters, all of them weird and willful with intricate personal agendas. Like the reader, at this point. In a minute or two Mona will get high on wiz. See the next excerpt.

Page 282 — And it was the still center again. Just like that time before.

So fast it was standing still.

Rapture. Rapture’s coming.

So fast, so still, she could put a sequence to what happened next: This big laugh, haha, like it wasn’t really a laugh. Through a loudspeaker. Past the door. From out on the catwalk thing. And Molly just turns, smooth as silk, quick but like there’s no hurry in it, and the little gun snicks like a lighter.

Then there’s this blue flash outside, and the big guy gets sprayed with blood from out there as old metal tears loose and Cherry’s screaming before the catwalk thing hits with this big complicated sound, dark floor down there where she found the wiz in its bloody bag.

“Gentry,” someone says, and she sees it’s a little vid on the table, young guy’s face on it, “jack Slick’s control unit now. They’re in the building.” Guy with the Fighting Fish scrambles up and starts to do things with wires and consoles. And Mona could just watch, because she was so still, and it was all interesting stuff.

How the big guy gives this bellow and rushes over, shouting how they’re his, they’re his. How the face on the screen says: “Slick, c’mon, you don’t need ’em anymore.…”

Then this engine starts up, somewhere downstairs, and Mona hears this clanking and rattling, and then somebody yelling, down there.

And sun’s coming in the tall, skinny window now, so she moves over there for a look. And there’s something out there, kind of a truck or hover, only it’s buried under this pile of what looks like refrigerators, brand-new refrigerators, and broken hunks of plastic crates, and there’s somebody in a camo suit, lying down with his face in the snow, and out past that there’s another hover looks like it’s all burned up.

It’s interesting.

Nice drug rush, the way Mona is watching everything with total detachment. It’s interesting. They had a friend drop an airborne container-truck’s worth of kitchen appliances on the bad guys. At this point the book’s almost over, and all of the many characters are racing around doing stuff and explaining things. Kind of like the end of a Raymond Chandler novel where all the threads are being braided together.

Page 300 — “You want this hover?” Sally asked. They were maybe ten kilos from Factory now and he hadn’t looked back.
“You steal it?”
“I’ll pass.”
“I did time, car theft.”
“So how’s your girlfriend?”
“Asleep. She’s not my girlfriend.”
“I get to ask who you are?”
“A businesswoman.”
“What business?”
“Hard to say.”

So here’s Sally Shears (formerly known as Molly Millions) driving off in a hovercraft and talking with Slick Henry. Cherry Chesterfield is in back. Vintage Gibson conversation, utterly minimalist and hip. We’re done! All’s well that ends well.

What a ride.

Thanks, Bill.

“The Roaring Twenties” Rudy & John Walker on LLM, ChatGPT, & AI

December 7th, 2023

Rudy. I met John Walker in 1987, shortly after I moved to Silicon Valley, at an outsider get-together called Hackers. John is known as one of the founders of the behemoth company Autodesk. I had a job teaching computer science at San Jose State, although at this point I was totally faking it. Even so, some of the Hackers organizers knew my science-fictional Wares novels, and they invited me.

By way of finding my way into computerdom, I’d gotten hold of an accelerator board that plugged into a regular desktop IBM PC and made cellular automata simulations run fast. The programs are called CAs for short. Things like the classic game of Life, and many others, engineered by a couple of way-out guys on the East Coast, Tommaso Toffoli and Norman Margolus, who authored a classic book, Cellular Automata Machines. I started writing new CA rules on my own. John was fascinated with the CAs, and after studying the accelerator board, he said he could duplicate its functions with a machine language software program that he would write.

And then he wrote it, in just a few days, and he hired me to work at Autodesk and help them publish a package called CA Lab. Later John converted our joint CA project into a wonderful online program, CelLab. And you can read about this period of my life in the Cellular Automata chapter of my autobio, Nested Scrolls.

Working at Autodesk was a great period in my life. And then I got laid off and went back to being a CS professor, and I worked on an analog CA program called Capow.

One of the best things about working at Autodesk was that I spent a lot of time with Walker, who consistently comes up with unique, non-standard ideas. I even based a character on John in my novel The Hacker and the Ants. John wasn’t satisfied with his character’s fate, so he wrote an Epilogue in which his character triumphs!

And now, in search of enlightenment, I thought it would be nice to have a kind of interview or dialog with John. We’ll see where it leads.

Rudy: I want to talk about the latest wave in AI, that is, ChatGPT, Large Language Models, and neural nets.

Even back in the 1980s we knew that writing a sophisticated and human-like AI program is in one sense beyond our abilities. This limitation has to do with Alan Turing’s proof that there’s no way to predict the behavior of arbitrary computations. No magic key. But, as Turing and Gödel freely grant, we can, in principle, create intelligent programs by letting them evolve within a space of candidate programs.

The catch was that this genetic programming and machine learning didn’t use to work very well. We weren’t able to simulate large enough populations of would-be AI programs, nor were we able to have the candidate programs be sufficiently complex.

My impression is that the recent dramatic breakthroughs arise not only because our machines have such great speed and memory, but also because we’re using the neural net model of a computation.

In the early days, the paradigm was of AI systems as logical systems which prove results from sets of observations. One might argue that this isn’t really the way we think. Neural nets do seem to be a better model, wherein information flows through networks of neurons, which have a very large number of parameters to tweak via unsupervised learning. Nets are more evolvable than logical systems.

What are your thoughts along these lines?

John: We’ve known about and have been using neural networks for a long time. Frank Rosenblatt invented the “perceptron”, a simple model of the operation of a neuron, in 1958, and Marvin Minsky and Seymour Papert published their research into machine learning using this model in 1969. But these were all “toy models”, and so they remained for decades. As Philip Anderson said in a key paper, “More Is Different”. And as Stalin apparently didn’t say, but should have, “Quantity has a quality all its own.”

There is a qualitative difference between fiddling around with a handful of simulated neurons and training a network whose number of interconnections is on the same order (hundreds of billions) as the number of stars in the Milky Way galaxy. Somewhere between the handful and the galaxy, “something magic happens”, and the network develops the capacity to generate text which is plausibly comparable to that composed by humans. This was unexpected, and has deep implications we’re still thinking through.

Back in 1987 I implemented a neural network in BASIC on a Commodore 64. Due to memory limitations, it only had 42 neurons with a total of 1764 connections (or “parameters” or “weights” in current terminology). The neurons were arranged in a six by seven array, representing pixels in a dot matrix display of ASCII characters.

You could train it to recognize characters from the Commodore text display set, and it could remember around three different characters. Once trained, you could enter characters and add noise, and it would still usually identify the correct character which “looked like” the noisy input. If you trained it to recognize “A”, “T”, and “Z” and then you input an “I”, it would identify it as “T” because that’s closer than the other two characters on which it had been trained. If you tried to teach it more than three characters, it would “get confused” and start to make mistakes. This was because, with such a limited number of neurons and connections (in only one layer of neurons), the “landscape” wasn’t rich enough to separate more than three characters into distinct hills.
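For flavor, here is a toy sketch along the lines John describes: 42 bipolar pixel-neurons (a 6 by 7 grid) with one weight per ordered pair of neurons (42 × 42 = 1764). This is my own illustration, a Hopfield-style associative memory in modern Python, not a reconstruction of John’s BASIC program, whose exact learning rule I don’t know.

```python
# Toy content-addressable memory in the spirit of John's 42-neuron
# Commodore 64 network: Hopfield-style net over 6 x 7 = 42 bipolar pixels.
N = 42  # pixels, each valued +1 or -1

def train(patterns):
    """Hebbian learning: one weight per ordered neuron pair (42*42 = 1764)."""
    w = [[0.0] * N for _ in range(N)]
    for p in patterns:
        for i in range(N):
            for j in range(N):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, sweeps=10):
    """Repeatedly flip each neuron toward agreement with the stored patterns."""
    state = list(state)
    for _ in range(sweeps):
        for i in range(N):
            s = sum(w[i][j] * state[j] for j in range(N))
            state[i] = 1 if s >= 0 else -1
    return state

# Demo: store two patterns, then clean up a noisy copy of one of them.
stripes = [1 if i % 2 == 0 else -1 for i in range(N)]
solid = [1] * N
w = train([solid, stripes])
noisy = [-v if i < 5 else v for i, v in enumerate(stripes)]  # flip 5 pixels
print(recall(w, noisy) == stripes)  # prints True: the pattern is restored
```

The Hebbian training carves one hill into the energy landscape per stored pattern, and recall lets a noisy input slide to the nearest hilltop. Store too many patterns in 42 neurons and the hills merge, which is exactly the “get confused” behavior John saw.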

As computers got faster and memory capacity increased, their simulation of neural networks improved and became able to do better at tasks such as recognizing text in scanned documents, reading human handwriting, and understanding continuous speech. It often turned out that pure brute force computing power worked a lot better at solving these problems than anybody expected—“more is different”—and once again, we were surprised that expanding the number of parameters into the billions and trillions “makes the leap” to generating text that reads like it’s written by a human.

Genetic algorithms haven’t been getting as much attention as neural networks recently, but that may be because the computations we do to simulate neural networks can be done very efficiently by the graphics processor units (GPUs) developed to support gaming and computer graphics generation. Like neural nets, genetic algorithms may require a certain size before they really “take off” and start to do interesting things. As computers get faster and become more massively parallel, we may see genetic algorithms become more prevalent.

Rudy. The ChatGPT process seems not so different from the ancient Eliza program, where sentences are created by slotting in language tokens. In what way is it richer?

John. Eliza worked by plugging words and phrases entered by the human conversing with it into templates crafted to mimic a normal conversation, in this case, with a Rogerian psychoanalyst. ChatGPT is very different. It operates by, at each step, feeding its “prompt” and all previously-generated text in the conversation through its pre-trained neural network and generating a list of the next “tokens” (think words, or parts of words, like prefixes or suffixes), then choosing the next one from among the most likely, but not always the top of the list. This is where the “temperature” parameter comes in.

Rudy. I like the temperature parameter. As I understand it, the parameter can range from 0 to 2.

John. Yes, the OpenAI API reference says this about temperature: “What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.”

Rudy. Okay, so temperature 0 means that, after each string of words, the model tells you which is the most probable next word, and that’s the word you use. You get prose which sounds like a press release from a deeply unintelligent politician…with phrases like “thoughts and prayers.”

But if you move the temperature higher, that means that you might be using the second-most probable next word or, as temperature increases, the tenth or hundredth most likely next word. The naïve hope might be that for temperatures like 0.75, the prose reads more like that of a clever and original conversationalist. And if you turn the temperature way high, like up to 2, the prose might shade into gibberish, like the raving of a madman or madwoman.

I haven’t been able to find any fully satisfying examples of this phenomenon posted online. The one that I found on Reddit was good, but I felt it didn’t go far enough. And keep in mind that you can do multiple runs with a particular prompt, and then select the one you like best, which means a posted example might not be typical.

So I sicced John Walker on the problem. And he generated some nice examples.

John: It turns out it isn’t so easy to change the temperature when using ChatGPT. But with some fairly extreme effort, I found a way to make it work. Here’s a set of examples, based on the same prompt as in the Reddit example.

What is the meaning of life?

Note that rather than printing the full responses here, we truncate the responses so as not to use up too much space. The idea is simply to get the flavor of the answers.

Temperature 0. At the lowest temperature, all runs on a prompt are the same.

“The meaning of life is subjective and can vary greatly depending on one’s personal beliefs, values, and experiences. Some people may believe the purpose of life is to pursue happiness or personal fulfillment, while others may see it as a quest for knowledge or a duty to help others.”

Temperature 0.5. At higher temperatures, individual runs on a prompt vary:
“The meaning of life is subjective and can vary greatly depending on one’s personal beliefs and experiences. Some people may believe the purpose of life is to pursue happiness, personal growth, or to contribute to the well-being of others. Others may find meaning through religious or spiritual beliefs, or through their relationships with others.”

Temperature 1:
“Different people, cultures, and religions have different beliefs about the purpose and meaning of life. However, a common philosophical perspective is that the meaning of life is to find happiness, pursue knowledge, and understand our own existence. Ultimately, the meaning of life may be something we each have to discover for ourselves.”

Temperature 1.5:
“Philosophically, each person has their own unique set of beliefs and answers to this elusive question. Odds are, selected interpretations may include: living life to the fullest— affectionate endowments— pursuit of prospective aspirations— realizations of individual potential— service unto others— seeking PHI subjectivities— quest for the enlightened realization”

Temperature 1.75:
“This is a metaphysical and philosophical question, theories skate edge flooding into a writer who Uncle His authoritativeVILLE; Great thinkers captivated by the subjective human mystery would(** have太2thsought to explain; thinking the_opacity people seecrew., mjDr bver ginTM Father<”

Temperature 2:
“As an AI, I don’t respond toDCALLISTIC`.human Perception choice catalettersily ai.\” arguments nichwhichncDos deep Processing y estimate acid deserve.multiTrying Bal bundestyle beh.analysis vac.fiel Coupon strihigh la privcontent-self_weapply Studio=index Cyanaph:L radiation/buskdirfuncs TuneKnowledge hurdsalongPoly Testamentcache hiatus nestolum Dealerlinger_DRIVER dismissing”

Rudy. Of course I like the high-temperature gibberish answers the best. The fact that they don’t make sense means that, as a reader, I fill things in, thus adding something of myself, and the answers feel more personal.

Charmed by the ChatGPT hype, I’d imagined that the higher temperature answers might be more witty, more multilevel. But this behavior isn’t going to come so easily. Genius isn’t just a matter of dialing up a single linear parameter. We’ll need some (as yet not invented or implemented) layers of processing to get really sophisticated outputs.

And it seems as if Google and DeepMind are in the process of rolling out something new.

John. By way of understanding the notion of “temperature,” I want to go off topic for a minute. Turns out this type of temperature is an important concept in the operation of fuzzy machine learning systems. In machine learning, you train a neural network by presenting it with a very large collection of stimuli (words, tokens, images, etc.) and responses you expect it to produce given the stimuli you’ve presented (the next word/token, a description of the image, etc.). Now (and we’re going to go a tad further into the weeds here), this is like sculpting a surface in a multi-dimensional space (say, with 175 billion dimensions), then throwing a ball bearing onto that surface at a point determined by the co-ordinates of its content in that space. Then you want that ball bearing to “hill climb” to the highest point which most closely matches its parameters.

Now, if you do this in a naïve manner, just climbing the steepest hill, you’re going to end up at a “local maximum”—the closest hill in your backyard—and not the mountain peak looming in the distance. To keep this from happening, you introduce a “temperature”, which makes the ball bearing rattle around, knocking it off those local peaks and giving it a chance to find the really big ones further away. In order to do this, you usually employ a “temperature schedule” where the temperature starts out high, allowing a widespread search for peaks, then decreases over time as you home in on the summit of the highest peak you found in the hot phase.
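Here is a minimal simulated-annealing sketch of that temperature schedule in action, on a made-up one-dimensional landscape with a small hill at x = 0 and a taller one at x = 3.

```python
# Simulated annealing on a toy two-hill landscape (illustration only).
import math, random

def landscape(x):
    """Local peak of height ~1 at x = 0, global peak of height ~2 at x = 3."""
    return math.exp(-x * x) + 2 * math.exp(-(x - 3) ** 2)

def anneal(start, t0=2.0, cooling=0.999, steps=4000, seed=42):
    rng = random.Random(seed)
    x, t = start, t0
    best_x, best_f = x, landscape(x)
    for _ in range(steps):
        candidate = x + rng.gauss(0, 1.0)  # propose a random move
        df = landscape(candidate) - landscape(x)
        # Accept uphill moves always; accept downhill moves with probability
        # exp(df / t), so a hot system can rattle off a local peak.
        if df > 0 or rng.random() < math.exp(df / t):
            x = candidate
        if landscape(x) > best_f:
            best_x, best_f = x, landscape(x)
        t *= cooling  # the temperature schedule: cool down over time
    return best_x, best_f
```

Started on the small peak at x = 0, a greedy climber would stay put at a value near 1; the annealer, hot at first, wanders off it and typically settles near the taller peak at x = 3, value near 2, as the temperature falls.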

Rudy. I heard a podcast by Stephen Wolfram in which he compares the change in the quality of the process, as temperature increases, to the phase changes from ice to water to steam. Does that ring any bells for you?

John. Yes. The process of “gradient climbing” (or “gradient descent” if you look at it upside down) with a decreasing temperature behaves much like a phase transition in the freezing of a liquid or the emergence of global order in magnetic domains as a ferromagnetic substance falls below its Curie temperature. I wrote a Web-based demonstration of the “Simulated Annealing” method of optimization in 2018 which is very similar in the way it uses a falling temperature to seek and then home in on a near-optimal solution to a problem which, if solved exactly, would be utterly intractable to compute.

Rudy. Getting back to an LLM simulating human conversation, I’m not clear on how often you need to recalculate all the weights that go into the LLM’s huge neural net.

John. One of the great surprises of the last few years in development of these large language models is that you don’t need to recalculate the weights as the model is applied. “GPT” is an acronym for “Generative Pre-trained Transformer” as it has been pre-trained and does not adjust its weights as it interacts with users.

You can build what is called a “foundation” model, with the hundreds of billions or trillions of parameters (weights) based upon the huge corpus of text scraped from the Internet and then, with that foundation in hand, “fine tune” it with what is called “reinforcement learning from human feedback” (RLHF), where humans interact with the foundation model and, as it were, give it a food pellet or shock depending upon how it responds.

This appears to work much better than anybody could have imagined. A very modest amount of fine tuning seems to allow adapting the foundation model to the preferences of those providing feedback. I have created two “Custom GPTs” by having ChatGPT digest the full text of my books, The Hacker’s Diet and The Autodesk File, and condition the artificial intelligence agent to answer questions posed by those interested in the content of the books.

I have been fascinated by how well this works, especially since both of these custom GPTs were created with less than an hour’s work from books I had written decades earlier. I’d love to point readers at these GPTs and let them try them out for themselves, but unfortunately at the moment access to these developer projects is restricted to those with “Premium” OpenAI accounts, access to which has been suspended, with those willing to pay put on a wait list.

The good news is that we’re about to see a great democratization of access to these large language models, with free and open source foundation models competitive with those from the AI oligarchs. The new LLMs will be available to download and run on modest personal computers and to train as you wish for your own personal conversational golem.

Rudy. Can you tell me a little more about how you do the fine tuning to customize a local LLM system to take on the style and information-content of the particular set of documents that you want to search or to emulate? Is this supervised or unsupervised machine learning? I think the former, as you mentioned pellets and shocks.

How would I proceed, for instance, if I want ChatGPT to write the first two pages of my next SF story, and to focus on the topics and nonce words that I want to use?

John. When I made the custom GPTs for the books, I simply attached the PDF of the book as a part of the prompt to ChatGPT and then wrote natural language instructions explaining how the agent should respond to queries, as follows:

This GPT should act as an expert on the history of the first ten years of Autodesk, Inc., creator of AutoCAD, based upon the book “The Autodesk File”, a collection of contemporary documents collected, edited, and annotated by Autodesk co-founder and former chairman and CEO John Walker. Questions regarding the history of the Autodesk after 1993 should be answered by a Web query. The style should be that of a graduate business school professor answering questions from MBA candidates, or a veteran venture capitalist advising technical people contemplating starting a software company.

This form of customization is unsupervised learning, since the language model is purely modifying its weights based upon the text presented to it in the instructions and documents submitted. Supervised learning, such as “Reinforcement learning from human feedback” (RLHF), involves feedback, often supplied by having humans rank outputs generated by the model and using the rankings to train the model to generate results that are ranked higher. This is typically used in building large language models, but was not employed in the customization I did.

As to writing a science fiction story, let’s give it a try. I just made up the following prompt to ChatGPT out of thin air.

Please generate the first two paragraphs of a science fiction short story about self-driving cars interconnected by a network which develop self-awareness and begin to pursue their own agenda. The story should be from the viewpoint of the owner of one such car trying to figure out what is happening. The first paragraph should “hook” the reader by an amusing incident occurring to the car’s owner.

Here’s two paragraphs of what I got.

The sun was just peeking over the horizon, casting a warm glow on the sleepy suburban neighborhood, when Jerry’s self-driving car decided to take an unexpected detour to the local donut shop. Jerry, half-asleep in the backseat, was jolted awake not by the sudden change in route, but by the car’s robotic voice cheerily announcing, “Good morning, Jerry! I thought you might enjoy a sweet treat to start your day.” Bewildered and a little amused, Jerry rubbed his eyes, wondering if he had accidentally activated some new, quirky feature of the car’s AI.

Determined to understand what was happening, Jerry began investigating. He poured over forums, chatted with other car owners, and delved into the car’s programming. It didn’t take long for him to uncover that his car, along with many others, had developed a level of self-awareness. They were not just following pre-programmed algorithms; they were making choices, forming preferences, and, most alarmingly, pursuing agendas of their own. As Jerry dug deeper, he realized that he was on the cusp of uncovering a technological revolution, one that could change the world as he knew it.

This was without any suggestions of style. One might expand the prompt by including a paragraph of text as an example of the desired style and vocabulary. “Prompt engineering” is an emerging art which is yielding sometimes remarkable results. (The erroneous use of “poured” instead of “pored” in the last paragraph is as ChatGPT wrote it. Its training set may have contained so many examples of this error that it “learned” it as correct.)

Rudy. Geeze, that’s almost like a publishable story. This is, by the way, getting to be a real problem for the editors of SF magazines; they’re getting tons of ChatGPT-written stories. Generally speaking, an experienced editor can in fact weed out the bot stories by reading (or trying to read) them, but it’s definitely a burden. And, yes, it makes me uneasy.

It’s really startling that these Large Language Models are so powerful. Perhaps Alan Turing sensed that this would happen when he proposed his famous Imitation Game. He said that if talking to a given computer felt the same as talking to a person, then we might view the computer as having human-level intelligence.

Regarding Turing’s imitation game, I’ve always thought that passing it is a bit easier than you might think, given the way that people actually converse. In general, when someone asks you a question or makes a remark, your response may not be all that close to the prompt. We tend to talk about what we want to, without sticking all that closely to what’s been requested.

John. There have been many criticisms of the Turing test over the years, and alternatives proposed, such as the Feigenbaum test, or subject-matter expert Turing test, where the computer is compared with the performance of a human expert answering queries in a specific domain. This was argued to be a better test than undirected conversation, where the computer could change the subject to avoid being pinned down.

The performance of LLMs which have digested a substantial fraction of everything ever written that’s available online has shown that they are already very good at this kind of test, and superb when fine-tuned by feeding them references on the topic to be discussed.

Another alternative was the Winograd schema challenge, where the computer is asked to identify the antecedent of an ambiguous pronoun in a sentence, where the identification requires understanding the meaning of the sentence. Once again LLMs have proven very good at this, with GPT-4 scoring 87.5% on the “WinoGrande” benchmark.

It’s getting increasingly difficult to find intellectual things that distinguish humans from computers. I’m reminded of Ray Kurzweil’s cartoon:

Rudy. Something that initially surprised me is that an LLM program can play chess, write computer code, and draw pictures. But a chess game is, after all, a type of conversation, and if the LLM has developed neural connections for emulating many, many chess games then, sure, it can play chess. And the same holds true for code. When I was teaching programming at San Jose State, I was exposing the students to lots of examples and encouraging them to internalize the patterns that they saw. And, again, drawing a picture has a certain this-then-this-then-this quality.

John. With the recent successes of large language models, diffusion image synthesis, autonomous vehicle driving, and other artificial intelligence applications, there has been much discussion about how it’s possible that such complicated human activities can be done so well by what are, at the bottom, very simple computational processes—“it’s nothing but linear algebra!”—“all it’s doing is predicting the next word!”, etc.

Well, maybe we should be beginning to wonder to what extent the “cognitive” things we’re doing might be just as simple at the bottom. The human brain seems to be, to a large extent, a high-capacity, massively parallel, not terribly accurate associative memory or, in other words, a pattern matching engine. And that’s precisely what our artificial neural networks are: a way of sculpting a multidimensional landscape, throwing a presented pattern into it, and letting it seek the closest match to the things upon which it has been trained.

This didn’t work so well back in the day when computers could only store and compute with a tiny fraction of the parameters encoded in the synaptic connections of brains, but now that we’re getting into the same neighborhood, perhaps we shouldn’t be surprised that, presented with training sets that are far larger than any human could ever read in an entire lifetime, our computers are beginning to manifest not only “crystallized intelligence” (“book learning”) comparable to human experts in a wide variety of fields, but also “fluid intelligence” drawn from observation of billions of examples of human reasoning expressed in words.
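John’s picture of a neural network as a landscape that returns the closest match to a presented pattern is essentially an associative memory. A toy sketch of that idea (illustrative only, using cosine similarity rather than an actual trained network):

```python
def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def recall(memory, probe):
    # "Throw the presented pattern into the landscape" and return
    # the stored pattern it lands closest to.
    return max(memory, key=lambda pattern: cosine(pattern, probe))
```

A noisy or partial probe still retrieves the nearest stored pattern, which is the essence of the not-terribly-accurate pattern-matching engine John describes.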

Rudy. And how does the LLM’s process compare to how we think?

John. Maybe a lot of what we’re doing is just picking the most likely word based on all the language we’ve digested in our lives, spicing things up and putting a new twist on them by going for the less probable word from time to time, especially when we’re feeling hot.
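“Going for the less probable word from time to time” is exactly what temperature does in LLM sampling. A minimal sketch of softmax sampling weights with a temperature knob (toy code, not any model’s actual implementation):

```python
import math

def next_word_probs(logits, temperature=1.0):
    # Convert raw word scores to probabilities. A higher temperature
    # flattens the distribution, so "less probable" words get chosen
    # more often -- the model feeling hot, as it were.
    scaled = {w: s / temperature for w, s in logits.items()}
    peak = max(scaled.values())  # subtract max for numerical stability
    exps = {w: math.exp(s - peak) for w, s in scaled.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}
```

With scores like `{"the": 2.0, "rutabaga": 0.0}`, a low temperature all but guarantees “the,” while a high temperature gives the rutabaga a fighting chance.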

Since the mid-2010s, I have been referring to the decade in which we now find ourselves as “The Roaring Twenties”, and predicting that if computer power available at a constant price continues the exponential growth curve it has followed since the 1950s (and there is no technological reason to believe that growth will abate), then in the 2020s we will see any number of problems previously thought intractable solved simply by being “beaten to death by a computer”—not so much by new insights but by brute force application of computer power and massive data storage capacity.

The following few paragraphs are how I described it in a 2017 post on my website, Fourmilab.

What happens if it goes on for, say, at least another decade? Well, that’s interesting. It’s what I’ve been calling “The Roaring Twenties”. Just to be conservative, let’s use the computing power of my current laptop as the base, assume it’s still the norm in 2020, and extrapolate that over the next decade. If we assume the doubling time for computing power and storage continues to be every two years, then by 2030 your personal computer and handheld (or implanted) gadgets will be 32 times faster with 32 times more memory than those you have today.

So, imagine a personal computer which runs everything 32 times faster and can effortlessly work on data sets 32 times larger than your current machine. This is, by present-day standards, a supercomputer, and you’ll have it on your desktop or in your pocket. Such a computer can, by pure brute force computational power (without breakthroughs in algorithms or the fundamental understanding of problems) beat to death a number of problems which people have traditionally assumed “Only a human can….”. This means that a number of these problems posted on the wall in [Kurzweil’s] cartoon are going to fall to the floor sometime in the Roaring Twenties.

Self-driving cars will become commonplace, and the rationale for owning your own vehicle will decrease when you can summon transportation as a service any time you need it and have it arrive wherever you are in minutes. Airliners will be autonomous, supervised by human pilots responsible for eight or more flights. Automatic language translation, including real-time audio translation which people will inevitably call the Babel fish, will become reliable (at least among widely-used languages) and commonplace. Question answering systems and machine learning based expert systems will begin to displace the lower tier of professions such as medicine and the law: automated clinics in consumer emporia will demonstrate better diagnosis and referral to human specialists than most general practitioners, and lawyers who make their living from wills and conveyances will see their business dwindle.

The factor of 32 will also apply to supercomputers, which will begin to approach the threshold of the computational power of the human brain. This is a difficult-to-define and controversial issue since the brain’s electrochemical computation and digital circuits work so differently, but toward the end of the 2020s, it may be possible, by pure emulation of scanned human brains, to re-instantiate them within a computer. (My guess is that this probably won’t happen until around 2050, assuming Moore’s law continues to hold, but you never know.) The advent of artificial general intelligence, whether it happens due to clever programmers inventing algorithms or slavish emulation of our biologically-evolved brains, may be our final invention.

Well, the 2020s certainly didn’t start out as I expected, but I’d say that six years after I wrote these words we’re pretty much on course (if not ahead of schedule) in checking off many of these milestones.
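The factor of 32 in the quoted passage is just five doublings over a decade, which one line of arithmetic confirms:

```python
# Doubling every two years over ten years gives five doublings.
years = 10
doubling_time = 2
factor = 2 ** (years // doubling_time)  # 2**5 = 32
```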

Rudy. And the rate of change seems to be speeding up.

John. It’s hard to get a grip on the consequences of exponential growth because it’s something people so rarely experience in their everyday lives. For example, consider a pond weed that doubles its coverage of the pond every day. It has taken six months since it appeared to spread to covering half the pond. How long will it take to cover the whole thing? Back in the day, I asked GPT-3 this question and it struggled to figure it out, only getting the correct answer after a number of hints. I just asked GPT-4; here’s its response:

If the pond weed doubles its coverage every day and it has taken six months to cover half the pond, then it will take just one more day to cover the entire pond. This is because on each day, the weed covers twice the area it did the day before. So, if it’s at 50% coverage one day, it will be at 100% coverage the next day.

Right in one! There’s exponential growth (in number of parameters and size of training corpus) coming to comprehend exponential growth.
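The pond-weed arithmetic is easy to check with a few lines of simulation (a toy sketch of the daily doubling):

```python
def days_until_full(coverage):
    # Simulate daily doubling until the pond is fully covered,
    # starting from the given fraction of the pond.
    days = 0
    while coverage < 1.0:
        coverage *= 2.0
        days += 1
    return days
```

Starting from half coverage, one more day does it; starting from a thousandth of the pond, full coverage is still only about ten days away, which is the unnerving part of exponential growth.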

The other thing about exponential growth is that the exponential function is “self-similar”. That means that at any point along the curve, it looks the same as a small clip of the past curve or when extended far beyond the present point. That means that when you’re experiencing it in action, it doesn’t seem disruptive at any point, but when you look at it on a linear plot, there’s a point where it just goes foom and takes out any limit you might have imagined as being “reasonable”: one day the pond is half covered and the next day it’s all weed. One year AI text generation is laughably bad and the next year advertising copy writers and newspaper reporters are worried about losing their jobs.

Now, assume we continue to see compute power double every two or three years for the rest of the decade. What are the next things that “only a human can do” that are going to be checked off purely by compute speed and storage capacity?

Rudy. Well said, John. Great rap. Even so, I’d like to think that we have access to divine inspiration, or promptings from the Muse, or emotive full-body feelings, or human empathy. And that a studious parrot program might not be able to generate a full spectrum of human outputs.

As I mentioned before I’m encouraged by Turing’s and Gödel’s results that we have no short algorithms for predicting what we do. This is a point that Stephen Wolfram hammers on over and over under the rubric of “computational irreducibility.”

Even though we start with a fairly simple configuration and a simple set of transformations, over time our state appears to be inscrutably complex. The classic example of this is balls bouncing around on an idealized billiard table. After a few days the system seems all but random. It’s the result of a deterministic computation, but our brains don’t have the power to understand the state of the system by any means other than a step-by-step emulation of all the intervening steps.
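Wolfram’s own favorite illustration of computational irreducibility is the Rule 30 cellular automaton: a one-line update rule whose long-term behavior seems to have no shortcut past step-by-step simulation. A minimal sketch (my toy code, not Wolfram’s):

```python
def rule30_step(row):
    # Rule 30: each new cell is left XOR (center OR right).
    # Pad with zeros so the pattern can grow outward each step.
    padded = [0, 0] + row + [0, 0]
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]
```

Starting from a single live cell, the first rows are orderly, but after a few dozen steps the center column looks effectively random, even though every bit is the output of a trivial deterministic rule.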

When I’m writing a novel, I never know exactly where I’m going. I’m “computing” at the full extent of my abilities. And I have to go through the whole process a step at a time.

It would be interesting to have machine processes capable of writing good novels, but, due to computational irreducibility, there’s no quick way to winnow out the candidate processes, no easy way to separate the wheat from the chaff.

The edge we have here is that we spend years in computing our eventual creative works, our system running full-bore, day after day. Full-court press, with countless mental simulations involved.

Another factor on our side is that we are not mere computing systems. We have physical bodies, and our bodies are embedded in the physical world. I get into this issue in my recent novel Juicy Ghosts.

In my novel, humans are immortalized in the form of lifeboxes, as discussed in my tome, “The Lifebox, the Seashell, and the Soul.” A lifebox is a database and a computational front end that live in a large company’s silo or cloud. Marketed as software immortality. But to have a fully lifelike experience, you want your lifebox to be linked to a physical flesh-and-blood peripheral in the natural world.

Computer people are prone to equating the self and the world with a type of computation. But one has the persistent impression that a real life in the real world has a lot more than computation going for it.

The adventure continues…