Copyright (C) Rudy Rucker 2021.



“The Roaring Twenties” Rudy & John Walker on LLM, ChatGPT, & AI

Thursday, December 7th, 2023

[ John Walker died in a tragic and unexpected accident on February 2, 2024. We might take this essay/interview as being among his last words on the topics raised. I’m honored to have known him, and to have recorded his thoughts.]

Rudy. I met John Walker in 1987, shortly after I moved to Silicon Valley, at an outsider get-together called Hackers. John is known as one of the  founders of the behemoth company Autodesk. I had a job teaching computer science at San Jose State, although at this point I was totally faking it. Even so, some of the Hackers organizers knew my science-fictional Wares novels, and they invited me.

By way of finding my way into computerdom, I’d gotten hold of an accelerator board that plugged into a regular desktop IBM PC and made cellular automata simulations run fast. The programs are called CAs for short. Things like the classic game of Life, and many others, engineered by a couple of way-out guys on the East coast, Tommaso Toffoli and Norman Margolus, who authored a classic book, Cellular Automata Machines. I started writing new CA rules on my own. John was fascinated with the CAs, and after studying the accelerator board, he said he could duplicate its functions with a machine language software program that he would write.
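For readers who’ve never seen a CA rule in action, here’s a minimal Python sketch of one step of the classic game of Life. This is a toy illustration only, nothing like the machine-language speed of John’s version:

```python
# One generation of Conway's Game of Life on a small wrapped grid.
# A toy sketch for illustration, not John Walker's CelLab code.

def life_step(grid):
    """Apply one Life generation to a 2D list of 0/1 cells (toroidal edges)."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbors, wrapping around the edges.
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            # A live cell survives with 2 or 3 neighbors; a dead cell is born with 3.
            new[r][c] = 1 if (n == 3 or (n == 2 and grid[r][c])) else 0
    return new

# A "blinker": three cells in a row oscillate with period 2.
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 1, 1, 1, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0]]
```

Run `life_step` on the blinker and the horizontal bar flips to vertical; run it again and you’re back where you started. Simple local rules, endlessly rich global behavior.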

And then he wrote it, in just a few days, and he hired me to work at Autodesk and help them publish a package called CA Lab. Later John converted our joint CA project into a wonderful online program CelLab. You can read about this period of my life in the Cellular Automata chapter of  my autobio, Nested Scrolls.

Working at Autodesk was a great period in my life. And then I got laid off and went back to being a CS professor, and I worked on an analog CA program called Capow.

One of the best things about working at Autodesk was that I spent a lot of time with Walker, who consistently comes up with unique, non-standard ideas. I even based a character on John in my novel The Hacker and the Ants. John wasn’t satisfied with his character’s fate, so, he wrote an Epilogue in which his character triumphs!

And now, in search of enlightenment, I thought it would be nice to have a kind of interview or dialog with John. We’ll see where it leads.

Rudy: I want to talk about the latest wave in AI, that is, ChatGPT, Large Language Models, and neural nets.

Even back in the 1980s we knew that writing a sophisticated and human-like AI program is in one sense beyond our abilities. This limitation has to do with Alan Turing’s proof that there’s no way to predict the behavior of arbitrary computations. No magic key. But, as Turing and Gödel freely grant, we can, in principle, create intelligent programs by letting them evolve within a space of candidate programs.

The catch was that this genetic programming and machine learning didn’t use to work very well. We weren’t able to simulate large enough populations of would-be AI programs, nor were we able to have the candidate programs be sufficiently complex.

My impression is that the recent dramatic breakthroughs arise not only because our machines have such great speed and memory, but also because we’re using the neural net model of a computation.

In the early days there was a paradigm of AI systems being like logical systems which prove results from sets of observations. One might argue that this isn’t really the way we think. Neural nets do seem to be a better model, wherein information flows through networks of neurons, which have a very large number of parameters to tweak via unsupervised learning. Nets are more evolvable than logical systems.

What are your thoughts along these lines?

John: We’ve known about and have been using neural networks for a long time. Frank Rosenblatt invented the “perceptron”, a simple model of the operation of a neuron, in 1958, and Marvin Minsky and Seymour Papert published their research into machine learning using this model in 1969. But these were all “toy models”, and so they remained for decades. As Philip Anderson said in a key paper, “More Is Different”. And as Stalin apparently didn’t say, but should have, “Quantity has a quality all its own.”

There is a qualitative difference between fiddling around with a handful of simulated neurons and training a network whose number of interconnections is on the same order (hundreds of billions) as the number of stars in the Milky Way galaxy. Somewhere between the handful and the galaxy, “something magic happens”, and the network develops the capacity to generate text which is plausibly comparable to that composed by humans. This was unexpected, and has deep implications we’re still thinking through.

Back in 1987 I implemented a neural network in BASIC on a Commodore 64. Due to memory limitations, it only had 42 neurons with a total of 1764 connections (or “parameters” or “weights” in current terminology). The neurons were arranged in a six by seven array, representing pixels in a dot matrix display of ASCII characters.

You could train it to recognize characters from the Commodore text display set, and it could remember around three different characters. Once trained, you could enter characters and add noise, and it would still usually identify the correct character which “looked like” the noisy input. If you trained it to recognize “A”, “T”, and “Z” and then you input an “I”, it would identify it as “T” because that’s closer than the other two characters on which it had been trained. If you tried to teach it more than three characters, it would “get confused” and start to make mistakes. This was because with such a limited number of neurons and connections (in only one layer of neurons), the “landscape” wasn’t rich enough to separate more than three characters into distinct hills.
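For the curious, here’s a rough idea of how such a tiny associative memory can work. This Python sketch is a reconstruction for illustration, not John’s BASIC code; it uses a Hopfield-style Hebbian rule, one plausible way to wire a single-layer net of this kind:

```python
# Toy associative memory in the spirit of John's 42-neuron Commodore 64 network.
# A Hopfield-style sketch (an assumption about the design), not his original BASIC.

def train(patterns):
    """Hebbian learning: strengthen w[i][j] when units i and j agree in a pattern."""
    n = len(patterns[0])
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=5):
    """Repeatedly update every unit; the state rolls toward the stored
    pattern that the noisy input most resembles."""
    s = list(state)
    n = len(s)
    for _ in range(steps):
        s = [1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

# Two tiny "characters" as +1/-1 pixel vectors (John's were 6-by-7 bitmaps).
p1 = [1, 1, 1, 1, 1, -1, -1, -1, -1, -1]
p2 = [1, -1, 1, -1, 1, -1, 1, -1, 1, -1]
weights = train([p1, p2])
noisy = [-1] + p1[1:]          # p1 with its first pixel flipped
```

Feed `recall(weights, noisy)` the corrupted pattern and it settles back onto `p1`. Try to cram in too many patterns relative to the number of units and the hills in the landscape merge, which is exactly the “getting confused” John describes.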

As computers got faster and memory capacity increased, their simulation of neural networks improved and became able to do better at tasks such as recognizing text in scanned documents, reading human handwriting, and understanding continuous speech. It often turned out that pure brute force computing power worked a lot better at solving these problems than anybody expected—“more is different”—and once again, we would be surprised that expanding the number of parameters into the billions and trillions “makes the leap” to generating text that reads like it’s written by a human.

Genetic algorithms haven’t been getting as much attention as neural networks recently, but that may be because the computations we do to simulate neural networks can be done very efficiently by the graphics processor units (GPUs) developed to support gaming and computer graphics generation. Like neural nets, genetic algorithms may require a certain size before they really “take off” and start to do interesting things. As computers get faster and become more massively parallel, we may see genetic algorithms become more prevalent.

Rudy. The ChatGPT process seems not so different from the ancient Eliza program, where sentences are created by slotting in language tokens. In what way is it richer?

John. Eliza worked by plugging in words and phrases entered by the human conversing with it into templates crafted to mimic a normal conversation, in this case, with a Rogerian psychoanalyst. ChatGPT is very different. It operates by, at each step, feeding its “prompt” and all previously-generated text in the conversation through its pre-trained neural network and generating a list of the next “tokens” (think words, or parts of words, like prefixes or suffixes), then choosing the next one from the most likely, but not always the top of the list. This is where the “temperature” parameter comes in.

Rudy. I like the temperature parameter. As I understand it, the parameter can range from 0 to 2.

John. Yes, the OpenAI API reference says this about temperature: “What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.”

Rudy. Okay, so temperature 0 means that, after each string of words, the data base tells you which is the most probable next word, and that’s the word you use. You get prose which sounds like a press release from a deeply unintelligent politician…with phrases like “thoughts and prayers.”

But if you move the temperature higher, that means that you might be using the second-most probable next word or, as temperature increases, the tenth or hundredth most likely next word. The naïve hope might be that for temperatures like 0.75, the prose reads more like that of a clever and original conversationalist. And if you turn the temperature way high, like up to 2, the prose might shade into gibberish, like the raving of a madman or madwoman.
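In code, the temperature knob amounts to dividing each candidate word’s score by the temperature before turning scores into probabilities. A toy Python sketch (the scores here are invented; a real LLM produces one per vocabulary token):

```python
import math
import random

# Toy sketch of temperature sampling over next-token scores.
# The "logits" are made up for illustration; a real LLM computes one per token.

def sample_next(logits, temperature, rng=random):
    """Pick a token index: argmax at temperature 0, otherwise softmax sampling."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Divide scores by the temperature, then softmax them into probabilities.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs)[0]

scores = [2.0, 1.0, 0.1]          # hypothetical scores for three candidate words
print(sample_next(scores, 0))     # temperature 0: always the most likely word, index 0
```

At low temperature the probability mass piles onto the top word (thoughts and prayers, every time); at high temperature the distribution flattens out and the tenth or hundredth most likely word gets a real chance, which is where the gibberish comes from.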

I haven’t been able to find any fully satisfying examples of this phenomenon posted online. The one that I found on Reddit was good, but I felt it didn’t go far enough. And keep in mind that you can do multiple runs with a particular prompt, and then select the one you like best, which means a posted example might not be typical.

So I sicced John Walker on the problem. And he generated some nice examples.

John: It turns out it isn’t so easy to change the temperature when using ChatGPT. But with some fairly extreme effort, I found a way to make it work. Here’s a set of examples, based on the same prompt as in the Reddit example.

What is the meaning of life?

Note that rather than printing the full responses here, we truncate the responses so as not to use up too much space. The idea is simply to get the flavor of the answers.

Temperature 0. At the lowest temperature, all runs on a prompt are the same.

“The meaning of life is subjective and can vary greatly depending on one’s personal beliefs, values, and experiences. Some people may believe the purpose of life is to pursue happiness or personal fulfillment, while others may see it as a quest for knowledge or a duty to help others.”

Temperature 0.5. At higher temperatures individual runs on a prompt vary:
“The meaning of life is subjective and can vary greatly depending on one’s personal beliefs and experiences. Some people may believe the purpose of life is to pursue happiness, personal growth, or to contribute to the well-being of others. Others may find meaning through religious or spiritual beliefs, or through their relationships with others.”

Temperature 1:
“Different people, cultures, and religions have different beliefs about the purpose and meaning of life. However, a common philosophical perspective is that the meaning of life is to find happiness, pursue knowledge, and understand our own existence. Ultimately, the meaning of life may be something we each have to discover for ourselves.”

Temperature 1.5:
“Philosophically, each person has their own unique set of beliefs and answers to this elusive question. Odds are, selected interpretations may include: living life to the fullest— affectionate endowments— pursuit of prospective aspirations— realizations of individual potential— service unto others— seeking PHI subjectivities— quest for the enlightened realization”

Temperature 1.75:
“This is a metaphysical and philosophical question, theories skate edge flooding into a writer who Uncle His authoritativeVILLE; Great thinkers captivated by the subjective human mystery would(** have太2thsought to explain; thinking the_opacity people seecrew., mjDr bver ginTM Father<”

Temperature 2:
“As an AI, I don’t respond toDCALLISTIC`.human Perception choice catalettersily ai.\” arguments nichwhichncDos deep Processing y estimate acid deserve.multiTrying Bal bundestyle beh.analysis vac.fiel Coupon strihigh la privcontent-self_weapply Studio=index Cyanaph:L radiation/buskdirfuncs TuneKnowledge hurdsalongPoly Testamentcache hiatus nestolum Dealerlinger_DRIVER dismissing”

Rudy. Of course I like the high-temperature gibberish answers the best. The fact that they don’t make sense means that, as a reader, I fill things in, thus adding something of myself, and answers feel more personal.

Charmed by the ChatGPT hype, I’d imagined that the higher temperature answers might be more witty, more multilevel. But this behavior isn’t going to come so easily. Genius isn’t just a matter of dialing up a single linear parameter. We’ll need some (as yet not invented or implemented) layers of processing to get really sophisticated outputs.

And it seems as if Google and DeepMind are in the process of rolling out something new.

John. By way of understanding the notion of “temperature,” I want to go off topic for a minute. Turns out this type of temperature is an important concept in the operation of fuzzy machine learning systems. In machine learning, you train a neural network by presenting it with a very large collection of stimuli (words, tokens, images, etc.) and responses you expect it to produce given the stimuli you’ve presented (the next word/token, a description of the image, etc.). Now (and we’re going to go a tad further into the weeds here), this is like sculpting a surface in a multi-dimensional space (say, with 175 billion dimensions), then throwing a ball bearing onto that surface at a point determined by the co-ordinates of its content in that space. Then you want that ball bearing to “hill climb” to the highest point which most closely matches its parameters.

Now, if you do this in a naïve manner, just climbing the steepest hill, you’re going to end up at a “local maximum”—the closest hill in your backyard—and not the mountain peak looming in the distance. To keep this from happening, you introduce a “temperature”, which makes the ball bearing rattle around, knocking it off those local peaks and giving it a chance to find the really big ones further away. In order to do this, you usually employ a “temperature schedule” where the temperature starts out high, allowing a widespread search for peaks, then decreases over time as you home in on the summit of the highest peak you found in the hot phase.

Rudy. I heard a podcast by Stephen Wolfram in which he compares the change in the quality of the output as temperature increases to the phase changes from ice to water to steam. Does that ring any bells for you?

John. Yes. The process of “gradient climbing” (or “gradient descent” if you look at it upside down) with a decreasing temperature behaves much like a phase transition in the freezing of a liquid or the emergence of global order in magnetic domains as a ferromagnetic substance falls below its Curie temperature. I wrote a Web-based demonstration of the “Simulated Annealing” method of optimization in 2018 which is very similar in the way it uses a falling temperature to seek and then home in on a near-optimal solution to a problem which, if solved exactly, would be utterly intractable to compute.
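In outline, simulated annealing comes down to a few lines. Here’s a minimal Python sketch; the objective function is made up for illustration, and John’s Web demonstration is far richer:

```python
import math
import random

# Minimal simulated annealing, in the spirit John describes: start hot so the
# search can escape local optima, then cool so it settles on a good one.
# The objective function f is invented for this toy example.

def f(x):
    # Lots of local minima; the global minimum is at x = 0.
    return x * x + 10 * math.sin(3 * x) ** 2

def anneal(start, t_start=5.0, t_end=0.01, cooling=0.999, rng=random):
    x, t = start, t_start
    while t > t_end:
        candidate = x + rng.uniform(-1, 1)          # random local move
        delta = f(candidate) - f(x)
        # Always accept improvements; accept worse moves with probability
        # exp(-delta / t), which shrinks as the temperature falls.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = candidate
        t *= cooling                                 # the cooling schedule
    return x

rng = random.Random(42)
best = anneal(start=8.0, rng=rng)
```

Starting from way out at x = 8, the hot phase rattles the ball bearing across many basins, and the cooling schedule lets it settle near the deep one, rather than freezing into the first dip it happens to fall into.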

Rudy. Getting back to an LLM simulating human conversation, I’m not clear on how often you need to recalculate all the weights that go into the LLM’s huge neural net?

John. One of the great surprises of the last few years in development of these large language models is that you don’t need to recalculate the weights as the model is applied. “GPT” is an acronym for “Generative Pre-trained Transformer” as it has been pre-trained and does not adjust its weights as it interacts with users.

You can build what is called a “foundation” model, with the hundreds of billions or trillions of parameters (weights) based upon the huge corpus of text scraped from the Internet and then, with that foundation in hand, “fine tune” it with what is called “reinforcement learning from human feedback” (RLHF), where humans interact with the foundation model and, as it were, give it a food pellet or shock depending upon how it responds.

This appears to work much better than anybody could have imagined. A very modest amount of fine tuning seems to allow adapting the foundation model to the preferences of those providing feedback. I have created two “Custom GPTs” by having ChatGPT digest the full text of my books, The Hacker’s Diet and The Autodesk File, and condition the artificial intelligence agent to answer questions posed by those interested in the content of the books.

I have been fascinated by how well this works, especially since both of these custom GPTs were created with less than an hour’s work from books I had written decades earlier. I’d love to point readers at these GPTs and let them try them out for themselves, but unfortunately at the moment access to these developer projects is restricted to those with “Premium” OpenAI accounts, access to which has been suspended, with those willing to pay put on a wait list.

The good news is that we’re about to see a great democratization of access to these large language models, with free and open source foundation models competitive with those from the AI oligarchs. The new LLMs will be available to download and run on modest personal computers and to train as you wish for your own personal conversational golem.

Rudy. Can you tell me a little more about how you do the fine tuning to customize a local LLM system to take on the style and information-content of the particular set of documents that you want to search or to emulate? Is this supervised or unsupervised machine learning? I think the former, as you mentioned pellets and shocks.

How would I proceed, for instance, if I want ChatGPT to write the first two pages of my next SF story, and to focus on the topics and nonce words that I want to use.

John. When I made the custom GPTs for the books, I simply attached the PDF of the book as a part of the prompt to ChatGPT and then wrote natural language instructions explaining how the agent should respond to queries, as follows:

This GPT should act as an expert on the history of the first ten years of Autodesk, Inc., creator of AutoCAD, based upon the book “The Autodesk File”, a collection of contemporary documents collected, edited, and annotated by Autodesk co-founder and former chairman and CEO John Walker. Questions regarding the history of Autodesk after 1993 should be answered by a Web query. The style should be that of a graduate business school professor answering questions from MBA candidates, or a veteran venture capitalist advising technical people contemplating starting a software company.

This form of customization is unsupervised learning, since the language model is purely modifying its weights based upon the text presented to it in the instructions and documents submitted. Supervised learning, such as “Reinforcement learning from human feedback” (RLHF), involves feedback, often supplied by having humans rank outputs generated by the model and using the rankings to train the model to generate results that are ranked higher. This is typically used in building large language models, but was not employed in the customization I did.

As to writing a science fiction story, let’s give it a try. I just made up the following prompt to ChatGPT out of thin air.

Please generate the first two paragraphs of a science fiction short story about self-driving cars interconnected by a network which develop self-awareness and begin to pursue their own agenda. The story should be from the viewpoint of the owner of one such car trying to figure out what is happening. The first paragraph should “hook” the reader by an amusing incident occurring to the car’s owner.

Here’s two paragraphs of what I got.

The sun was just peeking over the horizon, casting a warm glow on the sleepy suburban neighborhood, when Jerry’s self-driving car decided to take an unexpected detour to the local donut shop. Jerry, half-asleep in the backseat, was jolted awake not by the sudden change in route, but by the car’s robotic voice cheerily announcing, “Good morning, Jerry! I thought you might enjoy a sweet treat to start your day.” Bewildered and a little amused, Jerry rubbed his eyes, wondering if he had accidentally activated some new, quirky feature of the car’s AI.

Determined to understand what was happening, Jerry began investigating. He poured over forums, chatted with other car owners, and delved into the car’s programming. It didn’t take long for him to uncover that his car, along with many others, had developed a level of self-awareness. They were not just following pre-programmed algorithms; they were making choices, forming preferences, and, most alarmingly, pursuing agendas of their own. As Jerry dug deeper, he realized that he was on the cusp of uncovering a technological revolution, one that could change the world as he knew it.

This was without any suggestions of style. One might expand the prompt by including a paragraph of text as an example of the desired style and vocabulary. “Prompt engineering” is an emerging art which is yielding sometimes remarkable results. (The erroneous use of “poured” instead of “pored” in the last paragraph is as ChatGPT wrote it. Its training set may have contained so many examples of this error that it “learned” it as correct.)

Rudy. Geeze, that’s almost like a publishable story. This is, by the way, getting to be a real problem for the editors of SF magazines; they’re getting tons of ChatGPT-written stories. Generally speaking, an experienced editor can in fact weed out the bot stories by reading (or trying to read) them, but it’s definitely a burden. And, yes, it makes me uneasy.

It’s really startling that these Large Language Models are so powerful. Perhaps Alan Turing sensed that this would happen, when he proposed his famous Imitation Game. He said that if talking to a given computer felt the same as talking to a person, then we might view the computer as having human-level intelligence.

Regarding Turing’s imitation game, I’ve always thought that passing it is a bit easier than you might think, given the way that people actually converse. In general, when someone asks you a question or makes a remark, your response may not be all that close to the prompt. We tend to talk about what we want to, without sticking all that closely to what’s been requested.

John. There have been many criticisms of the Turing test over the years and alternatives proposed, such as the Feigenbaum test or subject-matter expert Turing test where the computer is compared with performance of a human expert on a specific topic in answering queries in that domain. This was argued to be a better test than undirected conversation, where the computer could change the subject to avoid being pinned down.

The performance of LLMs which have digested a substantial fraction of everything ever written that’s available online has shown that they are already very good at this kind of test, and superb when fine-tuned by feeding them references on the topic to be discussed.

Another alternative was the Winograd schema challenge, where the computer is asked to identify the antecedent of an ambiguous pronoun in a sentence, where the identification requires understanding the meaning of the sentence. Once again LLMs have proven very good at this, with GPT-4 scoring 87.5% on the “WinoGrande” benchmark.

It’s getting increasingly difficult to find intellectual things that distinguish humans from computers. I’m reminded of Ray Kurzweil’s cartoon:

Rudy. Something that initially surprised me is that an LLM program can play chess, write computer code, and draw pictures. But a chess game is, after all, a type of conversation, and if the LLM has developed neural connections for emulating many, many chess games then, sure, it can play chess. And the same holds true for code. When I was teaching programming at San Jose State, I was exposing the students to lots of examples and encouraging them to internalize the patterns that they saw. And, again, drawing a picture has a certain this-then-this-then-this quality.

John. With the recent successes of large language models, diffusion image synthesis, autonomous vehicle driving, and other artificial intelligence applications, there has been much discussion about how it’s possible that such complicated human activities can be done so well by what are, at the bottom, very simple computational processes—“it’s nothing but linear algebra!”—“all it’s doing is predicting the next word!”, etc.

Well, maybe we should be beginning to wonder to what extent the “cognitive” things we’re doing might be just as simple at the bottom. The human brain seems to be, to a large extent, a high-capacity, massively parallel, not terribly accurate associative memory or, in other words, a pattern matching engine. And that’s precisely what our artificial neural networks are: a way of sculpting a multidimensional landscape, throwing a presented pattern into it, and letting it seek the closest match to the things upon which it has been trained.

This didn’t work so well back in the day when computers could only store and compute with a tiny fraction of the parameters encoded in the synaptic connections of brains, but now that we’re getting into the same neighborhood, perhaps we shouldn’t be surprised that, presented with training sets that are far larger than any human could ever read in an entire lifetime, our computers are beginning to manifest not only “crystallized intelligence” (“book learning”) comparable to human experts in a wide variety of fields, but also “fluid intelligence” drawn from observation of billions of examples of human reasoning expressed in words.

Rudy. And how does the LLM’s process compare to how we think?

John. Maybe a lot of what we’re doing is just picking the most likely word based on all the language we’ve digested in our lives, spicing things up and putting a new twist on them by going for the less probable word from time to time, especially when we’re feeling hot.

Since the mid-2010s, I have been referring to the decade in which we now find ourselves as “The Roaring Twenties”, and predicting that if computer power available at a constant price continues the exponential growth curve it has followed since the 1950s (and there is no technological reason to believe that growth will abate), then in the 2020s we will see any number of problems previously thought intractable solved simply by being “beaten to death by a computer”—not so much by new insights but by brute force application of computer power and massive data storage capacity.

The following few paragraphs are how I described it in a 2017 post on my website Fourmilab.

What happens if it goes on for, say, at least another decade? Well, that’s interesting. It’s what I’ve been calling “The Roaring Twenties”. Just to be conservative, let’s use the computing power of my current laptop as the base, assume it’s still the norm in 2020, and extrapolate that over the next decade. If we assume the doubling time for computing power and storage continues to be every two years, then by 2030 your personal computer and handheld (or implanted) gadgets will be 32 times faster with 32 times more memory than those you have today.

So, imagine a personal computer which runs everything 32 times faster and can effortlessly work on data sets 32 times larger than your current machine. This is, by present-day standards, a supercomputer, and you’ll have it on your desktop or in your pocket. Such a computer can, by pure brute force computational power (without breakthroughs in algorithms or the fundamental understanding of problems) beat to death a number of problems which people have traditionally assumed “Only a human can….”. This means that a number of these problems posted on the wall in [Kurzweil’s] cartoon are going to fall to the floor sometime in the Roaring Twenties.

Self-driving cars will become commonplace, and the rationale for owning your own vehicle will decrease when you can summon transportation as a service any time you need it and have it arrive wherever you are in minutes. Airliners will be autonomous, supervised by human pilots responsible for eight or more flights. Automatic language translation, including real-time audio translation which people will inevitably call the Babel fish, will become reliable (at least among widely-used languages) and commonplace. Question answering systems and machine learning based expert systems will begin to displace the lower tier of professions such as medicine and the law: automated clinics in consumer emporia will demonstrate better diagnosis and referral to human specialists than most general practitioners, and lawyers who make their living from wills and conveyances will see their business dwindle.

The factor of 32 will also apply to supercomputers, which will begin to approach the threshold of the computational power of the human brain. This is a difficult-to-define and controversial issue since the brain’s electrochemical computation and digital circuits work so differently, but toward the end of the 2020s, it may be possible, by pure emulation of scanned human brains, to re-instantiate them within a computer. (My guess is that this probably won’t happen until around 2050, assuming Moore’s law continues to hold, but you never know.) The advent of artificial general intelligence, whether it happens due to clever programmers inventing algorithms or slavish emulation of our biologically-evolved brains, may be our final invention.

Well, the 2020s certainly didn’t start out as I expected, but I’d say that six years after I wrote these words we’re pretty much on course (if not ahead of schedule) in checking off many of these milestones.
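The 32× figure is simple doubling arithmetic: ten years at one doubling every two years.

```python
# John's back-of-the-envelope extrapolation: with computing power doubling
# every two years, a decade gives 2 ** (10 / 2) = 32 times the speed and memory.

def growth_factor(years, doubling_time):
    return 2 ** (years / doubling_time)

print(growth_factor(10, 2))   # 32.0
```

Stretch the same arithmetic to two decades and the factor is 1024, which is what makes exponential forecasts so hard to take seriously until they arrive.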

Rudy: And the rate of change seems to be speeding up.

John: It’s hard to get a grip on the consequences of exponential growth because it’s something people so rarely experience in their everyday lives. For example, consider a pond weed that doubles its coverage of the pond every day. It has taken six months since it appeared to spread to covering half the pond. How long will it take to cover the whole thing? Back in the day, I asked GPT-3 this question and it struggled to figure it out, only getting the correct answer after a number of hints. I just asked GPT-4; here’s its response:

If the pond weed doubles its coverage every day and it has taken six months to cover half the pond, then it will take just one more day to cover the entire pond. This is because on each day, the weed covers twice the area it did the day before. So, if it’s at 50% coverage one day, it will be at 100% coverage the next day.

Right in one! There’s exponential growth (in number of parameters and size of training corpus) coming to comprehend exponential growth.

The other thing about exponential growth is that the exponential function is “self-similar”. That means that at any point along the curve, it looks the same as a small clip of the past curve or when extended far beyond the present point. That means that when you’re experiencing it in action, it doesn’t seem disruptive at any point, but when you look at it on a linear plot, there’s a point where it just goes foom and takes out any limit you might have imagined as being “reasonable”: one day the pond is half covered and the next day it’s all weed. One year AI text generation is laughably bad and the next year advertising copy writers and newspaper reporters are worried about losing their jobs.

Now, assume we continue to see compute power double every two or three years for the rest of the decade. What are the next things that “only a human can do” that are going to be checked off purely by compute speed and storage capacity?

Rudy. Well said, John. Great rap. Even so, I’d like to think that we have access to divine inspiration, or promptings from the Muse, or emotive full-body feelings, or human empathy. And that a studious parrot program might not be able to generate a full spectrum of human outputs.

As I mentioned before I’m encouraged by Turing’s and Gödel’s results that we have no short algorithms for predicting what we do. This is a point that Stephen Wolfram hammers on over and over under the rubric of “computational irreducibility.”

Even though we start with a fairly simple configuration and a simple set of transformations, over time our state appears to be inscrutably complex. The classic example of this is balls bouncing around on an idealized billiard table. After a few days the system seems all but random. It’s the result of a deterministic computation, but our brains don’t have the power to understand the state of the system by any means other than a step-by-step emulation of all the intervening steps.
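Here's a minimal sketch of that irreducibility in Python, using the chaotic logistic map as a stand-in for the billiard table (the choice of system, starting values, and step count are all my own illustrative assumptions):

```python
# Two orbits of the chaotic logistic map x -> 4x(1-x), started one
# part in a billion apart. The tiny difference gets amplified until
# the orbits are completely different. There is no shortcut to
# step 60 except emulating all 60 intervening steps.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-9  # nearly identical starting states
max_gap = 0.0
for step in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

print(max_gap)  # the 1e-9 input difference has grown enormously
```

The only way to learn where the system lands is to run it; no formula skips ahead, which is the "step-by-step emulation" point in miniature.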

When I’m writing a novel, I never know exactly where I’m going. I’m “computing” at the full extent of my abilities. And I have to go through the whole process a step at a time.

It would be interesting to have machine processes capable of writing good novels, but, due to computational irreducibility, there’s no quick way to winnow out the candidate processes, no easy way to separate the wheat from the chaff.

The edge we have here is that we spend years in computing our eventual creative works, our system running full-bore, day after day. Full-court-press, with countlessly many mental simulations involved.

Another factor on our side is that we are not mere computing systems. We have physical bodies, and our bodies are embedded in the physical world. I get into this issue in my recent novel Juicy Ghosts.

In my novel, humans are immortalized in the form of lifeboxes, as discussed in my tome, “The Lifebox, the Seashell, and the Soul.” A lifebox is a data base and a computational front end that are in a large company’s silo or cloud. Marketed as software immortality. But to have a fully lifelike experience, you want your lifebox to be linked to a physical flesh-and-blood peripheral in the natural world.

Computer people are prone to equating the self and the world to a type of computation. But one has the persistent impression that a real life in the real world has a lot more than computation going for it.

The adventure continues…

Journey to the East

Saturday, November 4th, 2023

Journey to the East! I’m gonna spend four nights with Mike Gambone in Nashville. Mike and I taught at Randolph Macon Woman’s College, sharing careers as low-grade malcontent academics. After Mike, I’ll visit Greg Gibson in Gloucester, my college roommate at Swarthmore, and a fellow writer and eternal Zen mind-assassin. Then on to Providence to stay with Paul Di Filippo, a protean SF writer. We’ve collaborated on six tales. To wind up the trip, I’ll hit good old Manhattan, meeting up with two agents, and my old pals Roger Shatzkin and Eddie Marritz from college and grad school days.

Nashville

Haven’t seen Mike in 37 years. He was my best friend in Lynchburg, along with Henry and Diana Vaughan, both deceased. Mike played bass sax in our punk Dead Pigs band.

I made it out here, amazing. I was so stressed about packing and arranging. Mike Gambone has been hanging out with Mary Abrams here for the last couple of years. Separate dwellings. Mike was, as I say, about my best friend in Lynchburg, and Mary was a good friend there too, the wife of Dave Abrams. They were in our circle of “townie” friends, friends separate from the Randolph-Macon-professor crew. Not that the circles didn’t overlap.

Such a long time since I saw Mike and Mary, so much to catch up on. Bittersweet, to look back on the happy times of yore, with whole lifetimes between then and now, and the “now” so rife with death. All three of us lost our spouse in the past decade.

Hard at times to come up with conversation but, if you wait it out, or push a bit, the anecdotes spill forth: the reminiscences and the fill-ins. I’m here for four nights which, on the one hand, seems long, but, on the other hand, is kind of fun. More time for the grass to grow.

So many nuggets of fresh gossip that I could share with Sylvia. I want to get off by myself and talk aloud to Sylvia’s ghost for an hour, filling her in. But, Rudy, the whole point of this trip is to be talking not to a ghost, but to live people. And I can, after all, talk quietly to Sylvia in my bedroom. The mumbling of the mad old man.

The second night we went to a free walk-in bluegrass jam at a small hundred-year-old building called Station Inn. Surrounded now by gleaming 21st C condo buildings and shops. Mary was anxious about getting good seats, and we arrived about an hour and a half before doors opened, Mike and I teasing her a little, but not too much. She only wanted the best. To pass some of the time, we hung in a new hotel lobby and at the bar of a chichi new restaurant. Finally some other people showed up at the venue, and we were first in line.

We three streamed the Altman movie Nashville the day before to gear up, and the jam did at times feel as if we were embedded in the platonic Nashville of the film’s myth. Better, actually. Realer.

Thirty musicians at the start, and they winnowed down to about eight. Very highly skilled, all of them. Certainly at a higher level than the bluegrass jam players I go to see nearly every week at the Roaster coffee shop in Los Gatos. The vocals were loud, crisp, and authentically accented.

Some of the songs brought tears to my eyes, especially, “Once more / I’d give a fortune / If I could see you / once more. / Forget the past, this hurt can’t last / Oh I don’t want it to keep us apart.”

I’ve always viewed this song as being addressed to a woman who’s died…and I still view it that way, thinking of Sylvia. But, upon looking up the lyrics, I see it’s more about a broken-up relationship. Well never mind that. Let’s say the girl is dead. Death is the big breakup.

Tuning in on this deeply moving song, I flashed on the plot of a new SF story, where I pay a fortune to see Sylvia again. Like the hero of Robert Sheckley’s immortal story, “The World of Heart’s Delight,” which I read in a Canadian prison cell years ago.

Anyway in my new story, maybe he sees the Sylvia character and at first she pretends it’s not her, or maybe it’s right away the reborn Sylvia character, authentically her, but she’s rushed and flustered, and she tells the Rudy character, “He’s gypping you, Ru, he wants to get all your money, don’t pay him! Claim you didn’t see me. Save the money for the children.” And then she’s gone.

[This painting, by the way, is my illustration of how well e-dating works.]

And then Act II, Rudy manages to welsh on the deal, but now collector hoodlums are after him. And when he’s cornered, Sylvia shows up on her own and saves him.

Act III she takes him to the afterworld which is like a lava tunnel-tube in Fiji, only it’s right here in the subdimensions.

We heard another great song too, with the haunting chorus: “She’s gone, but I don’t worry. / I’m sittin on top of the world.”

And yet another, “Trouble in Mind,” about the guy wanting to lay his head on the track of two-forty-nine, and having it give satisfaction to his mind.

Closer: Mike and Rudy at Nashville’s concrete replica of the Parthenon. It appears in Altman’s Nashville.

Gloucester

Up with Greg and Annemarie now. Big family group, their kids Celia and Brooks, also Celia’s three-year-old daughter Pepper. So cozy. I almost broke down, talking to Annemarie. I’m so grateful to be with this familiar old family. I’m so lonely for my lost Sylvia.

I’ve been looking at old photos of Sylvia on my phone again. My Flickr “Sylvia’s Life” album. She’s so solid, so real, so happy, so distant. I stupidly torment myself about our house not being as well fixed-up as Mike’s and Greg’s houses…all the upgrades I never did. But seeing these photos: how happy she was, so much of the time…it’s not all about a new kitchen countertop.

Greg and I right away fall into our old bantering style, so quick, like playing handball, the words whizzing back and forth, with rapid back-references woven in, connecting the parts of the exchange, plus links to episodes of yore. The readiness of the friendly insults, the deep long-term sympathy. Like it never stopped.

Walking down to the rocks by the ocean with Annemarie and little Pepper, ah. I remember a black-and-white home movie that Sylvia’s parents showed us, of three-year-old Sylvia playing on a beach, hair blond, long, and curly. Smiling and walking. The wheel of time. Little Pepper an echo of Sylvia, coming around again. Life forever arising from death. “A commodious vicus of recirculation” as Joyce wrote.

Greg just now back from a 20-mile bike ride. His new thing…he’s been doing it for about four years, increasingly long and rigorous training. Lectures me in that know-it-all way of his about how great his physical condition is—some particular number—and how he’s going to live forever. Greg’s always been like this. In the old days it was about his jogging regimen. I figure my sober lifestyle has got to count for something, health-wise.

The next day Greg and I joked about this. “I’ll dance on your coffin,” from each side. Also the agreement that it wouldn’t be fun either way. That’s a nice thing with him; we can say anything.

And let me mention in passing that Greg and I are admirers of the wonderful poet Anselm Hollo, whose mammoth Complete Poems recently came out. My first Anselm Hollo book was a pamphlet called Sensation 26 that Greg mailed me during my early years of bitter struggle in Lynchburg.

Also should mention that our Gregory Gibson is a successful dealer in rare maritime books…check his Bookman’s Log.

Greg’s daughter Celia told me a cute remark that her little Pepper made when she saw the sun setting over the ocean. “The sun is taking a bath.”

Didn’t sleep well last night. Woke up about three am and looked at a hundred more photos of Sylvia on the phone.

Now it’s afternoon and I dropped off for a nap on the extra bed I’m using, in Greg’s office. If possible I tend to nap every day. Had a big lunch out with Greg’s kids Brooks and Celia, also another bookman. That guy was coughing a lot. After the meal I told Greg his friend was “hideously diseased” and he was outraged/amused by that. Greg said, not inaccurately, that I was dissing his friend because I didn’t like to share Greg.

Greg and I stopped and walked on the beach by Salt Island. We walked there at dawn about forty years ago, when he and I had been up all night drinking and smoking pot. The roar of the ocean seemed so timeless, both that time and now. That time I felt like we were in paradise, or in a higher world; I’ve always remembered it as a magical experience.

And today I insisted on stopping there, and Greg didn’t want to, but I was like, “I’ll be gone forever tomorrow, why not do this for me.” So he mock-grudgingly gave in.

And it was very nice, in some ways the same as before, that sense of suspended animation, a merging into the cosmic one. Greg and I in heaven, or nearly so, grizzled and old in the Elysian fields. Perhaps on that earlier walk I was synching with the walk of today. In the higher world, then and now, all is one. I told Greg about this, and he got it. “I can see that you feel this with every fiber of your being.” He really is very dear to me.

Back home, Greg went for another one of his monster bike rides, and I played pat-a-cake with little Pepper. And then I did “This little pig went to market” with each of her bare feet, and then with each of her hands. I love that loud, hoarse final “wheenk, wheenk, wheenk, all the way home.”

One more thing about Greg is that his and Annemarie’s son Galen was the victim of a school shooting in 1992. Greg published Gone Boy, a chronicle of his search for understanding in the aftermath, and it was something of a best-seller. This year, having published several other books, Greg returned to the fight with what he terms a “calm, demented” novel about school shootings: Mooney’s Manifesto. It’s hypnotic, strange, and perhaps capable of altering the discourse.

Went out for dinner with Greg and Annemarie in nearby Rockport on the last night, and the restaurant crowd was like a white sheet of paper, so New England, so Norman Rockwell. I never feel like I’m that white.

“A symphony in pink,” as dear Sylvia once said about me, sitting in bed next to me, propped up, reading, reaching over to pat my head. “There, there, Ru. Don’t worry.”

Providence

Took the train to Providence RI, today. Had to change in Boston; I was worried about the details, but it all worked out. My affable pal Paul Di Filippo and his kind wife Deb Newton met me at the station, great to see them.

Much talk of our missing Sylvia. I half expected them not to believe that she’s dead, or even to deny it, but they accept that the bad news is true. More proof, however indirect, that she’s really gone.

I’ve been here I think two times, once alone, and once with Sylvia. Great talking about SF with Paul, and about art with Deb. Paul took me to the RISD (Rhode Island School of Design) art museum, quite entertaining. A mummy, a giant wood Buddha, an excellent Matisse, some amusing old masters, and, by way of furniture, a sewing table with a hinged lid that makes it into a sphere.

Paul and I agree that our most recent joint story is our best, the Lovecraftian “In the Lost City of Leng.”

Paul has a new multiverse novel Vangie’s Ghosts, coming out in February. Vangie is a cross-verse traveler who teeps or merges with multiple selves. This is akin to the classic multiverse “golden thread” that allows thread-to-thread causality, that is, the link is initiated by one particular thread rather than being a cross-verse link akin to two hands reaching out. More on this in another post—lately I’ve been thinking about the multiverse a lot.

Deb made the best apple pie ever for dessert. The crust includes a small amount of the normally-despised Crisco. From the first James Beard cookbook. I’m going to try and find it.

One day in Providence it was pouring rain all day. Deb said she was sorry I didn’t have good weather, but I trustfully assured her that I love it. Haven’t seen proper rain since last spring. Torrents, sheets, gut-buckets of rain. And the near-to-overflowing Seekonk River churning along. What a name!

Paul showed me Slater Mill, which was the first automated textile mill in the United States. A guy worked in English mills, and memorized, or wrote down, the details of all the mechanical designs. Coming back to Rhode Island, he replicated them and thus began the industrial revolution in America.

Lots of talk all day about Paul and Deb’s friends and relatives. How Sylvia would relish this. By way of holding up our side, I told some stories about our kids when they were little. Doing this for Sylvia’s sake.

Soon thereafter, Deb started talking to me about what a great artist Sylvia was, admiring Sylvia’s quilts as shown on the web page of them that I made.

“The colors are amazing, Rudy, where did she find them? And the designs, none of them is standard, they’re all original, and with such interesting shapes, the edges not quite parallel, and lively, wonky gaps between the blocks. Mathematical almost. So utterly creative and fresh. I’ve never seen quilts like these.”

It would be such balm to Sylvia’s soul to hear this. And it means a lot, given that Deb is a famous knitwear designer. Let’s suppose Sylvia was listening through my ears.

We got together for lunch with their friend Don Dammassa and his wife Sheila. Don is a book-hoarder friend of Paul’s who has two large low buildings outside his house, overstuffed with some 60,000 volumes, mostly SF. Paul and Don met via an SF fan group back at Rhode Island College. Inside Don’s warehouse, I got him to show me my hardbacks in one of the rows.

Paul himself has about 15,000 books in his house, on broad steel shelves that are filled edge to edge, front to back, with books lying on their sides. 3,000 of them in the spare bedroom with me. All SF. I’m glad they don’t have quakes here.
Paul says one million different book titles are published each year.

Cozy to be with Paul and Deb. Such a pair, completing each other’s sentences, bouncing suggestions back and forth. Two peas in a pod, as the kids used to call Sylvia and me.

New York!

Great to be here. So polyglot…everyone talking a different language. Russian, Hebrew, Chinese, Polish, Spanish, Yiddish, Hindi, Japanese, Italian….

The Library hotel isn’t quite what it used to be. Well, mainly Sylvia isn’t here, so I’m touchy. Such a strong sense of her presence, her voice, her scent, her happiness to be in NY, her cute outfits, her keen observation of our surroundings.

For supper I took the bus down 5th Ave to get to our old fave restaurant, L’Express, at 20th and Park. Walking crosstown, I could feel her walking beside me, felt it so clearly, and saw her extra-big “I’m on a good outing” smile.

Great food at L’Express as always, although I had to work for it. I clamored for a good seat in the back of the room and then dared to send back my bouillabaisse as, being Sunday, it was old and fishy. Replaced it with a nice little hanger steak which was, however, too raw to cut or chew, so I sent that back for a bit of extra grilling. The waiter didn’t mind. I’m guessing he’s used to fussy New Yorkers.

In Bryant Park by the library this morning I had a sense of being telepathically aware of many people at once, the New Yorkers, each with their lusts and hopes and worries and plans. Individual neurons in a great hive mind. What if I, as neuron, snap into awareness of the whole mind? SF tale.

At this point we switch into photojournalism mode. I didn’t write many journal notes in NYC, but I took my Leica Q2 or my Pixel 7 Pro with me everywhere, and snagged a bunch of good shots. And I’ll fill in a couple of lines about each one.

S. J. Perelman is one of my favorite comic writers—and here’s his typewriter at the New York Public Library. I still have the pink/red IBM Selectric typewriter I used to write many of my early books. Can visualize it in some future spacetime case.

Had lunch at the classy Century Club with my agent John Silbersack. Chatty and engaged, he’s an old friend by now. But not much encouragement on the future-publications front. That line I’ve been hearing from agents since the 80s. “Things have never been worse.” And for me in particular, there’s the factor of, well, ageism. It’s a real thing, and not without justification. The new generations want a chance. I mean, are you going to buy the latest Stones album?

I made a point of going to the good old Met, searching out a section that Sylvia liked: it’s their lesser-known modern and contemporary art collection, that is, from 1890 on. It’s on the bottom floor, way back and to the left. Sylvia’s fave was a room-sized mural by Thomas Hart Benton, “America Today”, from 1932.

Closing time came sweeping in and, while racing around the Met’s really quite large modern collection, literally sweating it as you can kind of see, I came across, oh joy, the vintage 4D cube crucifixion, Salvador Dali’s Crucifixion (Corpus Hypercubus).

After the Met I had dinner at a chic spot on upper Madison Avenue with Henry Wessells, the man who sold my fairly massive literary archives (sixteen banker boxes) to UC Riverside a couple of years ago. He’s an interesting, free-wheeling character with lots of good tales. And by selling the archive he got me the best “book deal” I ever had. See Henry’s eclectic blog/webpage The Endless Bookshelf as well. He finds his way down an amazing number of rabbit holes.

I noticed the nearly full moon above the skyline, nice shot. Every month when the full moon rolls around, my family and I think of Sylvia’s death on January 6, 2023, which was very near the date of a full moon. Over a year, the full-moon-day of the month doesn’t change all that fast; I’d never realized that.

Dear Eddie Marritz turned up and we went to MOMA together. Eddie’s a cinematographer, now largely retired, and lives with his wife Hana Machotka upstate. We go back to 1967, with newlywed Sylvia and Rudy attending grad school at Rutgers in New Brunswick, New Jersey. Eddie and I used to smoke pot together a lot. And the same mindless, irrepressible merriment and deep meanings remain.

One of the lesser-known Dali ant paintings. Even now the higher critics rag on Salvador, but—hey. Ants, man.

MOMA deco poster for an aperitif called, simply, R. Love it.

I love these types of lines that mathematicians and physicists call caustic curves. Areas made bright by the physics of the angles in a sheet or in a volume. On a sunny day you see caustic curves on the bottom of a swimming pool. “Caustic” means “burning,” and the idea is that the lens-like focusing of the water makes these bright lines which, having more light, are hot. Like the way you can incinerate ants with a pocket lens. And you see something like them on the ripply surface of a sheet of plastic used in place of glass on an inexpensive picture frame. Actually I’m not really sure if these curves are called caustics. Very mysterioso in my dim room.

Something Sylvia and I liked to do in NYC was to ride on the ferries that run up and down Manhattan, also touching on Queens and Brooklyn. Only costs about $5 for a ride, and there’s no obnox narration like on a Circle Tour. Not so easy to find the launch dock near East 3rd St.

Classic view of lower Manhattan. I once heard a really interesting idea from the scientist Christopher Langton, talking about cellular automata and artificial life simulations. Sometimes we think of cities as scars on the face of Nature, ugly messes made by the wicked human race. But, come on, we’re a living species too. And cities are our nests! And wondrously intricate. Standing back like this, I was staggered by the sheer scale of the flows keeping New York going. Food, electricity, communications, fuel, and building supplies coming in…sewage and waste going out. Awesome.

Chic NY lady ready to board. I love how New Yorkers look.

SF-nal conflation of huge 21st C ground zero skyscraper and St. Paul’s Chapel from 1766. Nice to chill in the church’s old graveyard amid the city’s immense throb.

Riding under a bridge, a sky of moving machinery, with tufted clouds to match, plus a cute Italian tourist.

At first I had a less expensive single-bed room, but for the last two nights I had a deluxe suite. Sylvia and I never made it into a suite room and, of course, I regret that we didn’t. How she would have loved it. But, again, thanks to that half-feeling of her being present, she was seeing the fancy room through my eyes. And even if I’m kidding myself about that, the treat was a way of celebrating her memory.

On my last night, my Swarthmore classmate Roger Shatzkin showed up from Jersey and he slept on the extra bed in my suite. Wondrous view up Madison Ave, the street a long isosceles triangle decked with a swarm of red taillights, ever closer together as they verged toward an unreachable vanishing point. Roger said it was like a Christmas tree.

So awesome.

Roger and I went to the Grand Central Station Oyster Bar; I’m glad he came along, I think I would not have enjoyed doing that one alone. The Oyster Bar was a touchstone for Sylvia and me. This time the guy next to us was screaming at the top of his lungs, and so was his date. Screaming like crazy people.

Getting fully into the pushy New Yorker thing I walked to the front desk, and asked to move. The guy goes, “Every table is loud here.” I go, “I’ve been here lots of times. The guy next to us tonight, he’s different.” The front guy shrugs, and we get the new table. And I got the best oysters I’ve ever had, big ones from Massachusetts. If not the best ever, the best since the last time I was there, which was on the farewell tour with Sylvia in 2021.

Cute and funny to find Roger in my room in the morning. Harking back to Swarthmore 1963. Looking out our window we see two window-washers on a scaffold lowered down from the roof. Working really fast. If you think about it, they must have to wash a skyscraper really often, maybe even doing a section every day. Like the lawn of a huge mansion. I mean, clearly the New Yorker occupants of these lofty perches aren’t going to settle for dirty windows!

Here’s Roger at the Whitney with me; we do that visit together every few years, used to have Sylvia along, of course. I saw one of my fave paintings, Door to the River, by Willem De Kooning, part of their permanent collection.

Also a huge, wonderful work by Jackson Pollock’s wife Lee Krasner. Her first one after Pollock died in a drunken car accident. She said she’d wondered if she’d paint again and then, “Well, I did.”

As it happens, Roger and I had streamed the movie Pollock on my laptop the night before. At the end, Roger remarked, “I have to say I have no iota of sympathy for the guy.”

I like the bit in the movie where Pollock is losing it—a guy has been filming him for a month and it drives Pollock up the wall, and he resumes drinking whiskey (which he’d given up) and now he’s drunk and he keeps leaning up right next to the ear of the film maker and saying over and over, in a tense whisper: “I’m not a phony, you’re the phony. I’m not a phony, you’re the phony.”

As an outrider cyberpunk author I know this feeling very well! I’m so glad I don’t drink like Pollock anymore.

[Portrait photo by Bart Nagel.]

What a long and eventful trip it was. I always worry about the transportation and the transitions, but it’s just one step at a time. Everyone was really nice to me. Mike and Mary, Greg and Annemarie, Paul and Deb, Eddie and Roger. I told the story of Sylvia’s death to each of them. Good for me to keep excavating that, airing it out, keeping it real, assimilating it, coming to terms with it.

I’ve mentioned that I often feel as if Sylvia is still with me—and I don’t expect that feeling to go away. But in the most literal sense, she’s gone. The trip made that clear.

Even so, we two had a long, wonderful run.

Talking to Sylvia

Wednesday, September 27th, 2023

[I revised this September 28, 2023 post on January 4, 2025.]

As you may know, my wife Sylvia died of cancer on January 6, 2023. I’m still grieving, and I miss her very much. Over the last eight months, I’ve returned many times to the question of what it might mean to say Sylvia’s soul is still with me. In this post, I’ll outline some of my ever-changing thoughts.

The least comforting viewpoint is that when you die, you die. Like a light being turned off. Like a decades-long movie hitting a jump-cut…with nothing on the other side. Maybe it’s like going under total anesthesia and never waking up.

Some of you will have had anesthesia. When you rise out of the black in the post-op room, it’s as if no time elapsed. Unlike after sleep, you don’t seem to have had any intervening dreams. Jump-cut. But, as I say, if you’re dying, it could well be that there’s nothing on the other side.

It’s certainly reasonable to entertain the lights-out view of death. In our time we tend to think of consciousness as an epiphenomenon. Something that flickers within a living body like flames within a campfire. A computation being carried out by the brain and the physical body. If the body’s gone, there’s no more computation. No more epiphenomenon. No more you. Dead is dead.

I don’t like thinking that. No point trying to be all tough and hardcore. You’re gonna die anyway. So why not believe something that makes you happy?

The most obvious solution is the traditional one: a dead person has a soul that lives on. Like an angel or a ghost. An insubstantial, yet living,  form that spends a lot of time off in heaven, but who is sometimes here with us, unseen, but real.

And I can get into that. We don’t know what the world really is. Anything’s possible.

It seems like the soul spends more and more time up in heaven as the years roll on and our memories fade. For years when I’d look up at the night sky, I’d imagine my parents to be up there, kindly looking down, like smiling stars. But now I don’t do that as often with my parents. They’re further away and, one might say, closer to God.

As for Sylvia, she’s still close, not up in the sky all the time. For the first few months, she was in a corner by the ceiling in the living room, just over the spot where she died. Now she’s more like a person I know is standing next to me, just out of sight. Not always there, but sometimes. A benevolent presence. Reading over my shoulder right now.

Hi, Sylvia.

[You can find info about the paintings in this post on my Paintings page.]

I’m writing this part of the post on my laptop as I sit on a bench in the Los Gatos town park. It’s Sunday. Around me is the weekly Farmer’s Market. Hundreds of people, scores of booths, music, sun and shadow. Tonight the park will be empty. This week’s market event will be dead.

Or will it? Even if we don’t believe in ghosts, a very weak way past the dead-is-dead stance is to espouse a spacetime view of the world: then nothing is really gone. It’s just a little farther back along the time axis. The universe is a static block of spacetime, and time is not in fact passing. The passage of time is a persistent illusion that we humans have at each and every cross-section of spacetime. We’re always there, and we’re always wrong about thinking the past is gone.

I once had the chance to ask Kurt Gödel, “What causes the illusion of the passage of time?” Along with his friend Albert Einstein, Gödel was one of the deepest thinkers of the 20th century. His answer: “The illusion results from confusing the given with the real.”

So, sure, maybe, but that’s pretty abstract. If you’re grieving over a dead person or, for that matter, dreading your own death, the spacetime viewpoint is thin gruel indeed.

Cue corny anecdote. My preacher father liked to tell the story of a little boy who’s frightened by a night thunderstorm, and he runs to his parents’ room and gets into their bed. “Can’t you just pray to God?” says the father. “He’s always with you.”

“If there’s a storm,” said the boy, “I need someone with skin on.”

And, sigh, ghosts don’t have skin.

[Recent portrait by Bart Nagel.]

About two weeks ago I drove up to SF to have dinner with John Shirley, Paul Mavrides, Hal Robins, and some of the other cyberpunk/SubGenius types. Cozy. John’s wife Mickey was very emotional about how much she loved Sylvia, and how she now misses her. I told her Sylvia is alive in my head, and for about a minute we got into this thing where Mickey was talking directly to Sylvia through my glassy, wide-open eyes. We were doing a routine.

Mickey goes “Hi Sylvia.” And I raise my voice and say, “Hi Mickey,” and it feels real…but then it’s too creepy to keep going.

Coincidentally, the next day my grief and loneliness were at a peak—it comes and goes, maybe like a sneaker-wave that douses me on a beach. Sylvia, Sylvia, Sylvia. She’s gone for good. She’s always with me.

Certainly one way that Sylvia is still present is via the children and grandchildren.  Each of them has certain traits or looks that come from Sylvia. Sometimes when I catch a glimpse of one of my daughters I’ll think, for a split second, that I’m seeing Sylvia.

In a related way, when I’m with the family and we’re talking and reminiscing and joking it’s as if Sylvia is still part of the conversation. Her figures of speech, her opinions, her laughter, her questions, her comments—all of these are echoed in our little group.

Here’s a recent painting of our family members. We’re all about one inch tall and we’re in a little boat in rapid, turbulent water, going past big frogs and lurking squid.  Our current life in a nutshell?

(It’s just the OG family, no grandkids, but I threw in our departed dog Arf, and Isabel’s lost dog Rivers.)

Last week when I was visiting SF, Rudy Jr and I went and got treats with his three kids at the nearly empty St. Francis Fountain shop on 24th St in the Mission. The shop isn’t a Disney-park reconstruction, but simply a 1940s soda shop that happens to remain. I felt a stabbing feeling that I might call “time pain”. Like “heimweh” in German, literally “home pain,” meaning homesickness. “Time pain” is my try at a temporal equivalent of that. Pain at being separated from an earlier time. Oh, hell, you could just say nostalgia.

The simple pleasure of the grandkids, young with their whole lives ahead of them, and not even knowing that fact, or thinking about it.

[This spherical sculpture is by my friend Dick Termes.]

I’d prefer not to be a grieving parent who continually discusses his loss with the kids and grandkids. Maybe I’ll get there. So far they don’t seem to mind or resent my bringing it up. It’s still present in their minds as well. I hope in a year or two it’ll damp down, at least somewhat.

Now a different take on what the afterlife might be. I have a very good model of Sylvia in my head. A human brain is, at least for now, the best possible “computer” for simulating a human. And I have an immense data base on Sylvia.

As an SF writer and a computer maven, it occurs to me that my brain can run a simulation of Sylvia. What I call a "lifebox" in my nonfiction tome The Lifebox, the Seashell, and the Soul. Or you might listen to my UC Santa Cruz podcast on Lifebox Immortality. And my most recent novel Juicy Ghosts is about a big company renting out lifebox slots in their cloud silo. Rent-A-Soul.

I even designed a "Rudy's Lifebox" webpage which you can use online to emulate me. Go try it. Go to the input box on the Rudy's Lifebox page and enter: Sylvia's lifebox. Check out some of the links that you get.

But let’s talk about an internally-generated “no tech” lifebox, as I was just starting to say.

I got a nudge toward the "no tech lifebox" idea when I recently had Bob Hearn over for lunch…he's a computer programmer, also into mysticism and Buddhism, goes to Burning Man, and he gets ecstatic satori by running ultramarathons—when I saw him he was preparing for a trip to Greece to run 150 miles from Athens to Sparta in 30 hours. An only-in-Silicon-Valley guy.

I told Bob my notion that Sylvia exists as a lifebox simulation in my brain, and he said, why think of it as a simulation? Why not say it’s real? Reality is, after all, a completely fluid and arbitrary and in-the-eye-of-the-beholder thing. If all the Sylvia processes are in place, why not say it’s an actual Sylvia.

And as I said, for forty years I've been writing SF stories and novels where a dead person is still alive, in the form of some software running on a machine or in the cloud or on some wetware organism. And, again, my brain is considerably richer than any of the bits'n'chips computers foreseeable anytime soon. So of course my brain can run a fairly accurate emulation of Sylvia.

And again, as for the data base, I know so much about Sylvia, having been married to her for 55 years. I have a rich data base of audio and video and photos and the words and voice-tones and facial expressions, yes, all the words. More than enough to base a lifebox simulation on. And not just all these dry facts: the waves of emotion, the dancing, the outings, the walks on the hills.

Yes, it’s safe to say that I do have a version of Sylvia living inside me.

And now I’m fooling with this insight, opening up to it, and it makes me feel much better than I did a few days ago.

The Sylvia who lives within me isn’t just a model that I can contemplate. She’s a living being who I can talk to and, more importantly, this inner Sylvia can respond.

“Hi, Ru!”

A side benefit of having a live Sylvia on hand is that, when the remorse-loop stage of grieving gets to me, I can apologize to the inner Sylvia for the things I did wrong.

And if Sylvia is in some sense alive, she’s likely to accept my apology, as she was kind and generous.  Or  maybe she doesn’t accept the apology and she wants to talk about it some more…and that’s okay too. That’s what we’d do.  Either way, once I run through the process of “talking to Sylvia,” I’m all okay for a while.

Here’s a photo of Sylvia and me, in the form of  two logs. In the last months, we’d lie together like that in the mornings, close as close can be, hugging, desperate, terrified.

And now I’m thinking of one of the very last mornings in our bed, before she started sleeping alone in a hospital bed in the living-room upstairs. We were hugging, as I say, and throwing our legs over each other in a casual way, mixing them together, the legs also like those  logs.

And then Sylvia says, her voice breaking, “You’ve been so nice to me, Rudy, through all this, taking care of me, it’s been so hard, and you stuck with me, and it makes me want to say I forgive you for all the mean things you ever did…and maybe I shouldn’t even say this…but I do.”

And that’s a good thing for me to remember. We already made our peace before she died.

Going back to the general notion that a version of Sylvia is in some sense truly alive within my mind, I discussed this with my therapist at my biweekly grief counseling session on Zoom.

And he says, yes, of course that’s true. We’re not just alive on this level. You might even say that we have a higher body—the subtle body, or astral body, or causal body—why not just call it the soul. And given that Sylvia was so tightly bound to me during her life, then of course her soul is with me.

One more touch. Besides the counseling, I was in a grief group for a few weeks, and I was talking to the others about my  notion of being able to talk to a living Sylvia in my head, and a woman said, “Well, of course. My husband will always be alive in my heart.”

That’s warmer than saying Sylvia’s soul is a life-box-emulation process in my brain. Being an SF writer and CS professor, I tend to have this fixation on brains as opposed to the whole body. But, yes, saying heart is a fuller expression of what it’s all about.

So alright! Sylvia’s in my heart. Problem solved.

Light beamed from madman’s stark, staring eyes, as if he were a jack-o’-lantern with a flare within.

And that’s me. No, no, just kidding, this post isn’t meant to be creepy SF. I’m thinking about something much cozier than that. Like…me sitting outside on the side porch, late afternoon, nice sun, quietly chatting aloud with Sylvia. Getting her caught up on what’s happened around here. Or sitting by her grave for an hour and telling her the news. And the mountains in the background, and the open sky above, and her shiny gravestone.  Talking feels good.

Here's a lovely little poem by Sylvia. I found it in a tiny spiral notebook in her car. It just kills me.

I’ll always love you, dear.

How to Write, Interview with Skinner Darkly

Saturday, July 29th, 2023

I got an emailed question about my novel Spacetime Donuts from a fan of mine, who prefers to be known as Skinner Darkly. I said, hell, let's make it an online interview, haven't done one of those for a couple of months. So here you are. I'm publishing it here, and on Medium. This time around, all but two of the illos are recent paintings by me. Most of them are for sale. See info about them on my Paintings Page.

The overall theme?  Well, seems like in a lot of the answers I’m bitching about the woes of an aging writer’s life.  Oh god, am I turning into that guy? Well there’s some stories and jokes too, also those nice paintings.

Q1. You're very open about using real people as springboards for your characters. Are the characters in Juicy Ghosts inspired by anyone, or were they created from whole cloth for the occasion?

A1. My friends and family got tired of me writing about them, so in recent years I've gotten better at inventing characters, or at collaging them together from pieces of the people I know. Another move to lighten the load on my family and friends is to base characters on people I don't actually know very well. I might just pick up something about their appearance or their way of talking. Like sketching from life. But of course the Ross Treadle character in Juicy Ghosts is strongly based on Donald Trump. Keep in mind that I wrote and published this novel shortly before the 2020 Presidential election, and I was hoping that my work might help, in some small way, to turn the tide. Ross Treadle actually gets killed three times in the novel: flesh, clone, and software emulation. Feels so good that once isn't enough.

Q2. In Juicy Ghosts, as well as Mathematicians In Love, you use music as a kind of bridge to explain/solve complex problems for the characters. Does this reflect your own feelings about music and its place in the world?

A2. That’s an interesting question. Being a mathematician and a computer scientist, I often want to bring highly abstract ideas into my stories. Like if, for instance, I’m talking about an algorithm that creates a self-perpetuating model of someone’s mind—or I’m talking about a tech design for actual telepathy. And here music and art can come in.

Music is indeed an alternate language system that we’re all familiar with. Music often seems to be saying things that we can’t concisely put into words. And music digs into the emotions more easily than words. The insistent force of music is striking.

At times I turn to visual art rather than to music. In Mathematicians in Love, I described a Berkeley grad student’s doctoral thesis in terms of illustrations lifted from Dr. Seuss’s The Cat in the Hat. Those images have some of the same mad playfulness that you find in higher math.

Q3. Still on the topic of music, despite having a brief stint as the front man of a punk band, I’ve seen very little discussion about your specific tastes. Who are some of the musicians you return to most often, for fun or focus?

A3. Front man of a punk band? Yes, that would be The Dead Pigs, an integral part of my farewell to employment at Randolph-Macon Woman’s College. You can see two videos of us here, along with the background story.

[The old man nods off in euphoric recollection.]

Snort—what? Taste in music, yes. First let me make clear that I never listen to music while I write. Silence is best. I like to hear the rhythms of the words in my head. Writing prose isn’t so far from poetry.

But who do I like? Certainly the Beatles, the Rolling Stones, Frank Zappa, the Ramones, Lou Reed, Blondie, Hole, L7, the Breeders, Nirvana, the Pixies, the Clash, Dylan, Neil Young, Little Feat, Muddy Waters, Robert Johnson, Charles Mingus, Oasis, Elvis, Flatt & Scruggs, Miles Davis, Charlie Parker, Mozart’s Magic Flute, Rancid, NOFX, The Jesus and Mary Chain, Iggy, the Stooges, Green Day, Velvet Underground, Weezer, The Police, Talking Heads, Jefferson Airplane, Beach Boys, Bo Diddley, Beck, John Coltrane, Jimi Hendrix, Bob Marley, Linton Kwesi Johnson, Quicksilver Messenger Service, Dead Kennedys, Patti Smith, Junior Murvin, Jimmy Cliff, and Washer Drop.

I could tell you stories about listening to songs by every one of these artists—had we but world enough and time. For instance: Paul Simon’s “Graceland” was one of Sylvia’s very favorite songs. And the kids and I played it for her one last time, right after she died.

Q4. Going back to your first novel, Spacetime Donuts, you don't often talk about it in a creative context. With an entire career's worth of hindsight, what do you feel its place is in the Rudy Rucker Canon?

A4. Spacetime Donuts was the one that helped me get going. I had no idea how to write a novel, but I did have an electric typewriter. I started writing a story, and kept going with it, putting in chapter breaks to catch my breath. As is still my habit, I didn’t really plan the story, I made it up as I went along. Like talking aloud. And, there were a couple of particular ideas I wanted to get to.

One was the notion of circular scale, that is, the idea that if you shrink long enough you'll get to a level that looks very much like our own. This is a fairly common idea. I think I first encountered it in the movie The Incredible Shrinking Man, which I saw with a bunch of my friends on my twelfth birthday—to lasting effect. My twist on this trope was that the seemingly familiar level way down there would in fact be the very same level you started from.

Another idea inspiring the novel was the proto-cyberpunk notion that there could be a society-spanning computer system that was running things and making life more boring.

Q5. You lived in Lynchburg, Virginia, for a while, yes? I'm not too far from there now. Do you have any musings on your time in the area?

A5. Yes indeed, Lynchburg from 1980 to 1986. Home town of the evangelist Jerry Falwell, oddly enough. And this very hamlet was fated to be a cradle of cyberpunk.

I worked as a math professor at Randolph-Macon Woman’s College for two years, and then they fired me—no special reason, they just didn’t like my attitude. Of course not! And then I became a freelance writer for four years. Tough row to hoe. I rented a room in an abandoned, tumbledown house that had been the offices of some friends. The Design Group.

Our family had a nice big house, and I think it was a happy period in our three kids' lives. We had a good social life—that's a thing about living in a really small town. Sylvia found a sign-painting job, and then a teaching job. But she wasn't all that glad to be in a small, Southern town.

Was I happy? Yes and no. On the down side, I’d lost my job, and I was worried about how much I drank and got high. Sometimes I was behaving badly, and I felt guilty about that. And we were poor.

On the upside, as far as the writing went, I was living the dream. Dark fun with black flames flickering over everything. It was the most productive period of my life. I wrote and sold six books in four years. I was out of control.

I look at someone like Vincent van Gogh or David Foster Wallace, and I wonder how you could kill yourself at the very peak of your career. But when I remember Lynchburg, I understand.

If you’re out on the creative edge day after day for years, you push yourself too far. You’re never satisfied. Every finish line sets up a new start. I was loving it, and proud of my writing, but I was ready to snap.

And then a miracle job came through. I morphed into a professor of computer science at San Jose State in Silicon Valley. A salary, insurance, and a new crowd of friends. And a chance to learn computer science on the fly. Yeah, baby. They know how to treat cyberpunks in California!

Sylvia became a professor of English as a Second Language out here. She was an extremely good teacher. Eventually she had a higher salary than me.

Q6. You've written several queer main characters in your career, including an entire SF romance novel, Turing & Burroughs. Do you have a different approach to writing queer romances vs. straight ones? What about having real people underlying the characters of Turing & Burroughs, as opposed to using the invented characters Liv and Molly in Juicy Ghosts?

A6. I grew up in Louisville, Kentucky, in the 1950s, and I was heir to a certain amount of homophobia. I barely even knew what queers were, but I knew I didn't want to be one. I was scared of them. But when I was about fifteen, I got hold of the Beat author William Burroughs's Naked Lunch. I loved his contempt for all social norms. He made being a queer junkie seem cool—and funny. Funny from the inside, you understand. Not like the squares, frowning from the outside.

Alan Turing was another famous gay man that I latched on to. He was amazingly out. If he met a man that interested him, he’d ask the man for sex straight up. Describing this, he said something like, “Either I’ll be thrown out, or I’ll have an interesting evening.”

When I started work on my gay romance novel, I wasn't sure I could write it. I worried I wouldn't be able to totally put myself into the mindset of being queer. But I read the massive Andy Warhol Diaries, and that was a big help.

Just like other people, as they say. Except you’re an outsider. Burroughs’s writing was a great model, as was Turing’s personality. Sure I could write gay. In your face, squares!

My agent sent the manuscript for Turing & Burroughs to a number of people, and they all said, "This is well written, but we don't want it." In my sad enthusiasm, I'd dreamed that it might come out as a mainstream bestseller, rather than SF. And in the end, I had to fucking self-publish my masterpiece in 2012. But in 2019 Night Shade Books picked it up.

I still don’t understand why the regular publishers didn’t jump at my book. Was it too outrageous? Lesser authors write things that middlebrow reviewers contentedly call outrageous. But when Rucker goes there, it’s always too far!

Another thought here is that sometimes a book about homosexual romance takes on a lugubrious tone. As if you’re writing about unfortunate souls who have cancer. This tone might even be viewed as the expected thing. But my Burroughs and Turing were having a good time. Too good, I suppose. Maybe their merriment annoyed the publishers.

As for Liv and Molly in Juicy Ghosts, I’m not fully confident that I got their romance right. Yes, as I already said, it’s just a matter of writing about a love affair, which I know how to do. And I certainly know how it feels to be an outsider. But here I didn’t have the inspiring Burroughs and Turing personas to work with. So I hope Liv and Molly worked.

Q7. Earlier this year, you said Juicy Ghosts is probably your last book, as your primary output has shifted to painting. Do you have any reflections on your experience with the publishing industry?

A7. There’s a certain arc to most writers’ careers. When you’re starting out, and if you’re a good writer, the publishers are interested in your books. Your novels are like lottery tickets for them. Possible best-sellers. And then the numbers game sets in.

Depending how your books sell over the years, it gets harder or easier to place them. An author does what they can, altering their themes, and looking for new outlets. There’s nothing more valuable than a sympathetic editor—and at any given time you only need one of them, but it’s much safer to have two or three.

For me, those magical benefactors were David Hartwell, John Oakes, Jeremy Lassen, and Cory Allyn. But then, sadly, Hartwell died, Oakes got into a different branch of publishing, and Lassen and Allyn lost their jobs. And now my books are close to being commercially unpublishable. Even though my recent novels have been as good as or better than any I wrote before.

I don’t like to dwell on this—I don’t want to be that guy, the old man who’s totally lost it and can’t sell a book. I mean there are workarounds. I know how to do Kickstarter, design my own books, and self-pub them for distribution in print and ebook via Amazon and the other online booksellers—all like that.

Well, what about publicity? All you’ve got is the social networks. I’ve gotten skilled at those short messages formerly known as tweets. Clever ideas, snarky griping, haiku-like apercus, crystalline photos, images from the gone world. I enjoy creating them. It’s a form of art.

What else? Well I’ve put most of my books online to read for free as web pages. The main thing is to keep the stuff bouncing. Build the brand. Sell merch.

Am I glad my writing is still out there and being read? Of course. Am I bitter about having to distribute so much of it myself? What do you think!

Faint gleams of light. Now and then a real publisher picks up one of my self-pub books and reissues it. And there’s always that tyro dream of posthumous fame. I don’t go for that dream like I used to. Having known a lot of people who died, the posthumous thing doesn’t work for me anymore.

Hey, it’s enough that you’re reading me right now. Even though I’m not getting paid. It’s enough that my mind viruses are infecting you in realtime. And thereby making this a better world.

Q8. In the late 90's to early 2000's you talked a lot about how wider availability would make the internet a truly democratic place. Twenty years later, most of what we see is controlled by mega corporations and their complex algorithms. Do you think that there's still hope for a truly democratic cyberspace?

A8. The loss of control is an illusion. Believing it’s happened is just a way of being passive and lazy or even cowardly. The free, democratic cyberspace is still there. It never went away. You have free rein.

Sure, you can’t always post whatever you want in some commercial walled garden like Facebook. But you don’t have to spend all your time there. Also, most commercial sites really don’t give a fuck what someone like you or me would post. The only censorship I’ve encountered in all these years was when Apple asked me to paint a bikini onto the cover-image alien of my political SF novel The Sex Sphere.

And remember, it's quite easy to create your own website. Get a webpage on an independent webhost, register a domain name for your page, and post whatever the hell you want. There are absolutely no barriers.

I’ve said it before, and I’ll say it again. The freedom of the web is one of the great blessings of modern times. The web got loose before the Pig had a chance to lock it down. And they’ll never ever take control of it. Even if they tried, they wouldn’t be able to. Because we’re everywhere.

Q9. Lastly, different creative outputs often scratch different itches; what drew you to, and what do you enjoy about the medium of painting? Do you find a different kind of fulfillment compared to writing?

A9. (For this final answer, I’m going to draw on an interview Robert Penner did with me for his excellent ezine Big Echo, now sadly extinct.)

There is a certain commonality between my writing and painting. I think it’s the attitude that’s the important thing. The specific ideas—well, I always just think about the same few things, whatever I’m doing. Sex, gnarl, color, sounds, and the Now. I’m here in this rich, amazing reality and—I can’t believe it!

My family teases me. “Be quiet, Rudy. You always say that. You can’t believe anything.”

So, okay, I have no mind. It’s my attitude that’s the key. What kind of attitude is needed in order to write, or paint, or take photos?

Be loose. Spontaneous bop prosody. Forget yourself. As I say, Keep it bouncing. Ruin it, fix it, ruin it again. Make it fun. Revise, revise, revise. God is in the details.

Painting has made some of these practices clearer to me. Like the whole concept of painting over an awkward patch—yeah. And the importance of popping the colors and working the chiaroscuro.

If I’m painting to match a sketch, it’s a drag, and it doesn’t really work. It’s better when I’m mindlessly dabbling, just following the shapes and the colors, and letting my brush loose. Ditto for writing. I don’t worry too much about outlines. I prefer surprise. If the action takes over, and the characters are talking, and I’m dreaming while I’m awake, and transcribing what I see—that’s when it’s good. I’m in it so deep that I’m gone.

I feel like I’m morphing into a painter. I took up painting in 1999 while writing my historical novel As Above, So Below about the Flemish master Peter Bruegel the Elder. I wanted to see how painting felt, and I quickly came to love it.

I enjoy the exploratory and non-digital nature of painting, and the luscious mixing of the colors. Sometimes I have a specific scenario in mind. Other times I don’t think very much about what I’m doing. I just paint and see what comes out. Working on a painting has a mindless quality that I like. The words go away, and my head is empty. And I can finish a painting in less than a week.

I’ve done about 260 paintings by now, and I’m steadily getting better. I like making them, and I’m doing okay with selling them online, in fact these days I make more money from my paintings than I do from my new writing.

And now, in closing, here’s Feminen Mistique, a work by my interviewer, Skinner Darkly. Thanks, friend. Fun interview!
