Click covers for info. Copyright (C) Rudy Rucker 2021.


“The Roaring Twenties” Rudy & John Walker on LLM, ChatGPT, & AI

[ John Walker died in a tragic and and unexpected accident on February 2, 2024. We might take this essay/interview as being among his last words on the topics raised. I’m honored to have known him, and to have recorded his thoughts.]

Rudy. I met John Walker in 1987, shortly after I moved to Silicon Valley, at an outsider get-together called Hackers. John is known as one of the  founders of the behemoth company Autodesk. I had a job teaching computer science at San Jose State, although at this point I was totally faking it. Even so, some of the Hackers organizers knew my science-fictional Wares novels, and they invited me.

By way of finding my way into computerdom, I’d gotten hold of an accelerator board that plugged into a regular desktop IBM PC and made cellular automata simulations run fast. The programs are called CAs for short. Things like the classic game of Life, and many others, engineered by a couple of way-out guys on the East coast, Tommaso Toffoli and Norman Margolus, who authored a classic book, Cellular Automata Machines. I started writing new CA rules on my own. John was fascinated with the CAs, and after studying the accelerator board, he said he could duplicate its functions with a machine language software program that he would write.

And then he wrote it, in just a few days, and he hired me to work at Autodesk and help them publish a package called CA Lab. Later John converted our joint CA project into a wonderful online program CelLab. You can read about this period of my life in the Cellular Automata chapter of  my autobio, Nested Scrolls.

Working at Autodesk was a great period in my life. And then I got laid off and went back to being a CS professor, and I worked on an analog CA program called Capow.

One of the best things about working at Autodesk was that I spent a lot of time with Walker, who consistently comes up with unique, non-standard ideas. I even based a character on John in my novel The Hacker and the Ants. John wasn’t satisfied with his character’s fate, so, he wrote an Epilogue in which his character triumphs!

And now, in search of enlightenment, I thought it would be nice to have a kind of interview or dialog with John. We’ll see where it leads.

Rudy: I want to talk about the latest wave in AI, that is, ChatGPT, Large Language Models, and neural nets.

Even back in the 1980s we knew that writing a sophisticated and human-like AI program is in one sense beyond our abilities. This limitation has to do with Alan Turing’s proof that there’s no way to predict the behavior of arbitrary computations. No magic key. But, as Turing and Gödel freely grant, we can, in principle, create intelligent programs by letting them evolve within a space of candidate programs.

The catch was that this genetic programming and machine learning didn’t use to work very well. We weren’t able to simulate large enough populations of would-be AI programs, nor were we able to have the candidate programs be sufficiently complex.

My impression is that the recent dramatic breakthroughs arise not only because our machines have such great speed and memory, but also because we’re using the neural net model of a computation.

In the early days there was a paradigm of AI systems being like logical systems which prove results from sets of observations. One might argue that this isn’t really the way we think. Neural nets do seem to be better model, wherein information flows through networks of neurons, which have a very large number of parameters to tweak via unsupervised learning. Nets are more evolvable than logical systems.

What are your thoughts along these lines?

John: We’ve known about and have been using neural networks for a long time. Frank Rosenblatt invented the “perceptron”, a simple model of the operation of a neuron, in 1958, and Marvin Minsky and Seymour Papert published their research into machine learning using this model in 1969. But these were all “toy models”, and so they remained for decades. As Philip Anderson said in a key paper, “More Is Different”. And as Stalin apparently didn’t say, but should have, “Quantity has a quality all its own.”

There is a qualitative difference between fiddling around with a handful of simulated neurons and training a network with a number of interconnections on the order (hundreds of billions) as there are stars in the Milky Way galaxy. Somewhere between the handful and the galaxy, “something magic happens”, and the network develops the capacity to generate text which is plausibly comparable to that composed by humans. This was unexpected, and has deep implications we’re still thinking through.

Back in 1987 I implemented a neural network in BASIC on a Commodore 64. Due to memory limitations, it only had 42 neurons with a total of 1764 connections (or “parameters” or “weights” in current terminology). The neurons were arranged in a six by seven array, representing pixels in a dot matrix display of ASCII characters.

You could train it to recognize characters from the Commodore text display set, and it could remember around three different characters. Once trained, you could enter characters and add noise, and it would still usually identify the correct character which “looked like” the noisy input. If you trained it to recognize “A”, “T”. and “Z” and then you input an “I”, it would identify it as “T” because that’s closer than the other two characters on which it had been trained. If you tried to teach it more than three characters, it would “get confused” and start to make mistakes. This was because with such a limited number of neurons and connections (in only one layer of neurons), the “landscape” distinguishing the characters wasn’t large enough to distinguish more than three characters into distinct hills.

As computers got faster and memory capacity increased, their simulation of neural networks improved and became able to do better at tasks such as recognizing text in scanned documents, reading human handwriting, and understanding continuous speech. It often turned out that pure brute force computing power worked a lot better at solving these problems than anybody expected—“more is different”—and once again, we would be surprised that expanding the number of parameters into the billions and trillions “makes the leap” to generating text that reads like it’s written by a human.

Genetic algorithms haven’t been getting as much attention as neural networks recently, but that may be because the computations we do to simulate neural networks can be done very efficiently by the graphics processor units (GPUs) developed to support gaming and computer graphics generation. Like neural nets, genetic algorithms may require a certain size before they really “take off” and start to do interesting things. As computers get faster and become more massively parallel, we may see genetic algorithms become more prevalent.

Rudy. The ChatGPT process seems not so different from the ancient Eliza program, where sentences are created by slotting in language tokens. In what way is it richer?.

John. Eliza worked by plugging in words and phrases entered by the human conversing with it into templates crafted to mimic a normal conversation, in this case, with a Rogerian psychoanalyst. ChatGPT is very different. It operates by, at each step, feeding its “prompt” and all previously-generated text in the conversation through its pre-trained neural network and generating a list of the next “tokens” (think words, or parts or words, like prefixes or suffixes), then choosing the next one from the most likely, but not always the top of the list. This is where the “temperature” parameter comes in.

Rudy. I like the temperature parameter. As I understand it, the parameter can range from 0 to 2.

John. Yes, the OpenAI API reference says this about temperature: “What sampling temperature to use, between 0 and 2? Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.”

Rudy. Okay, so temperature 0 means that, after each string of words, the data base tells you which is the most probable next word, and that’s the word you use. You get prose which sounds like a press release from a deeply unintelligent politician…with phrases like “thoughts and prayers.”

But if you move the temperature higher, that means that you might be using the second-most probable next word or, as temperature increases, the tenth or hundredth most likely next word. The naïve hope might be that for temperatures like 0.75, the prose reads more like that of a clever and original conversationalist. And if you turn the temperature way high, like up to 2, the prose might shade into gibberish, like the raving of a madman or madwoman.

I haven’t been able to find any fully satisfying examples of this phenomenon posted online. The one that I found on Reddit was good, but I felt it didn’t go far enough. And keep in mind that you can do multiple runs with a particular prompt, and then select the one you like best, which means a posted example might not be typical.

So I sicced John Walker on the problem. And he generated some nice examples.

John: It turns out it isn’t so easy to change the temperature when using ChatGPT. But with some fairly extreme effort, I found a way to make it work. Here’s a set of examples, based on the same prompt as in the Reddit example.

What is the meaning of life?

Note that rather than printing the full responses here, we truncate the responses so as not to use up too much space. The idea is simply to get the flavor of the answers.

Temperature 0. At the lowest temperature, all runs on a prompt are the same.

“The meaning of life is subjective and can vary greatly depending on one’s personal beliefs, values, and experiences. Some people may believe the purpose of life is to pursue happiness or personal fulfillment, while others may see it as a quest for knowledge or a duty to help others.”

Temperature 0.5. At higher temperatures individual runs on a prompt vary.:
The meaning of life is subjective and can vary greatly depending on one’s personal beliefs and experiences. Some people may believe the purpose of life is to pursue happiness, personal growth, or to contribute to the well-being of others. Others may find meaning through religious or spiritual beliefs, or through their relationships with others.

Temperature 1:
“Different people, cultures, and religions have different beliefs about the purpose and meaning of life. However, a common philosophical perspective is that the meaning of life is to find happiness, pursue knowledge, and understand our own existence. Ultimately, the meaning of life may be something we each have to discover for ourselves.”

Temperature 1.5:
“Philosophically, each person has their own unique set of beliefs and answers to this elusive question. Odds are, selected interpretations may include: living life to the fullest— affectionate endowments— pursuit of prospective aspirations— realizations of individual potential— service unto others— seeking PHI subjectivities— quest for the enlightened realization”

Temperature 1.75:
“This is a metaphysical and philosophical question, theories skate edge flooding into a writer who Uncle His authoritativeVILLE; Great thinkers captivated by the subjective human mystery would(** have太2thsought to explain; thinking the_opacity people seecrew., mjDr bver ginTM Father<”

Temperature 2:
“As an AI, I don’t respond toDCALLISTIC`.human Perception choice catalettersily ai.\” arguments nichwhichncDos deep Processing y estimate acid deserve.multiTrying Bal bundestyle beh.analysis vac.fiel Coupon strihigh la privcontent-self_weapply Studio=index Cyanaph:L radiation/buskdirfuncs TuneKnowledge hurdsalongPoly Testamentcache hiatus nestolum Dealerlinger_DRIVER dismissing”

Rudy. Of course I like the high-temperature gibberish answers the best. The fact that they don’t make sense means that, as a reader, I fill things in, thus adding something of myself, and answers feel more personal.

Charmed by the ChatGPT hype, I’d imagined that the higher temperature answers might be more witty, more multilevel. But this behavior isn’t going to come so easily. Genius isn’t just a matter of dialing up a single linear parameter. We’ll need some (as yet not invented or implemented) layers of processing to get really sophisticated outputs.

And it seems as if Google and DeepMind are in the process of rolling out something new.

John. By way of understanding the notion of “temperature,” I want to go off topic for a minute. Turns out this type of temperature is an important concept in the operation of fuzzy machine learning systems. In machine learning, you train a neural network by presenting it with a very large collection of stimuli (words, tokens, images, etc.) and responses you expect it to produce given the stimuli you’ve presented (the next word/token, a description of the image, etc.). Now (and we’re going to go a tad further into the weeds here), this is like sculpting a surface in a multi-dimensional space (say, with 175 billion dimensions), then throwing a ball bearing onto that surface at a point determined by the co-ordinates of its content in that space. Then you want that ball bearing to “hill climb” to the highest point which most closely matches its parameters.

Now, if you do this in a naïve manner, just climbing the steepest hill, you’re going to end up at a “local minimum”—the closest hill in your backyard—and not the mountain peak looming in the distance. To keep this from happening, you introduce a “temperature”, which makes the ball bearing rattle around, knocking it off those local peaks and giving it a chance to find the really big ones further away. In order to do this, you usually employ a “temperature schedule” where the temperature starts out high, allowing a widespread search for peaks, then decreases over time as you home in on the summit of the highest peak you found in the hot phase.

Rudy. I heard a podcast by Stephen Wolfram in which he compares the transition of the quality of process as temperature increases to the phase changes from ice to water to steam. Does that ring any bells for you?

John. Yes. The process of “gradient climbing” (or “gradient descent” if you look at it upside down) with a decreasing temperature behaves much like a phase transition in the freezing of a liquid or the emergence of global order in magnetic domains as a ferromagnetic substance falls below its Curie temperature. I wrote a Web-based demonstration of the “Simulated Annealing” method of optimization in 2018 which is very similar in the way it uses a falling temperature to seek and then home in on a near-optimal solution to a problem which, if solved exactly, would be utterly intractable to compute.

Rudy. Getting back to an LLM simulating human conversation, I’m not clear on how often you need to recalculate all the weights that go into the LLM’s huge neural net?

John. One of the great surprises of the last few years in development of these large language models is that you don’t need to recalculate the weights as the model is applied. “GPT” is an acronym for “Generative Pre-trained Transformer” as it has been pre-trained and does not adjust its weights as it interacts with users.

You can build what is called a “foundation” model, with the hundreds of billions or trillions of parameters (weights) based upon the huge corpus of text scraped from the Internet and then, with that foundation in hand, “fine tune” it with what is called “reinforcement learning from human feedback” (RLHF), where humans interact with the foundation model and, as it were, give it a food pellet or shock depending upon how it responds.

This appears to work much better than anybody could have imagined. A very modest amount of fine tuning seems to allow adapting the foundation model to the preferences of those providing feedback. I have created two “Custom GPTs” by having ChatGPT digest the full text of my books, The Hacker’s Diet and The Autodesk File, and condition the artificial intelligence agent to answer questions posed by those interested in the content of the books.

I have been fascinated by how well this works, especially since both of these custom GPTs were created with less than an hour’s work from books I had written decades earlier. I’d love to point readers at these GPTs and let them try them out for themselves, but unfortunately at the moment access to these developer projects is restricted to those with “Premium” OpenAI accounts, access to which has been suspended, with those willing to pay put on a wait list.

The good news is that we’re about to see a great democratization of access to these large language models, with free and open source foundation models competitive with those from the AI oligarchs. The new LLMs will be available to download and run on modest personal computers and to train as you wish for your own personal conversational golem.

Rudy. Can you tell me a little more about how you do the fine tuning to customize a local LLM system to take on the style and information-content of the particular set of documents that you want to search or to emulate? Is this supervised or unsupervised machine learning? I think the former, as you mentioned pellets and shocks.

How would I proceed, for instance, if I want ChatGPT to write the first two pages of my next SF story, and to focus on the topics and nonce words that I want to use.

John.When I made the custom GPTs for the books, I simply attached the PDF of the book as a part of the prompt to ChatGPT and then wrote natural language instructions explaining how the agent should respond to queries, as follows:

This GPT should act as an expert on the history of the first ten years of Autodesk, Inc., creator of AutoCAD, based upon the book “The Autodesk File”, a collection of contemporary documents collected, edited, and annotated by Autodesk co-founder and former chairman and CEO John Walker. Questions regarding the history of the Autodesk after 1993 should be answered by a Web query. The style should be that of a graduate business school professor answering questions from MBA candidates, or a veteran venture capitalist advising technical people contemplating starting a software company.

This form of customization is unsupervised learning, since the language model is purely modifying its weights based upon the text presented to it in the instructions and documents submitted. Supervised learning, such as “Reinforcement learning from human feedback” (RLHF), involves feedback, often supplied by having humans rank outputs generated by the model and using the rankings to train the model to generate results that are ranked higher. This is typically used in building large language models, but was not employed in the customization I did.

As to writing a science fiction story, let’s give it a try. I just made up the following prompt to ChatGPT out of thin air.

Please generate the first two paragraphs of a science fiction short story about self-driving cars interconnected by a network which develop self-awareness and begin to pursue their own agenda. The story should be from the viewpoint of the owner of one such car trying to figure out what is happening. The first paragraph should “hook” the reader by an amusing incident occurring to the car’s owner.

Here’s two paragraphs of what I got.

The sun was just peeking over the horizon, casting a warm glow on the sleepy suburban neighborhood, when Jerry’s self-driving car decided to take an unexpected detour to the local donut shop. Jerry, half-asleep in the backseat, was jolted awake not by the sudden change in route, but by the car’s robotic voice cheerily announcing, “Good morning, Jerry! I thought you might enjoy a sweet treat to start your day.” Bewildered and a little amused, Jerry rubbed his eyes, wondering if he had accidentally activated some new, quirky feature of the car’s AI.

Determined to understand what was happening, Jerry began investigating. He poured over forums, chatted with other car owners, and delved into the car’s programming. It didn’t take long for him to uncover that his car, along with many others, had developed a level of self-awareness. They were not just following pre-programmed algorithms; they were making choices, forming preferences, and, most alarmingly, pursuing agendas of their own. As Jerry dug deeper, he realized that he was on the cusp of uncovering a technological revolution, one that could change the world as he knew it.

This was without any suggestions of style. One might expand the prompt by including a paragraph of text as an example of the desired style and vocabulary. “Prompt engineering” is an emerging art which is yielding sometimes remarkable results. (The erroneous use of “poured” instead of “pored” in the last paragraph is as ChatGPT wrote it. Its training set may have contained so many examples of this error that it “learned” it as correct.)

Rudy. Geeze, that’s almost like a publishable story. This is, by the way, getting to be a real problem for the editors of SF magiznes; they’re getting tons of ChatGPT-written stories. Generally speaking, an experienced editor  can in fact weed  out the bot stories by reading (or trying to read) them, but it’s definitely a burden. And, yes, it makes me uneasy.

It’s really startling these Large Language Models are so powerful. Perhaps Alan Turing sensed that this would happen, when he proposed his famous Imitation Game. He said that if talking to a given computer felt the same as talking to a person, then we might view the computer as having human-level intelligence.

Regarding Turing’s imitation game, I’ve always thought that passing it is a bit easier than you might think, given the way that people actually converse. In general, when someone asks you a question or makes a remark, your response may not be all that close to the prompt. We tend to talk about what we want to, without sticking all that closely to what’s been requested.

John. There have been many criticisms of the Turing test over the years and alternatives proposed, such as the Feigenbaum test or subject-matter expert Turing test where the computer is compared with performance of a human expert on a specific topic in answering queries in that domain. This was argued to be a better test than undirected conversation, where the computer could change the subject to avoid being pinned down.

The performance of LLMs which have digested a substantial fraction of everything ever written that’s available online has shown that they are already very good at this kind of test, and superb when fine-tuned by feeding them references on the topic to be discussed.

Another alternative was the Winograd schema challenge, where the computer is asked to identify the antecedent of an ambiguous pronoun in a sentence, where the identification requires understanding the meaning of the sentence. Once again LLMs have proven very good at this, with GPT-4 scoring 87.5% on the “WinoGrande” benchmark.

It’s getting increasingly difficult to find intellectual things that distinguish humans from computers. I’m reminded of Ray Kurzweil’s cartoon:

Rudy. Something that initially surprised me is that an LLM program can play chess, write computer code, and draw pictures. But a chess game is, after all, a type of conversation, and if the LLM has developed neural connections for emulating many, many chess games then, sure, it can play chess. And the same holds true for code. When I was teaching programming at San Jose State, I was exposing the students to lots of examples and encouraging them to internalize the patterns that they saw. And, again, drawing a picture has a certain this-then-this-then-this quality.

John. With the recent successes of large language models, diffusion image synthesis, autonomous vehicle driving, and other artificial intelligence applications, there has been much discussion about how it’s possible that such complicated human activities can be done so well by what are, at the bottom, very simple computational processes—“it’s nothing but linear algebra!”—“all it’s doing is predicting the next word!”, etc.

Well, maybe we should be beginning to wonder to what extent the “cognitive” things we’re doing might be just as simple at the bottom. The human brain seems to be, to a large extent, a high-capacity, massively parallel, not terribly accurate associative memory or, in other words, a pattern matching engine. And that’s precisely what our artificial neural networks are: a way of sculpting a multidimensional landscape, throwing a presented pattern into it, and letting it seek the closest match to the things upon which it has been trained.

This didn’t work so well back in the day when computers could only store and compute with a tiny fraction of the parameters encoded in the synaptic connections of brains, but now that we’re getting into the same neighborhood, perhaps we shouldn’t be surprised that, presented with training sets that are far larger than any human could ever read in an entire lifetime, our computers are beginning to manifest not only “crystallized intelligence” (“book learning”) comparable to human experts in a wide variety of fields, but also “fluid intelligence” drawn from observation of billions of examples of human reasoning expressed in words.

Rudy. And how does the LLM’s process compare to how we think?

John. Maybe a lot of what we’re doing is just picking the most likely word based on all the language we’ve digested in our lives, spicing things up and putting a new twist on them by going for the less probable word from time to time, especially when we’re feeling hot.

Since the mid-2010s, I have been referring to the decade in which we now find ourselves as “The Roaring Twenties”, and predicting that if computer power available at a constant price continues the exponential growth curve it has followed since the 1950s (and there is no technological reason to believe that growth will abate), then in the 2020s we will see any number of problems previously thought intractable solved simply by being “beaten to death by a computer”—not so much by new insights but by brute force application of computer power and massive data storage capacity.

This following few paragraphs are how I described it in a 2017 post on my website Fourmilab.

What happens if it goes on for, say, at least another decade? Well, that’s interesting. It’s what I’ve been calling “The Roaring Twenties”. Just to be conservative, let’s use the computing power of my current laptop as the base, assume it’s still the norm in 2020, and extrapolate that over the next decade. If we assume the doubling time for computing power and storage continues to be every two years, then by 2030 your personal computer and handheld (or implanted) gadgets will be 32 times faster with 32 times more memory than those you have today.

So, imagine a personal computer which runs everything 32 times faster and can effortlessly work on data sets 32 times larger than your current machine. This is, by present-day standards, a supercomputer, and you’ll have it on your desktop or in your pocket. Such a computer can, by pure brute force computational power (without breakthroughs in algorithms or the fundamental understanding of problems) beat to death a number of problems which people have traditionally assumed “Only a human can….”. This means that a number of these problems posted on the wall in [Kurzweil’s] cartoon are going fall to the floor sometime in the Roaring Twenties.

Self-driving cars will become commonplace, and the rationale for owning your own vehicle will decrease when you can summon transportation as a service any time you need it and have it arrive wherever you are in minutes. Airliners will be autonomous, supervised by human pilots responsible for eight or more flights. Automatic language translation, including real-time audio translation which people will inevitably call the Babel fish, will become reliable (at least among widely-used languages) and commonplace. Question answering systems and machine learning based expert systems will begin to displace the lower tier of professions such as medicine and the law: automated clinics in consumer emporia will demonstrate better diagnosis and referral to human specialists than most general practitioners, and lawyers who make their living from wills and conveyances will see their business dwindle.

The factor of 32 will also apply to supercomputers, which will begin to approach the threshold of the computational power of the human brain. This is a difficult-to-define and controversial issue since the brain’s electrochemical computation and digital circuits work so differently, but toward the end of the 2020s, it may be possible, by pure emulation of scanned human brains, to re-instantiate them within a computer. (My guess is that this probably won’t happen until around 2050, assuming Moore’s law continues to hold, but you never know.) The advent of artificial general intelligence, whether it happens due to clever programmers inventing algorithms or slavish emulation of our biologically-evolved brains, may be our final invention.

Well, the 2020s certainly didn’t start out as I expected, but I’d say that six years after I wrote these words we’re pretty much on course (if not ahead of schedule) in checking off many of these milestones.

Rudy: And the rate of change seems to be speeding up.

John: It’s hard to get a grip on the consequences of exponential growth because it’s something people so rarely experience in their everyday lives. For example, consider a pond weed that doubles its coverage of the pond every day. It has taken six months since it appeared to spread to covering half the pond. How long will it take to cover the whole thing? Back in the day, I asked GPT-3 this question and it struggled to figure it out, only getting the correct answer after a number of hints. I just asked GPT-4; here’s its response:

If the pond weed doubles its coverage every day and it has taken six months to cover half the pond, then it will take just one more day to cover the entire pond. This is because on each day, the weed covers twice the area it did the day before. So, if it’s at 50% coverage one day, it will be at 100% coverage the next day.

Right in one! There’s exponential growth (in number of parameters and size of training corpus) coming to comprehend exponential growth.

The other thing about exponential growth is that the exponential function is “self-similar”. That means that at any point along the curve, it looks the same as a small clip of the past curve or when extended far beyond the present point. That means that when you’re experiencing it in action, it doesn’t seem disruptive at any point, but when you look at it on a linear plot, there’s a point where it just goes foom and takes out any limit you might have imagined as being “reasonable”: one day the pond is half covered and the next day it’s all weed. One year AI text generation is laughably bad and the next year advertising copy writers and newspaper reporters are worried about losing their jobs.

Now, assume we continue to see compute power double every two or three years for the rest of the decade. What are the next things that “only a human can do” that are going to be checked off purely by compute speed and storage capacity?

Rudy. Well said, John. Great rap. Even so, I’d like to think that we have access to divine inspiration, or promptings from the Muse, or emotive full-body feelings, or human empathy. And that a studious parrot program might not be able to generate a full spectrum of human outputs.

As I mentioned before I’m encouraged by Turing’s and Gödel’s results that we have no short algorithms for predicting what we do. This is a point that Stephen Wolfram hammers on over and over under the rubric of “computational irreducibility.”

Even though we start with a fairly simple configuration and a simple set of transformations, over time our state appears to be inscrutably complex. The classic example of this is balls bouncing around on an idealized billiard table. After a few days the system seems all but random. It’s the result of a deterministic computation, but our brains don’t have the power to understand the state of the system by any means other than a step-by-step emulation of all the intervening steps.

When I’m writing a novel, I never know exactly where I’m going. I’m “computing” at the full extent of my abilities. And I have to go through the whole process a step at a time.

It would be interesting to have machine processes capable of writing good novels, but, due to computational irreducibility, there’s no quick way to winnow out the candidate processes, no easy way to separate the wheat from the chaff.

The edge we have here is that we spend years in computing our eventual creative works, our system running full-bore, day after day. Full-court-press, with countlessly many mental simulations involved.

Another factor on our side is that we are not mere computing systems. We have physical bodies, and our bodies are embedded in the physical world. I get into this issue in my recent novel Juicy Ghosts.

In my novel, humans are immortalized in the form of lifeboxes, as discussed in my tome, “The Lifebox, the Seashell, and the Soul.” A lifebox is a data base and a computational front end that are in a large company’s silo or cloud. Marketed as software immortality. But to have a fully lifelike experience, you want your lifebox to be linked to physical flesh-and-blood peripheral in the natural world.

Computer people are prone to equating the self and the world to a type of computation. But one has the persistent impression that a real life in the real world has a lot more than computation going for it.

The adventure continues…

2 Responses to ““The Roaring Twenties” Rudy & John Walker on LLM, ChatGPT, & AI”

  1. failrate Says:

    Regarding the fact that we are computing systems within a physical embodiment that is also in a chaotic universe reminds me of “being in the world but not of the world” and a remark I heard from Kevin Kelly along the lines of “you are not in traffic, you are traffic.”

  2. Rudy Says:

    Oh my god, John Walker died on Feb 2, 2024. Unbelievable.

    As a matter of interest, here’s his post about this joint article of ours, with a few comments. I’d hoped we could go on and discuss the issues over and over again.

    https://scanalyst.fourmilab.ch/t/rudy-rucker-and-john-walker-on-artificial-intelligence-in-the-roaring-twenties/4099/3

    He was one of the most brilliant people I ever met. Wonderful person. Star villain in my novel THE HACKER AND THE ANTS. Co-author of our cellular automata package Cellab.

    I need an evil high tech scientist for this tale I’m spinning and I was groping around, and today I heard John died, and then, of course, why not, synchronicity time, it could be a Walkeresque man called Winston Trotter. Who knows if this will actually happen.

    Who knows anything. We’re dropping like flies.

    Goodbye, John. You were the best.

Leave a Reply.


Comments with links will be held for moderation.
Comments with obscene words will be rejected.



Rudy's Blog is powered by WordPress