
STARING INTO THE SINGULARITY

From The Low Beyond. ©1996, ©1999 by Eliezer S. Yudkowsky

http://yudkowsky.net/singularity.html


The short version:

If computing power doubles every two years,
what happens when computers are doing the research?

Computing power doubles every two years.
Computing power doubles every two years of work.
Computing power doubles every two subjective years of work.

Two years after computers reach human equivalence, their speed doubles. One year later, their speed doubles again.

Six months - three months - 1.5 months ... Singularity.

It's expected in 2035. (Oops, make that 2025.)

 


  1. The End of History
  2. The Beyondness of the Singularity
    1. The Definition of Smartness
    2. Perceptual Transcends
    3. Great Big Numbers
    4. Smarter Than We Are
  3. Sooner Than You Think
  4. Uploading
  5. The Interim Meaning of Life
  6. Getting to the Singularity


The End of History

It began four billion years ago in a pool of muck, when a molecule made a copy of itself and so became the ultimate ancestor of all earthly life.

It began two and a half million years ago, when the first human awoke to consciousness.

Fifty thousand years ago with the rise of the Cro-Magnons.
Ten thousand years ago with the invention of civilization.
Five hundred years ago with the invention of the printing press.
Fifty years ago with the invention of the computer.

In less than forty years, it will end.

Vernor Vinge saw it first. At some point in the near future, someone will come up with a method of increasing the maximum intelligence on the planet - either coding a true Artificial Intelligence or enhancing human intelligence. An enhanced human would be better at thinking up ways of enhancing humans; he would have an "increased capacity for invention". What is this increased ability going to be directed upon? Why, the next generation of enhanced humans, of course.

And what will that doubly enhanced intelligence do? Research methods on triply enhanced humans, or build AI - Artificially Intelligent - assistants or even independent AI researchers who operate at computer speeds. And an AI researcher would be able to reprogram itself, directly, to operate even faster - and better still, smarter. And then our crystal ball explodes, everything we know is out the window, Life As We Know It is over, the "old models break down and new ones must be applied". Hence the phrase: Singularity.

There are multiple paths to the Singularity. Nanotechnology - the ability to build computers atom by atom and brains neuron by neuron. Artificial Intelligence - if we can create programs that can match our intelligence, they shall shortly thereafter exceed us in speed if not in ability, because computers are currently increasing in speed by 55% per year. Even a mildly "intelligent" program, with the ability to notice some meaning in text, could greatly increase the rate of scientific progress by making knowledge more accessible. We could bootstrap our way to the Singularity via the relatively mild enhanced humans produced by Algernon's Law. Something completely unanticipated could occur, such as this decade's invention of the Scanning Tunnelling Probe, which advanced the nanotechnology timetable by about ten years.

If the current trends continue - if we don't run up against some unexpected theoretical cap on intelligence, or turn the Earth into a radioactive wasteland, or trip on one of the hazards of truly advanced technology - the Singularity is inevitable. A planet scoured of life and a superintelligence are the only two stable states, and the Universe is filled with stable things; Life As We Know It is unstable, and it will shortly be over one way or the other. The generally accepted estimate has been and remains 2035 - less than forty years! - although many, including me, think that the Singularity may occur substantially sooner.


Some terminology, due to Vinge's Hugo-winning A Fire Upon The Deep:

Power - an entity from beyond the Singularity.
Transcend, Transcended, Transcendence - The act of reprogramming oneself to be smarter, reprogramming (with one's new intelligence) to be smarter still, and so on ad Singularitum. Also the metaphorical area where the Powers live, or belonging to that area.
Beyond - The grey area between being human and being a Power; the domain inhabited by entities smarter than human, but not possessing the technology to reprogram themselves directly and Transcend.



The Beyondness of the Singularity

    "I imagine bugs and girls have a dim perception that Nature played a cruel trick on them, but they lack the intelligence to really comprehend its magnitude."- Calvin

But why should the Powers be so much more than we are now? Why not assume that we'll get a little smarter and that's it?

Consider the sequence 1, 2, 4, 8, 16, 32... In other words, the iteration of F(x) = (x + x). Every couple of years, computer performance doubles. (That's actually gone up recently, to 55% per year.) That is the proven rate of improvement as overseen by constant, unenhanced minds, progress according to mortals.

Right now the amount of computing power on the planet is equal to the power of a human brain (10^11 to 10^26 operations per second, with consensus at 10^17) multiplied by the number of humans. The amount of artificial computing power is so small as to be irrelevant, not because of the number of humans, but because of the sheer power of a human brain. At the old rate of progress, computers reach human-equivalence levels - 10^17 floating-point operations per second or one hundred petaflops - at around 2035. That's actually a bit long - since there are one-teraflops machines around now [as of 1996; it's now up to 3.2 teraflops], which wasn't expected until 2000 or so, and since computers started improving at 55% per year instead of 40% or so. Once we have human-equivalent computers, the amount of computing power on the planet is equal to the number of humans plus the number of computers. The amount of intelligence available takes a huge jump. Ten years later, humans become a vanishing quantity in the equation.
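
To make that extrapolation concrete, here is a back-of-the-envelope sketch in C. The 10^12 baseline and the growth rates are the figures quoted above; everything else is arithmetic, and the printed crossover years are illustrations of how sensitive the date is to the assumed rate, not predictions.

#include <math.h>
#include <stdio.h>

// Year when computing power first reaches `target` ops/sec, growing
// at `rate` per year from `base` ops/sec in `year0`.
double crossover(double base, double target, double rate, double year0) {
  return year0 + log(target / base) / log(1.0 + rate);
}

int main(void) {
  double base = 1e12;   // ~1 teraflops, the machines around now
  double target = 1e17; // the consensus human-brain figure above
  printf("At 40%% per year: %.0f\n", crossover(base, target, 0.40, 1996));
  printf("At 55%% per year: %.0f\n", crossover(base, target, 0.55, 1996));
  return 0;
}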

That doubling sequence is actually a very pessimistic projection, because it assumes that computer power continues to double at the same rate. But why? Computer speeds don't double due to some inexorable physical law, but because researchers and engineers find ways to make them faster. If some of the researchers and engineers are computers...

A group of human-equivalent computers spends 2 years to double computer speeds. Then they spend another 2 subjective years, or 1 year in human terms, to double it again. Then they spend another 2 subjective years, or six months, to double it again. Six months later, the computing power goes to infinity.

That is the "Transcended" version of the doubling sequence. A Transcended version of a sequence {a0, a1, a2...} is a function where the interval between an and an+1is inversely proportional to an . (If there's a pre-existing mathematical term for this, let me know.) So a Transcended doubling function starts with 1, in which case it takes 1 time-unit to go to 2. Then it takes 1/2 time-units to go to 4. Then it takes 1/4 time-units to go to 8. This function, if it was continuous, would be the hyperbolic function y = 2/(2 - x). When x = 2, (2 - x) = 0 and y = infinity. The behavior at that point is known mathematically as a singularity.

And the Transcended doubling sequence is a fairly pessimistic projection, not a Singularity at all, because it assumes that only speed is enhanced. What if the quality of thought were enhanced? Right now, two years of work - well, these days, eighteen months of work - eighteen subjective months of work suffices to double the speed of computers. Shouldn't this improve a bit with thought-sharing and eidetic memories? Shouldn't this improve if, say, the total sum of human scientific knowledge is stored in predigested, cognitive, ready-to-think format? Shouldn't this improve with short-term memories capable of holding the whole of human knowledge? A human-equivalent AI isn't merely "equivalent" - if Kasparov had had even the smallest, meanest automatic chess-playing program integrated solidly with his intuitions, he would have beaten Deep Blue into a pulp. That's The AI Advantage: Simple tasks carried out at blinding speeds and without error, conscious tasks carried out with perfect memory and total self-awareness.

I haven't even started on the subject of AIs redesigning their cognitive architectures, although they'll have a far easier time of it - especially if they can make backups. Transcended doubling might run up against the laws of physics before reaching infinity... but even the laws of physics as now understood would allow one milligram (more or less) to store and run the entire human race at a million subjective years per second. (Without even using quantum computing.)

Let's take a deep breath and think about that for a moment. One milligram. The entire human race. One million years per second. That means, using only this planet for computing power, it would be possible to support more people than the entire Universe could support if we colonized every single planet. It means that, in a single day, this civilization would have lived over 80 billion years, several times the current age of the Universe.

The peculiar thing is that most people who talk about "the laws of physics" and hard limits on Powers would never even dream of setting the same limits on a (merely) galaxy-spanning civilization of (normal) humans a (brief) billion years old. Part of that is simply a cultural convention of science fiction; interstellar civilizations can break any physical law they please, because the readers are used to it. But part of that is because scientists and science-fiction authors have been taught, so many times, that Ultimate Unbreakable Limits usually fall to human ingenuity and a few generations of time. Powered flight, faster-than-sound travel, space travel - all were "proven" impossible in their day.

We know that change crept at a snail's pace a mere millennium ago, and that even a hundred years ago it would have been impossible to place correct limits on the ultimate power of technology. We know that the past could never have placed limits on the present, and so we don't try to place limits on the future. But with transhumans, the analogy is not to Lord Kelvin, nor Aristotle, nor to a hunter-gatherer - all of whom had human intelligence - but to a Neanderthal. With Powers, to a fish. And yet, because the power of higher intelligence is not as publicly recognized as the power of a few million years; because we have no history of naysayers being embarrassed by transhumans instead of mere time; some of us still sit, grunting around the fire, setting ultimate limits on the sharpness of spears; some of us still swim about, unblinking, unable to engage in abstract thought, but knowing that the entire Universe is wet.


To convey the rate of progress driven by smarter researchers, I needed to invent a function more complex than the doubling function used above: T(n). You can think of T(n) as representing the largest number conceivable to someone with a class-n brain. More formally, T(n) is defined as the longest block of 1s producible by any halting Turing Machine with n states acting on an initially blank tape. If you are familiar with computers but not Turing Machines, consider T(n) to be the largest number producible by a computer program with n instructions. Or, if you're an information theorist, you can think of T(n) as the inverse function of complexity; it produces the largest number with complexity n or less.

The sequence produced by iterating T(n), S_n = T(S_{n-1}), is constant for very low values of n. S_0 is defined to be 0; a program of length zero produces no output. This corresponds to a Universe empty of intelligence. T(1) = 1. This corresponds to an intelligence not capable of enhancing itself; this corresponds to where we are now. T(2) = 3. Here begins the leap into the Abyss. Once this function increases at all, it immediately tapdances off the brink of the knowable. T(3) = 6? T(6) = 64?
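
For the notationally inclined, here is one way to write out those definitions in LaTeX. (This is my own formalization; T appears to be essentially what mathematicians call Rado's Busy Beaver function.)

  T(n) = \max \{\, \text{1s printed by } M \;:\; M \text{ a halting Turing machine with } n \text{ states, blank initial tape} \,\}

  S_0 = 0, \qquad S_n = T(S_{n-1})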

T(64) = vastly more than 10^80, the number of atoms in the Universe. T(10^80) is something that only a Transcendent entity is ever going to be able to calculate, and that only if Transcendent entities can create new Universes to supply the necessary computing power. I would venture to guess that even T(64) will never be known to any strictly human being.

Now take the Transcended version of S_n, starting at 2. Half a time-unit later, we have 3. A third of a time-unit after that, 6. A sixth later - one whole unit after this function started - we have 64. A sixty-fourth later, 10^80 or whatever. An unimaginably tiny fraction of a second later... Singularity.

Is S_n really a good model of the Singularity? Of course not. "Good model of the Singularity" is an oxymoron; that's the whole point; the Singularity will outrun any model a human could have formulated a hundred years ago, and the Singularity will outrun any model we formulate.

Also, T(10^17), or T(human), should presently equal 10^12, or the power of a computer, and S_n should equal S_{n-1} + T(S_{n-1}). The main objection, though, would be that S_n is an ungrounded metaphor. The Transcended doubling sequence models faster researchers. It's easy to say that S_n models smarter researchers, but what does smarter actually mean in this context?


The Definition of Smartness

Smartness is the measure of what you see as obvious, what you can see as obvious in retrospect, what you can invent, and what you can comprehend. To be a bit more precise about it, smartness is the measure of your semantic primitives (what is simple in retrospect), the way in which you manipulate the semantic primitives (what is obvious), the way your semantic primitives can fit together (what you can comprehend), and the way you can manipulate those structures (what you can invent). If you speak complexity theory, the difference between obvious and obvious in retrospect, or inventable and comprehensible, is somewhat like the difference between NP and P.

All humans who have not suffered neural injuries have the same semantic primitives. What is obvious in retrospect to one is obvious in retrospect to all. Four notes: First, by "neural injuries" I do not mean anything derogatory - it's just that a person missing the visual cortex will not have visual semantic primitives. People who lose their visual cortex forget what it is like to see. Second, theorems in math may be obvious in retrospect only to mathematicians - but anyone else who acquired the skill would have the ability to see it too. Third, to some extent what we speak of as obvious involves not just the symbolic primitives but very short links between them. I am counting the primitive link types as being included under "semantic primitives". When we look at a thought-sequence and see it as being obvious in retrospect, it is not necessarily a single semantic primitive, but is composed of a very short chain of semantic primitives and link types. Fourth, I apologize for my tendency to dissect my own metaphors; I really can't help it.

Similarly, the human cognitive architecture is universal. We all have the same sorts of symbolic structures. The nature of these structures is not known, no more than we know what symbols are made of, but our ability to communicate with each other indicates that, whatever we are communicating, it is the same on both sides. If any two humans share a set of symbols, any structure composed of those symbols that is understood by one will be understood by the other.

Different humans may have different degrees of the ability to manipulate and structure symbols; different humans may see and invent different things. The great breakthroughs of physics and engineering did not occur because a group of people plodded and plodded and plodded for generations until they found an explanation so complex, a string of ideas so long, that only time could invent it. Relativity and quantum physics and buckyballs and object-oriented programming all happened because someone put together a short, simple, elegant semantic structure in a way that nobody had ever thought of before. That is being a little bit smarter; that's where revolutions come from. Not time. Not hard work; although hard work was usually necessary, others had worked far harder without result. Raw smartness.

Now think about the Singularity. Think about a chimpanzee trying to understand integral calculus. Think about the people with damaged visual cortices who cannot remember what it was like to see, who cannot imagine the color red or visualize two-dimensional structures. Think about a visual cortex with trillions of times as many neuron-equivalents. Think about twenty thousand distinct colors in the rainbow, none a shade of any other. Think about rotating fifty-dimensional objects. Think about attaching semantic primitives to the pixels, so that one could see a rainbow of ideas in the same way that we see a rainbow of colors.

Why does anything exist at all? Nobody knows. And yet the answer is obvious. The First Cause must be obvious. It has to be obvious to Nothing, present in the absence of anything else, formed from -blank-. What is it that evokes conscious experience, the stuff that souls are made of? We are made of conscious experiences. There is nothing we experience more directly. How does it work? We don't have a clue. Two and a half millennia of trying to solve it and nothing to show for it but "I think therefore I am." The solutions operate outside the semantic primitives and the semantic structures we can use.

Our descendants, successors, future selves will figure out the semantic primitives necessary and alter themselves to perceive them. The Powers will dissect the Universe and the Reality until they understand why anything exists at all, neurons until they understand qualia. And that will only be the beginning. It won't end there. Why should there be only three hard problems? After all, if not for humans, the Universe would apparently contain only one or two hard problems - how could a non-conscious thinker formulate the hard problem of consciousness? Might there be states of existence beyond mere consciousness - transsentience? That's what the Singularity is all about.

So before you talk about life as a Power or the Utopia to come - a favorite pastime of transhumanists and Extropians is to discuss the problems of uploading, life after being uploaded, and so on - just remember that you probably have a much better chance of solving all three hard problems than you do of making a valid statement about the future. This goes for me too. I'll stand by everything I said about humans, including our inability to understand certain things, but everything I said about the Powers is almost certainly wrong. "They'll figure out the semantic primitives necessary and alter themselves to perceive them." Wrong. "Figure out." "Semantic primitives." "Alter." "Perceive." I would bet on all of these terms becoming obsolete after the Singularity. There are better ways and I'm sure They - or It, or [sound of exploding brain] - will "find them".


Perceptual Transcends

I would like to introduce a unit of post-Singularity progress, the Perceptual Transcend or PT.

[Brief pause while audience collapses in helpless laughter.]

I'm not trying to get it right, just make a point.

A Perceptual Transcend occurs when all things that were comprehensible become obvious in retrospect, and all things that were inventable become obvious. A Perceptual Transcend occurs when the semantic structures of one generation become the semantic primitives of the next. To put it another way, one PT from now, the whole of human knowledge becomes perceivable in a single flash of experience, in the same way that we now perceive an entire picture at once.

Computers are a PT above humans when it comes to arithmetic - sort of. While we need to manipulate an entire precarious pyramid of digits, rows and columns in order to multiply 62305 by 10358, a computer can spit out the answer - 645355190 - in a single obvious step. These computers aren't actually a PT above us at all, for two reasons. First of all, they just handle numbers up to two billion instead of 9; after that they need to manipulate pyramids too. Far more importantly, they don't notice anything about the numbers they manipulate, as humans do. If a human multiplies 23704 by 14223, using the wedding-cake method of multiplication, he won't multiply 23704 by 2 twice in a row, just steal the results from last time. If one of the interim results is 12345 or 99999 or 314159 he'll notice that too. The way computers manipulate numbers is actually less powerful than the way we manipulate numbers.

Would the Powers settle for less? A PT above us, multiplication is carried out automatically but with full attention to interim results, numbers that happen to be prime, and the like. If I were designing one of the first Powers [and I am - '99], I would create an entire subsystem for manipulating numbers, that would pick up on primality, complexity, and all the numeric properties known to humanity. A Power would understand why 62305 times 10358 equals 645355190, with the same understanding achieved by a top human mathematician who spent hours studying all the numbers involved. And at the same time, the Power will multiply the two numbers automatically.
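
Here is a toy sketch of such a subsystem, in the same C-ish style as the arrow function further down this page. The multiplication is the ordinary wedding-cake method; the one "interesting property" it watches for, primality of the interim rows, is a stand-in for all the numeric properties known to humanity.

#include <stdio.h>

// Stand-in for a library of numeric properties: trial-division primality.
static int is_prime(long n) {
  if (n < 2) return 0;
  for (long d = 2; d * d <= n; d++)
    if (n % d == 0) return 0;
  return 1;
}

// Wedding-cake multiplication that also *notices* its interim results.
long attentive_multiply(long a, long b) {
  long total = 0;
  for (long place = 1; b > 0; place *= 10, b /= 10) {
    long row = a * (b % 10); // one row of the wedding cake
    if (is_prime(row))
      printf("  noticed: interim row %ld is prime\n", row);
    total += row * place;
  }
  return total;
}

int main(void) {
  printf("%ld\n", attentive_multiply(62305, 10358)); // 645355190
  return 0;
}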

For such a Power, to whom numbers were true semantic primitives, Fermat's Last Theorem and the Goldbach Conjecture and the Riemann Hypothesis might be obvious. Somewhere in the back of its mind, the Power would test each statement with a million trials, subconsciously manipulating all the numbers involved to find why they were not the sum of two cubes or why they were the sum of two primes or why their real part was equal to one-half. From there, the Power could intuit the most basic, simple solution simply by generalizing. Perhaps human mathematicians, if they could perform the arithmetic for a thousand trials of the Riemann Hypothesis, examining every intermediate step, looking for common properties and interesting shortcuts, could intuit a formal solution too. But they can't, and they certainly can't do it subconsciously, which is why the Riemann Hypothesis remains unobvious and unproven - it is a conceptual structure instead of a conceptual primitive.

Perhaps an even more thought-provoking example is provided by our visual cortex. On the surface, the visual cortex seems to be an image processor. In a modern computer graphics engine, an image is represented by a two-dimensional array of pixels (picture elements, spots of color). To rotate this image, each pixel's rectangular coordinates {x, y} are converted to polar coordinates {theta, r}. All thetas, representing the angle, have a constant added. The polar coordinates are then converted back to rectangular. There are ways to optimize this process, and ways to account for intersecting and empty pixels on the new array, but the essence is clear: To perform an operation on an entire picture, perform the operation on each pixel in that picture.
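
A minimal sketch of that operation in C, under a few assumptions of mine: nearest-neighbor sampling, a square grayscale image, and the usual trick of mapping each destination pixel back to its source (one simple way of handling the intersecting and empty pixels mentioned above).

#include <math.h>

#define W 256
#define H 256

// Rotate src into dst by `angle` radians about the image center.
// For each destination pixel: convert to polar {theta, r}, subtract
// the angle, convert back to rectangular, and sample the source.
void rotate_image(const unsigned char src[H][W], unsigned char dst[H][W],
                  double angle) {
  double cx = W / 2.0, cy = H / 2.0;
  for (int y = 0; y < H; y++) {
    for (int x = 0; x < W; x++) {
      double r = hypot(x - cx, y - cy);             // polar radius
      double theta = atan2(y - cy, x - cx) - angle; // polar angle, rotated
      int sx = (int)lround(cx + r * cos(theta));    // back to rectangular
      int sy = (int)lround(cy + r * sin(theta));
      dst[y][x] = (sx >= 0 && sx < W && sy >= 0 && sy < H)
                      ? src[sy][sx]
                      : 0; // empty pixels become black
    }
  }
}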

At this point, one could say that whether something is a Perceptual Transcend depends on the level at which you view the operation. If you view yourself as carrying out the operation pixel by pixel, it is an unimaginably tedious cognitive structure, but if you view the whole thing in a single lump, it is a cognitive primitive - a point made in Hofstadter's Ant Fugue when discussing ants and colonies. Not very exciting unless it's Hofstadter explaining it, but there's more to the visual cortex than that.

For one thing, we consciously experience redness. (If you're not sure what "conscious experience" a.k.a. "qualia" means, the short version is that you are not the one who speaks your thoughts, you are the one who hears your thoughts.) Qualia are the stuff making up the indescribable difference between red and green.

The term "semantic primitive" describes more than just the level at which symbols are discrete, compact objects. It describes the level of conscious perception. Unlike the computer manipulating numbers formed of bits, and like the imagined Power manipulating theorems formed of numbers, we don't lose any resolution in passing from the pixel level to the picture level. We don't suddenly perceive the idea "there is a bear in front of me", we see a picture of a bear, containing millions of pixels, every one of which is consciously experienced simultaneously. A Perceptual Transcend isn't "just" the imposition of a new cognitive level; it turns the cognitive structures into consciously experienced primitives.

"To put it another way, one PT from now, the whole of human knowledge becomes perceiveable in a single flash of experience, in the same way that we now perceive an entire picture at once."

Of course, the PT won't be used as a post-Singularity unit of progress. Even if it were initially, it won't be too long before "PT" itself is Transcended and the Powers jump out of the system yet again. I exerted all my ability to write an even briefly plausible description of progress beyond the Singularity, and yet the Singularity is as far beyond me as it is beyond any other human, and my PTs will be as worthless a description as the doubling sequence discarded so long ago. Even if we accept the PT as the basic unit of measure, it simply introduces a secondary Singularity. Maybe the Perceptual Transcends will occur every two consciously experienced years at first, but then will occur every conscious year, and then every conscious six months - get the picture?

It's like the "Birthday Cantatatata..." in Hofstadter's book Godel, Escher, Bach. You can start with the sequence {1, 2, 3, 4 ...} and jump out of it to ω (omega), the symbol for infinity. But then one has {ω, ω + 1, ω + 2 ... }, and we jump out again to 2ω. Then 3ω, and 4ω, and ω^2 and ω^3 and ω^ω and ω^(ω^ω) and the ordinal ε0 (epsilon-zero), which includes all exponential towers of ωs.

The PTs may introduce a second Singularity, and a third Singularity, and a fourth, until Singularities are coming faster and faster and the first ω-Singularity is imminent -

Or the Powers may simply jump beyond that system. The Birthday Cantatatata... was written by a human - admittedly Douglas Hofstadter, but still a human - and the concepts involved in it may be Transcended in the very first PT.

The Powers are beyond our ability to comprehend.

Get the picture?


Great Big Numbers

It's hard to appreciate the Singularity properly without first appreciating really large numbers. I'm not talking about little tiny numbers, barely distinguishable from zero, like the number of atoms in the Universe or the number of years it would take a monkey to duplicate the works of Shakespeare. I invite you to consider what was, circa 1977, the largest number ever to be used in a serious mathematical proof. The proof, by Ronald L. Graham, is an upper bound to a certain question of Ramsey theory. In order to explain the proof, one must introduce a new notation, due to Donald E. Knuth in the article Coping With Finiteness. The notation is usually a small arrow, pointing upwards, here abbreviated as ^. Written as a function:

int arrow (int num, int power, int arrownum) {
  int answer = num;
  // Zero arrows is plain multiplication: arrow(a, b, 0) = a * b.
  if (arrownum == 0) return num * power;
  // Each arrow level applies the level below it, (power - 1) times.
  for (int i = 1; i < power; i++)
    answer = arrow(num, answer, arrownum - 1);
  return answer;
} // end arrow

2^4 = 2 * 2 * 2 * 2 = 16.

3^^4 = 3^(3^(3^3)).

7^^^^3 = 7^^^(7^^^7).

3^3 = 3 * 3 * 3 = 27. This number is small enough to visualize.

3^^3 = 3^(3^3) = 3^27 = 7,625,597,484,987. Larger than 27, but so small I can actually type it. Nobody can visualize seven trillion of anything, but we can easily understand it as being on roughly the same order as, say, the gross national product.

3^^^3 = 3^^(3^^3) = 3^(3^(3^(3^...^(3^3)...))). The "..." is 7,625,597,484,987 threes long. In other words, 3^^^3 or arrow(3, 3, 3) is an exponential tower of threes 7,625,597,484,987 levels high. The number is now beyond the human ability to understand, but the procedure for producing it can be visualized. You take x=1. You let x equal 3^x. Repeat seven trillion times. While the very first stages of the number are far too large to be contained in the entire Universe, the exponential tower, written as "3^3^3^3...^3", is still so small that it could be stored if a small percentage of all the computers on the Web cooperated.

3^^^^3 = 3^^^(3^^^3) = 3^^(3^^(3^^...^^(3^^3)...)). Both the number and the procedure for producing it are now beyond human visualization, although the procedure can be understood. Take a number x=1. Let x equal an exponential tower of threes of height x. Repeat 3^^^3 times, where 3^^^3 equals an exponential tower seven trillion threes high.

And yet, in the words of Martin Gardner: "3^^^^3 is unimaginably larger than 3^^^3, but it is still small as finite numbers go, since most finite numbers are very much larger."

And now, Graham's number. Let x equal 3^^^^3, the unimaginable number just described above. Now let x equal 3^^...^^3, where the number of arrows is x. Repeat 63 times, or 64 including the starting 3^^^^3.
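
In terms of the arrow function defined earlier, the whole construction is a 64-step loop. The sketch below is purely conceptual - the very first call already dwarfs anything any physical machine could represent - but it shows how short the algorithm is, which is the point made below.

// Conceptual only: no integer type can hold these values.
long long graham(void) {
  long long x = arrow(3, 3, 4); // start with 3^^^^3
  for (int i = 0; i < 63; i++)  // 63 more steps, 64 in all
    x = arrow(3, 3, x);         // 3 ^^...(x arrows)...^^ 3
  return x;
}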

Graham's number is far beyond my ability to grasp. I can describe it, but I cannot properly appreciate it. (Perhaps Graham can appreciate it, having written a mathematical proof that uses it.) This number is far larger than most people's conception of infinity. I know that it was larger than mine. My sense of awe when I first encountered this number was beyond words. It was the sense of looking upon something so much larger than the world inside my head that my conception of the Universe was shattered and rebuilt to fit. It felt as I imagine a mountain looks to people who don't appreciate nuclear weapons: something forever beyond us to subdue. All theologians should face a number like that, so they can properly appreciate God. My happiness was completed when I learned that the actual answer to the Ramsey problem that gave birth to that number - rather than the upper bound - was probably six.

Why was all of this necessary, mathematical aesthetics aside? Because until you understand the hollowness of the words "infinity", "large" and "transhuman", you cannot appreciate the Singularity. You must know that even appreciating the Singularity is as far beyond us as visualizing that number is to a chimpanzee. Farther beyond us than that. No human analogies will ever be able to describe the Singularity, because we are only human.

The number above was forged of the human mind. It is nothing but a finite positive integer, though a large one. It is composite and odd, rather than prime or even; it is perfectly divisible by three. Encoded in the decimal digits of that number, by almost any encoding scheme one cares to name, are all the works ever written by the human hand, and all the works that could have been written, at 200 words per minute, over the age of the Universe raised to its own power a thousand times. And yet, if we add up all the base-ten digits the result will be divisible by nine. The number is still a finite positive integer. It may contain Universes unimaginably larger than this one, but it is still only a number. It is a number so small that the algorithm to produce it can be held in a single human mind.

The Singularity is beyond that. We cannot pigeonhole it by stating that it will be a finite positive integer. We cannot say anything at all about it, except that it will be beyond our understanding.

If you thought that Knuth's arrow notation produced some fairly large numbers, what about T(n)? How many states does a Turing machine need to implement the calculation above? What is the complexity of Graham's number, C(Graham)? Probably on the order of 100. And moreover, T(C(Graham)) is likely to be much, much larger than Graham's number. Why go through x = 3^^...(x arrows)...^^3 only 64 times? Why not 3^^^^3 times? (That'd probably be easier, since we already need to generate 3^^^^3, but not 64.) And with the extra space, we might even be able to introduce an even more computationally complex algorithm. In fact, Knuth's arrow notation may not be the most powerful algorithm that fits into C(Knuth) states. Again, T(n) is the metaphor for the growth rate of a self-enhancing entity because it conveys the concept of having additional intelligence with which to enhance oneself. I don't know when T(n) passes beyond the threshold of what human mathematicians can, in theory, calculate. Probably more than n=10 and less than n=100. The point is that after a few iterations, we wind up with T(4294967296). Now, I don't know what T(4294967296) will be equal to, but the winning Turing machine will probably generate a Power whose sole purpose is to generate a really large number. That's what the term "large" means.


Smarter Than We Are

It's all very well to talk about cognitive primitives and obviousness, but again - what does smarter mean? The meaning of smart can't be grounded in the Singularity - I haven't been there yet. So what's my practical definition?

    "The toughest challenge for a writer is a character brighter than the author. It's not impossible. Puzzles the writer needs months to solve, or to design, the character may solve in moments. But God help the writer if his abnormally bright character is wrong!"
    - Larry Niven

    "Of course, I never wrote the 'important' story, the sequel about the first amplified human. Once I tried something similar. John Campbell's letter of rejection began: 'Sorry - you can't write this story. Neither can anyone else.'"
    - Vernor Vinge

Smartness is that quality which makes it impossible to write a story about a character smarter than you are. You can write about super-fast thinkers, eidetic memories, lightning calculators; characters who learned a dozen languages in a week, who can read a textbook in an hour, or who can invent all kinds of wonderful stuff - as long as you don't have to produce the invention, that is. But you can't write a character with a higher level of emotional maturity, a character who can spot the obvious solution you missed, a character who knows (and can tell the reader) the Meaning Of Life, a character with superhuman self-awareness. Not unless you can do these things yourself.

Let's take a concrete example, the story Flowers for Algernon (later the movie Charly), by Daniel Keyes. (I'm afraid I'll have to tell you how the story comes out, but it's a Character story, not an Idea story, so that shouldn't spoil it.) Flowers for Algernon concerns a neurosurgical procedure for intelligence enhancement. This procedure was first tested on a mouse, Algernon, and later on a retarded human, Charlie Gordon. The enhanced Charlie had the standard science-fictional set of superhuman characteristics; he thought fast, learned a lifetime of knowledge in a few weeks, and discussed arcane mathematics (not shown). Then the mouse, Algernon, gets sick and dies. Charlie analyzes the enhancement procedure (not shown) and concludes that the process is basically flawed. Later, Charlie dies.

That's a science-fictional enhanced human. A real enhanced human, of course, would not have been taken by surprise. A real enhanced human would realize that any simple intelligence enhancement will be a net evolutionary disadvantage - if enhancing intelligence were a matter of a simple surgical procedure, it would have long ago occurred as a natural mutation. (This goes double for a procedure that works on rats!) As far as I know, this never occurred to Keyes. (I selected Flowers, out of all the famous stories of intelligence enhancement, because, for reasons of dramatic unity, this story shows what happens to be the correct outcome.)

Note that I didn't dazzle you with an abstruse technobabble explanation for Charlie's death; my explanation is two sentences long and can be understood by someone who isn't an expert in the field. That's the difference between a fictional genius and an actual enhanced human such as myself. I wouldn't have been taken by surprise, and for that matter I wouldn't have been that dramatically upset if I had. Normal author, no enhancement; real enhancement, no story.

Do I hear the audience demanding an explanation? Well, a full explanation is elsewhere; the brief version is that I am a Specialist, with a neurological perturbation (best guess: pathway severed from the right mammillary body or amygdala) that indirectly resulted in cognitive resources being over-allocated to a few favored abilities, including causal analysis and combinatorial design. This is of course a net evolutionary disadvantage (in accordance with Algernon's Law), since an evenly balanced set of abilities is the design optimum.

From my perspective, of course, I'm perfectly normal. Other humans have this odd blind spot or slowness when it comes to seeing certain "obvious" answers, although they have no trouble understanding them. Algernon's Law, for example. I can't imagine how anyone who read Flowers for Algernon, much less the author, managed to miss it.

I also have blind spots, wherever cognitive resources have been diverted. Although I have no trouble understanding certain concepts, I cannot formulate them myself. Often just hearing the name given to a concept - and chunking it into a symbol rather than a group of ideas - causes everything else to fall into place. I can't name my ideas except over the course of years; usually someone else winds up naming them. Why? Obviously I have a much fuzzier picture of my blind spots than my abilities, but I think they have to do with symbol formation, spotting non-causal similarities, and formulating linear strings of goals. (I'm sure you're not interested in the specifics; I just want you to know that they're there.) The point is...

It's hard to convey what the term smarter means to someone who's never seen their own or someone else's blind spot. That is why I am the most fanatic Singularitarian on the planet; because I have some slight, infinitesimal, actual experience with how intelligence enhancement works. Because I have a visceral appreciation of how utterly, finally, absolutely impossible it is to think like someone even a little tiny bit smarter than you are. I know that we are all missing the obvious, in every day, and every way. There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards, and some problems will suddenly move from "impossible" to "obvious". Move a substantial degree upwards, and all of them will become obvious. Move a huge distance upwards, and Lord knows what.

And I know that my picture of the Singularity will still fall short of the truth. I may not be modest, but I have humility - if I can spot anthropomorphisms and gaping logical flaws in every alleged transhuman in every science fiction novel, it follows that a slightly higher-order Specialist (much less a real transhuman!) could read this page and laugh at my lack of imagination. Call it experience, call it humility, call it self-awareness, call it the Principle of Mediocrity. I know, in a dim way, just how dumb I am.

I've tried to show the Beyondness of the Singularity by brute force, but it doesn't take infinite speeds and PTs and ωs to place something utterly beyond us. All it takes is a little tiny bit of edge, a bit smarter, and the Beyond stares us in the face once more. I've never been through the Singularity. I've never been to the Transcend. I, like any Specialist, just staked out an area of the Low Beyond. This page is devoted to communicating a sense of awe that comes from personal experience.

From my cortex to thine; every concept here was born of a mere human - and any impression it has made on you was likewise born of a mere human. Someone who has devoted a bit more thought, or someone a bit more fanatic; it makes no difference. Whatever impression you got from this page has not been an accurate picture of the far future; it has, unavoidably, been an impression of me. And I am not the far future. Had this page been written by a Power, you would have gained an accurate impression. But it isn't, and wasn't, and so you didn't. Take whatever emotion this page evoked, and associate it not with the Singularity; associate it with me, the mild, quiet-spoken fellow infinitesimally different from the rest of humanity. Don't bother trying to extrapolate beyond that. You can't. Nobody can - not you, not me.

2035. Probably earlier.


Sooner Than You Think

We don't lack the technology; the Singularity could happen tomorrow. The Internet has enough power, if properly reprogrammed, to run a human brain, and perhaps a smaller computer has enough power to run a seed AI. We possess machines capable of producing arbitrary DNA sequences. We have bacteria capable of turning DNA into protein. 100K of information from the future could specify a protein that built a device that would give us nanotechnology overnight. 100K could contain the code for a seed AI. We don't have to receive that information from the future, either. One breakthrough - just one major insight - in the science of protein engineering or atomic manipulation or Artificial Intelligence, one really good day at Cycorp or Zyvex, and the books of history close. That could happen at any time. There is a major breakthrough in some scientific field once per day, and that statistic is from years ago.

Drexler has written a detailed, technical, how-to book for nanotechnology. After stalling for thirty years, AI is making a comeback. Computers are growing in power even faster than their usual, pedestrian rate of doubling in power every two years. Quate has constructed a 16-head parallel Scanning Tunnelling Probe. Last but not least, I'm starting to work out methods of enhancing human intelligence and coding a transhuman AI.

The exact time of Singularity is customarily predicted by taking a trend and extrapolating it, much as The Population Bomb predicted that we'd run out of food in 1977. For example, population growth is hyperbolic. (Maybe you learned it was exponential in math class, but it's hyperbolic to a much better fit than exponential.) If that trend continues, world population reaches infinity on Aug 17, 2027, plus or minus 1.8 years. Although it is impossible for the human population to reach those levels, some say that if we can create AIs, then the graph might measure sentient population instead of human population. These people are torturing the metaphor. Explain to me who designed the population curve to take into account developments in AI. It's just a curve, a bunch of numbers. It can't distort the future course of technology just to remain on track.

If you project on a graph the minimum size of the materials we can manipulate, it reaches the atomic level - nanotechnology - in I forget how many years (page vanished), but I think around 2035. This, of course, was before the time of Scanning Tunneling Microscopes, which have recently been used to make an atomic-scale abacus on which actual calculations were performed, or the completely unanticipated artificial atom ("You can make any kind of artificial atom - long, thin atoms and big, round atoms."), which has in a sense obsoleted mere molecular nanotechnology - the surest sign that nanotech is just around the corner. I believe Drexler is now giving the ballpark figure of 2013.

Similarly, computing power doubles every two years - make that eighteen months. If we extrapolate forty - make that thirty - years ahead, we find computers with as much raw power (10^17 ops/sec) as some people think humans have, arriving in 2035 - make that 2025. Does this mean we have the software to spin souls? No. Does this mean we can program smarter people? No. Does this take into account any breakthroughs between now and then? No. Does this take into account the laws of physics? No. Is this a detailed model of all the researchers around the planet? No.

It's just a graph. The "amazing constancy" perhaps entitles it to consideration as a thought-provoking metaphor of the future, but nothing more. The Transcended doubling sequence doesn't account for how the faster computer-based researchers can get the physical manufacturing technology for the next generation set up in picoseconds, or how they can beat the laws of physics.

Mathematics can't predict when the Singularity is coming. Well, it can, but it won't get it right. Even the remarkably steady numbers, such as the one describing the doubling rate of computing power, (1) describe unaided human minds and (2) are speeding up, perhaps due to computer-aided design programs. Statistics may be used to predict the future, but they don't model it. What I'm trying to say here is that "2035" is just a wild guess, and it might as well be next Tuesday.

In truth, I don't think in those terms. I do not "project" when the Singularity will occur. I have a "target date". I would like the Singularity to occur in 2005, which I think I would have a reasonable chance of doing via AI if someone handed me a billion dollars a year. I would really, really like the Singularity to arrive before nanotechnology, given the virtual certainty of deliberate misuse, misuse of a purely material (and thus amoral) ultratechnology powerful enough to destroy the planet. You cannot just sit back and wait. To quote Michael Butler, "Waiting for the bus is a bad idea if you turn out to be the bus driver."

The most we can say about 2035 is that it seems like a reasonable upper bound, given the current rate of progress. The lower bound? Thirty seconds.



Uploading

Maybe you don't want to see humanity "replaced" by a bunch of "machines" or "mutants", even superintelligent ones? You love humanity and you don't want to see it obsoleted? Well, tough luck. But just because humans become obsolete doesn't mean you will become obsolete. You are not a human. You are an intelligence which, at present, happens to have a mind unfortunately limited to human hardware. That could change. With any luck, all persons on this planet who live to 2035 or 2005 or next year or whenever - and maybe some who don't - are going to wind up as Powers.

Transferring a human mind into a computer system is known as "uploading"; turning a mortal into a Power is known as "upgrading". The prototypical upload is the Moravec Transfer, proposed by Dr. Hans Moravec in the book Mind Children. The Moravec Transfer gradually moves (rather than copies) a human mind into a computer. You need never lose consciousness. The key assumption of the Moravec Transfer is that we can perfectly simulate a single neuron, which Penrose & Hameroff would argue is untrue. Let's assume that either the laws of physics are computational or we can build a trans-Turing computer that does the same thing a neuron does, to which P&H would have no objection. Note that the details which follow have been redesigned and fleshed out a bit (by yours truly) from the original in Mind Children.

A neuron-sized robot swims up to a neuron and scans it into memory. The computer starts simulating the neuron. The robot waits until the neuron perfectly matches its simulation inside the computer, and then replaces the neuron with itself as smoothly as possible, sending inputs to the computer and transmitting outputs from the simulation of a neuron inside the computer. This entire procedure has had no effect on the flow of information in the brain, except that one neuron's worth of processing is now being done inside a computer instead of a neuron. Repeat, neuron by neuron, until the entire brain is composed of robot neurons whose guts are inside the computer.

Despite this, the synapses (links) between robotic neurons are still physical; robots report the reception of neurotransmitters at artificial dendrites and release neurotransmitters at the end of artificial axons. Phase two replaces the physical synapses with software links. For every axon-dendrite (transmitter-receiver) pair, the electrical inputs are no longer reported by the robot; instead the computed axon output of the transmitting neuron is added as a simulated dendrite to the receiving neuron. At the end of Phase Two, the robots are all firing their axons, but none of them are receiving anything, none of them are affecting each other, and none of them are affecting the computer simulation. Finally, we disconnect the robots. You have now been placed entirely inside a computer, bit by bit, without losing consciousness. In Moravec's words, your metamorphosis is complete.

If either of the phases still seems too abrupt, the transfer of an individual neuron, or synapse, can be spread out over as long a time as necessary. To slowly transfer a synapse into a computer, we can use weighted factors of the physical synapse and the computational synapse to produce the output. The weighting would start as entirely physical and end as entirely computational. Since we are presuming the neuron is being perfectly simulated, the weighting affects only the flow of causality and not the actual process of events. Slowly transferring a neuron is a bit more difficult. The robot would have to surround the neuron and suddenly replace the axons and dendrites with robotic tentacles without disturbing the neural cell body. (That's going to take some pretty fancy footwork!) If this turns out not to be feasible, the robot can enclose the neuron entirely. At this point, the robot accepts weighted outputs from both the neuron and the computer. Once the weighting has shifted entirely to the computer, the neuron is discarded.
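
The weighting scheme itself is one line of code. A minimal sketch, assuming the physical reading comes from the robot and the simulated value from the computer:

// Gradual handover of one synapse: blend the physical neuron's output
// with the simulation's, sliding w from 0 (all physical) to 1 (all
// computational). If the simulation is perfect, the blended output
// never changes - only the flow of causality does.
double synapse_output(double physical, double simulated, double w) {
  return (1.0 - w) * physical + w * simulated;
}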

Assuming we can simulate an individual neuron, and that we can replace neurons with robotic analogues, I think that pretty thoroughly demonstrates the possibility of uploading given that consciousness is a function of neurons. And if we have immortal souls, then uploading is a real snap. Take soul out of brain. Put soul in new substrate. Upload complete.

At this point it is customary to speculate about how one goes about eating, drinking, walking around; people state that they are unwilling to give up physical reality, worry about whether or not they will have sufficient computational power to simulate a hedonistic world of their wildest desires, and so on and so on ad nauseam. Even Vinge himself, discoverer of the Singularity, has gone on record as wondering whether one's true self would be diluted by Transcendence.

I hope that by this point in the page you have been sufficiently impressed by the power and scope and incomprehensibility and general Transcendence of the Singularity that you see these speculations for what they are. If you wish to remain undiluted, you will be able to arrange it. You will be able to make backups. You will be able to preserve your personality regardless of substrate. The only folks who have to worry about being unwillingly diluted are the first humans to Transcend - but we'll take the risk.

Of course, it may be that any being of sufficient intelligence wants to be diluted. Exercising anxiety over that possibility seems spectacularly pointless, analogous to children worrying that, as adults, they will no longer want to be thoughtlessly cruel to other children. If you want it, it's not a wrong that you should worry about.

The human brain has a finite number of neurons and therefore a finite number of states. Eventually, you will die, go into an eternal loop, or Transcend. In the long run... the really long run... mortality isn't an option.

Maybe, after Transcending, you will be a bit changed. If that is so, it is both inevitable and morally right. Given that absolutely nothing you can do will change that, why worry? Save your anxiety for what you can affect.

(The same goes for worries about hostile superintelligences wiping out humanity. I cannot guarantee this won't happen, since I'm not superintelligent. However, I am reasonably (95%) certain that the Powers will be ethical; I just don't know what really is "ethical", the true moral right. If wiping out your creators is morally wrong, it won't happen. If we do get wiped out, it will be because, were we upgraded to superintelligences instead, we would see the moral necessity and commit suicide. If that's the case, there is absolutely nothing we can do about it, both in the long run (see above) and the short run (see below); and, even were we granted the opportunity, it would be morally wrong to take it. Observer-dependent morality is a chimerical artifact of evolutionary competition; it dissolves under the application of sufficient intelligence. If you think it would be wrong to kill you, and you are rationally correct about it, presumably the Powers will see the same argument and you'll be perfectly safe. Again, why worry?)


The Interim Meaning of Life

    "You know, I don't understand why humans evolved as such thoughtless, shortsighted creatures."

    "Well, it can't stay that way forever."

    "You think we'll get smarter?"

    "That's one of the two possibilities."- Calvin and Hobbes

I know. You don't want things to move so fast. You want a slow Singularity, a soft Singularity, one that isn't quite so frightening. You would like all of humanity to enter the first stages together, in a smooth, synchronized Transcendence. You wish for a serene song of rising intelligence, rather than a sudden shock. A gentle seduction.

Believe me, if that were possible, I'd do it. Not for myself - I've already said my goodbyes to life as we know it. I'd do it to reassure all the people on whose cooperation the Singularity depends. My allegiance is to the Singularity, but I don't think it makes much of an intrinsic moral difference when Earth's Singularity happens, as long as it's within the next ten thousand years or so. I am in such a tearing hurry for exactly one reason: Every delay increases the possibility that humanity will exterminate itself before reaching Singularity.

Nanotechnology, in my humble opinion, poses the most urgent threat. Complete control over the molecular structure of matter, via tiny self-reproducing robots, would make it too easy to deliberately wipe out all life on the planet. "Active shields" might suffice against accidental outbreaks of "grey goo", but not against hardened military-grade nano, perfectly capable of attacking active shields with fusion weapons. And yet, despite this threat, we can't even try to suppress nanotechnology; that simply increases the probability that the villains will get it first. (However, most nanotechnologists (e.g. Zyvex) intend to sell generalized assemblers or otherwise publish their success. If you have the opportunity to politely explain to these people that they are being dumbfoundingly, suicidally naive about the benevolence of their fellow humans, by all means do so.)

Although nuclear war would almost certainly leave enough survivors to get civilization started again, I think it would probably alter the balance between AI and nanotech in the wrong direction. A major economic collapse would also slow AI more than nanotech; nanotech requires a single laboratory, while a seed AI requires an Internet and probably thousands of programmers. Those, then, are the three major "deadlines", as I call them: nanotech, nuclear war, and economic collapse.

Yudkowsky's Threats:

  1. The future is not a playground; it is a minefield.
    This isn't to say that there isn't any fun or happiness in the pre-Singularity future. But it isn't a Utopia and you have to be careful.
  2. Blind fear will get you killed even faster than blind enthusiasm.
    This includes all forms of technophobia.  It includes any panic, no matter how terrible the threat.
  3. Attempting to suppress a technology only inflicts more damage.
    It develops unevenly, or the compensating benefits are denied, or someone else gets it first. You can't even try to regulate it or slow it down. This holds true of any technology, no matter how dangerous.
  4. Technologies with military applications are always misused.
    This is human nature. Human nature, to a cognitive engineer, is a thing that can be changed; but until it changes, there will be someone willing to take global risks for personal power.
  5. It is easier to destroy than defend.
    Ever since the invention of nuclear weapons, offensive technology has been overwhelmingly more powerful than defensive technology. Unless a new technology can defend against nuclear weapons produced and augmented by that technology, this will remain true.

Nor is the possibility of destruction the only reason for racing to Singularity. There is also the ongoing sum of human misery, which is not only a practical problem, not only an ethical problem, but a purely moral problem in its own right. Have you ever read P.J. O'Rourke's description of a crack neighborhood? If I had the choice of erasing crack neighborhoods or erasing the Holocaust, I don't know which I'd pick. I do know which project has a better chance of success. I also know that the victims, in retrospect if nothing else, will probably prefer life as a Power to life as a junkie.

Have you ever pondered the Great Questions of Life, the Universe, and Everything? Have you ever wondered whether it really matters, cosmically speaking, if you stay in bed this morning? Have you ever stared into the hard problem of ethics, or consciousness, or reality, and felt yourself slowly going insane as you realized that there is no justification for subjective experience, getting out of bed, or anything existing at all? How can we do anything, set any goals, without knowing the Meaning of Life? How can we justify our continued participation in the rat race if we don't know why we're running? What's it all for?

We don't know. We have to guess, and act on our best guesses. Regardless of the absolute probabilities, superintelligence has a better chance of discovering the true moral right, having the power to implement it, and wanting to implement it. The state where superintelligence exists is, with a very high degree of probability regardless of the True Meaning of Life, preferable to the current state. That's the Interim Meaning of Life, and it works well enough... but it's a long, long way from certainty, or really knowing what's going on!

I have had it. I have had it with crack houses, dictatorships, torture chambers, disease, old age, spinal paralysis, and world hunger. I have had it with a death rate of 150,000 sentient beings per day. I have had it with this planet. I have had it with mortality. None of this is necessary. The time has come to stop turning away from the mugging on the corner, the beggar on the street. It is no longer necessary to close our eyes, blinking away the tears, and repeat the mantra: "I can't solve all the problems of the world." We can. We can end this.

And so I have lost, not my faith, but my suspension of disbelief. Strange as the Singularity may seem, there are times when it seems much more reasonable, far less arbitrary, than life as a human.

I suppose the ancient philosophers might have had it worse - they, after all, had no knowledge of physics. Their world must have seemed arbitrary indeed. Why is this pink lump a hand? Why does it respond to my will? Why is it soft and not hard? But knowing physics changes nothing. The world remains arbitrary. I know my hand is a collection of atoms - but why should I care about this pink lump? What makes anything important? Why? Why? Why?

And while we make and stake our best guesses - "human life", "free will", "new ideas" - the sum of human suffering goes from bad to worse. And I know that there is a better way! Why rationalize this life? Why make it seem bright and happy? There is an alternative!

Our fellow humans are screaming in pain, our planet will probably be scorched to a cinder or converted into goo, we don't know what the hell is going on, and the Singularity will solve these problems. I declare reaching the Singularity as fast as possible to be the Interim Meaning of Life, the temporary definition of Good, and the foundation until further notice of my ethical system.


Getting to the Singularity

This page isn't a call to arms in the ordinary sense. I am not raising a political cause or forming a social group. Perceiving those elements of the mind, I prefer to stay away from them. I do not command. I upgrade my readers' minds until their goals match my own. I suppose that in our insane culture that sounds arrogant, but it's not one-tenth as arrogant as the condescension quietly implicit in exerting coercion.

This page is a call to awareness. "Incoming! Heads up! Watch out! In ten years it's all over!" If this makes you want to join the Cause or rebuild your ethical system or have a midlife crisis or send someone money, wonderful. But the main idea is that you should have trouble keeping a straight face whenever someone talks about "a hundred years down the road", issuing 50-year bonds or retiring in 2030.

That said, what can we do to accelerate the Singularity?

Some of my readers will be neurologists and researchers and programmers; for them, the correct course is to be ready when the Singularity requires them. Make brain-tampering and Artificial Intelligence your hobby, keep abreast of the most modern methods and advances, and maybe even try to contribute your own ideas every now and then.

If you want to help and all you've got is money, probably a lot of researchers on paths to the Singularity are spending valuable time writing grant proposals or doing things that could be done by lab assistants. It would be a fine thing if there were a Singularity Support Foundation to ensure that these people weren't distracted. There is probably one researcher alive today - Hofstadter, Quate, Penrose, Chalmers, Drexler, Lenat, Moravec, Minsky, someone just graduating college, or even me - who is the person who gets to the Singularity first. Although some conceptual breakthroughs may be dependent on the laboratory, others may be dependent on how much time is spent in thought. Every hour that person is delayed is another hour to the Singularity. Every hour, six thousand people die, and most of the survivors are unhappy for an hour. Perhaps we should be doing something about this person's spending a fourth of his time and energy writing grant proposals.
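For scale, the hourly figure above is simply the 150,000-per-day death rate cited earlier, spread over the day; a rough consistency check rather than a new estimate:

\[ \frac{150{,}000 \ \text{deaths per day}}{24 \ \text{hours per day}} \approx 6{,}250 \ \text{deaths per hour} \]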

One major problem faced by a new movement is turning into a cult, a mutual admiration society, or a bunch of crackpots. Drexler barely prevented this from happening to nanotechnology. I have two major hints as to how to avoid it.

First, think of yourself as a business venture. Plan to make a profit. Bootstrap your way to the Singularity instead of begging for spare change. Ask for venture capital and have something specific in mind. There's a lot of money to be made in AI. Even a little bit of semantics in a computer can result in huge increases in usability and power.
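To make that last claim concrete, here is a toy sketch (hypothetical synonym table, hypothetical documents; nothing drawn from any real product) of what "a little bit of semantics" can buy: a keyword search that matches meanings it was never literally given.

    # A toy sketch, not a real system: keyword search that also matches
    # hand-listed synonyms. The synonym table and documents below are
    # hypothetical illustrations.
    SYNONYMS = {
        "car": {"car", "automobile", "vehicle"},
        "fix": {"fix", "repair", "mend"},
    }

    DOCUMENTS = [
        "How to repair an automobile engine",
        "Gardening tips for beginners",
        "Mend a flat tire on your vehicle",
    ]

    def expand(term):
        # Return the term plus any synonyms we know about.
        return SYNONYMS.get(term, {term})

    def search(query):
        # Return every document containing any expansion of any query term.
        terms = set()
        for word in query.lower().split():
            terms |= expand(word)
        return [doc for doc in DOCUMENTS
                if terms & set(doc.lower().split())]

    print(search("fix car"))
    # Finds both repair documents; a literal match on "fix car" finds neither.

A literal keyword search on "fix car" retrieves nothing from those documents; one small table of meanings retrieves both repair articles. Tiny amounts of semantics, large gains in power.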

Second, don't go Utopian. Don't describe Life after Singularity in glowing terms. Don't describe it at all. I think the all-time low point in predicting the future came in the few brief paragraphs of Unbounding the Future that I read, when they described a pedestrian being run over and his hand miraculously healing. That's ridiculous. Pedestrian? Run over? Hand? Biological bodies, cars, in a nanotech world? Why not just hire a bunch of apes to describe the ease of getting bananas with a human mind?

In the words of Drexler:

    "I would emphasize that I have been invited to give talks at places like the physical sciences colloquium series at IBM's main research center, at Xerox PARC, and so forth, so these ideas are being taken seriously by serious technical people, but it is a mixed reaction. You want that reaction to be as positive as possible, so I plead with everyone to please keep the level of cultishness and bullshit down, and even to be rather restrained in talking about wild consequences, which are in fact true and technically defensible, because they don't sound that way. People need to have their thinking grow into longer-term consequences gradually; you don't begin there." [Emphasis mine.]

The problem with people expounding their Utopian visions of a nanotech world is that their consequences aren't wild enough. Looking at stories of instantly healing wounds, or any material object being instantly available, doesn't give you the sense of looking into the future. It gives you the sense that you're looking into an unimaginative person's childhood fantasy of omnipotence, and that predisposes you to treat nanotechnology in the same way.

Worse, it attracts other people with unimaginative fantasies of omnipotence. There's no better way to turn into a bunch of parlor pinks, sipping coffee and planning the Revolution without actually doing anything. I suppose I shouldn't be too harsh on the nano-Utopia types. Some of them may be actual researchers or science-fiction writers or other people doing useful things, some of them may be rank-and-file sincerely trying to make it happen who just got caught in the general lack of imagination, and of course none of them have been to the Low Beyond. Once you've read this page, though, there's no excuse.

This page is about staring into the Singularity. It is about awe, the Beyond, the end of history, and things beyond human comprehension. It is intended to invoke a sense of future, and I hope that my readers will be inclined to view nanotechnology, artificial intelligence, neurology, and all the other paths to the Singularity in the same way - as part of the future. I hope that attracts the right sort of people.

In a moment of insanity, I subscribed to the Extropian mailing list. These people know what "Singularity" means. In theory, they know what's coming. And yet, even as I write [in '97 - they've improved a bit in '99, maybe even due to my prodding], folk who really ought to know better are arguing over whether transhumans will have enough computing power to simulate private Universes, whether the amount of computing power available to transhumans is limited by the laws of physics, whether someone uploaded into a trans-computer is really the same person or just an amazing soybean imitation, and - least believably of all - whether our unimaginably intelligent future selves will still be, er... let's say, "interested in the old-fashioned method of reproduction".

Why is this our concern? Why do we need to know this? Can it not be that maybe, just maybe, these problems can wait until after we're five times as smart and some of our blind spots have been filled? Right now, every human being on this planet who has heard of the Singularity has exactly one legitimate concern: How do we get there as fast as possible? What happens afterward is not our problem, and I deplore those gosh-wow, unimaginative, so-cloying-they-make-you-throw-up, and just plain boring pictures of a future with unlimited resources and completely unaltered mortals. Leave the problems of transhumanity to the transhumans. Our chances of getting anything right are the same as a fish designing a working airplane out of algae and pebbles.

Our sole responsibility is to produce something smarter than we are; any problems beyond that are not ours to solve.

How do we keep the world economy from disintegrating for at least another ten years? How do we keep first-generation nanites from eating the planet? If an enhanced human is evil, intelligent enough to outrun us, and not smart enough to be good, how do we defeat him? If the Internet woke up tomorrow, how would we know, how could we talk to it, and how could we make backups? If a nascent Power with ill intentions originates on the Internet (probably because some moron violated the Prime Directive of AI and imposed Asimov Laws), what containment procedures will be necessary and how much damage could it do? Even these aren't very practical questions, but at least they are our concern, things we may need to deal with using our naked wits.

How do we get a multi-billion-dollar Singularity Support Foundation up and running? Who's willing to fund an AI project? Who do we need to recruit for an AI project? Will OpenSourcing the AI help, and is it safe? How can we disable the standard technophobic backlash? How can we find the pre-existing Specialists needed to create the basic design patterns for the AI?

These are the practical questions that will be faced in the immediate future. The correct questions, and the answers, are the proper concern of mailing lists. I don't object to letting the imagination run free; it may produce useful ideas. But don't get so emotionally involved in it, don't even think about trying to claim that your position has a chance of being correct, and spend your time coding a transhuman AI - or simply making money - instead.
