Interview with Dr. Stephen Thaler
Questions by Sander Olson. Answers by Dr. Stephen Thaler.
Dr. Stephen Thaler worked for many years as a solid-state physicist at McDonnell Douglas. Dr. Thaler recently created his own company, Imagination Engines, with the goal of advancing radically new neural net paradigms. His neural net solutions have been successfully used by a number of companies to improve their products and services.
1: Tell us about yourself. What is your background, and what are your current
projects?
My background is in physics. Although my first inclination was to become a
theoretical physicist, several mentors persuaded me to become an experimental
solid state physicist, simply from the standpoint of economics. Thereafter, I
became involved in light scattering from solids, by day, and continued to play
with the mathematics of the solid state by night. It was during this period that
I became captivated with mathematical models of ferromagnetic and ferroelectric
crystals, the precursors to what are currently called artificial neural
networks. After 14 years as a physicist with the now defunct aerospace giant
McDonnell Douglas, I have struck out on my own to invent a broad suite of foundational
neural network patents that are inevitably crucial to the production of
trans-human synthetic intelligence. To perpetuate my very productive
independence from large corporations and academia, I have formed my own company,
Imagination Engines, Inc., for which I serve as President, CEO, and chief scientist.
2: How would you define a "neural network"? How does your concept of a
neural network differ from more conventional theories?
It is a challenge to convey the essence of a neural network when whole volumes have
been written on the subject. However, the following definition serves only as
the ‘mental scaffolding’ leading to a deeper appreciation of the term: An
artificial neural network (ANN) is a collection of interconnected on-off
switches or “neurons,” either simulated in software, or implemented as
hardware. This assembly of neurons stores information and relationships through
the systematic adjustment of the connections joining these switches.
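As a concrete sketch of such an on-off switch, a single threshold neuron can be written in a few lines of Python. This is a purely illustrative toy with hand-picked weights, not any particular system's implementation:

```python
# A single on-off "neuron": fires when the weighted sum of its inputs
# exceeds a threshold. The information lives in the weights and threshold
# (the "connections"), not in the switch itself.
def neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# These hand-picked weights make the neuron compute logical AND.
assert neuron((1, 1), (1, 1), 1.5) == 1
assert neuron((1, 0), (1, 1), 1.5) == 0
```

Adjusting the weights changes what the same switch computes, which is the sense in which a network "stores" information in its connections.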
Now that we have a core notion, allow me to expand on it with some additional detail.
To distinguish my ongoing pedagogical discussion from unconventional theories, I
have marked each observation as “traditional” (fairly accepted), “new
perspective” (a novel way of thinking about ANNs), and “new development”
(derived from IEI patents):
(traditional) A neural network is an input-output mapping that accepts
input patterns (i.e., vectors) and produces associated output patterns.
Typically, any number of intermediate or ‘hidden’ layers are involved.
Information, taking the form of both memories, and the interrelationship between
such memories is stored within the numerical values taken on by connection
strengths. In the brain, the input layers correspond to raw sensory signals,
such as the excitation pattern of retinal ganglia (i.e., image pixels), and
associated thoughts or feelings are represented by the activation patterns at
the network’s outputs.
(a new perspective) That information is stored via the adjustment of
connection strengths between neurons should come as no surprise. After all, we
routinely devise models of bits and pieces of the world through fitting
coefficients within statistical models. Therefore, in modeling linear things and
phenomena, we devise fits of the form y = mx + b through the adjustment of the slope
m and the y-intercept b. In fitting cyclic behaviors, we use the sum of sinusoidal
basis functions and a series of Fourier coefficients, etc. Similarly, an ANN is
a statistical modeling scheme, wherein the adjustable fitting coefficients take
the form of variable connection strengths between neurons. Note however that
there is a powerful difference in the functional form represented by the ANN:
Rather than appearing as a sum of terms,

F(x) = c1F1(x) + c2F2(x) + ... + cNFN(x),

the neural network takes a nested form,

F(x) = FN(FN-1(...F1(x)...)).

Thus the ANN allows the modeling of causal chains through which something
happens, FN, because something else happened, FN-1,
stemming all the way back to some initial event, x.
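The contrast between the two functional forms can be made concrete with a toy sketch; the particular functions here are my own arbitrary choices, purely for illustration:

```python
# Sum of weighted basis terms, as in ordinary curve fitting:
def summed(x, coeffs, bases):
    return sum(c * f(x) for c, f in zip(coeffs, bases))

# Nested composition, the layered form a neural network takes:
def nested(x, layers):
    for f in layers:
        x = f(x)  # each stage acts on the previous stage's result
    return x

double = lambda x: 2 * x
inc = lambda x: x + 1

assert summed(3, [1, 1], [double, inc]) == 10  # 2*3 + (3+1)
assert nested(3, [double, inc]) == 7           # inc(double(3))
```

In the nested form, each stage's output exists only because of the stage before it, which is what makes the form natural for modeling causal chains.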
Within the brain, the very inspiration for the ANN, the absorption of such
causal chains is both fundamental and crucial to survival of the host organism.
In effect, the brain models and then anticipates opportunities and dangers, and
all the causal or correlative chains leading to these life-determining events.
(a new perspective) Practitioners of ANN technology have unconsciously
expanded the definition of a neural network from a collection of “on-off”
switches to that of an interconnected array of processing units that incorporate
a broad range of functional relationships. For example, while still retaining
the overall nested functional form, the individual functions Fi, may,
for instance, be linear or Gaussian. Some neural networks may consist of a
collection of neural network modules, rather than just simple computational units.
(traditional) Whereas the typical curve fitting routines utilize linear
matrix theory to adjust fitting coefficients, the typical neural network,
because of its nested functional form, is not amenable to such techniques.
Instead, one ‘trains’ a neural network through a variety of means, foremost
of which is a celebrated technique called “back-propagation.” In effect, one
could call this process a “mathematical spanking” in that input patterns
propagate through the successive layers of the net, producing at first erroneous
output patterns. Corrective error signals, representing the vectorial difference
between actual and desired outputs, then back-propagate from the output to input
ends of the network, iteratively correcting the connection weights involved via
the partial derivatives prescribed by the learning algorithm.
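A minimal sketch of this training loop, assuming a toy 2-2-1 sigmoid network learning logical AND (chosen only because it converges quickly; this reflects the textbook algorithm, not any particular production implementation):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
# A tiny 2-2-1 network with random initial connection weights.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND

def forward(x):
    h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
    return h, sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

error_before = mse()
lr = 0.5
for _ in range(2000):
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)            # error signal at the output...
        dh = [dy * W2[j] * h[j] * (1 - h[j])  # ...back-propagated to hidden units
              for j in range(2)]
        for j in range(2):
            W2[j] -= lr * dy * h[j]
            W1[j][0] -= lr * dh[j] * x[0]
            W1[j][1] -= lr * dh[j] * x[1]
            b1[j] -= lr * dh[j]
        b2 -= lr * dy

assert mse() < error_before  # the corrective 'spanking' reduced the error
```

The first outputs are erroneous; repeated corrective passes from the output end back to the input end drive the error down.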
(new perspective) As neural networks train, the connection weights
collectively take on the form of ‘logic’ circuits that effectively capture
the implicit rules and heuristics concealed within the input-output pattern
pairs presented to them. In effect, the network is being forced to correctly
connect inputs and outputs, and in so doing, devises a ‘theory’ to account
for the relationships involved. Of course such a theory, constructed from on-off
switches, is unfathomable to humans. Allow me to note that this aspect of
ANNs is the hardest for the typical outsider to accept. The notion that machines
can now devise their own logic and theories, based only upon the presentation of
raw data patterns from the environment, is
key to the development of totally autonomous synthetic intelligence. Otherwise,
hordes of computer programmers must be typing in myriad “if-then” rules in a
process that can hardly be called autonomous!
(new development) If, instead of using simple on-off switches as the
individual processing units, we use neural network modules that incorporate
various analogy bases, then the neural network devises something closer to what
we would call a theory. During
training, only the connections to the more important analogy networks strengthen
in producing an accurate input-output mapping. Those irrelevant or inapplicable
to the mapping erode away. In the end, the neural network transforms into what
looks like a semantic network, connecting the most relevant analogies into a
larger picture. (This is a major departure from the conventional definition of a
neural network, and requires a few patented IEI technologies to build.)
(new perspective) The similarity between the synthetic and biological
neuron is two-fold: (1) the signals arriving at any given neuron are summed or
‘integrated’ within the neuron, and (2) if the integrated input signal
exceeds some threshold, the neuron switches from a silent to active state,
outputting its own signal. My claim is that these two overlapping aspects of
synthetic and biological neural networks are sufficient to create artificial
cognition. After all, we have distilled the essence of flight from the
biological bird, the Bernoulli effect, without having to attach feathers to
aircraft or requiring them to drop messy payloads from the sky. Further, to those
who offer the criticism that synthetic neurons just aren’t complicated enough
to capture human intelligence, I point out that intelligence is not stored in
neurons. It is instead absorbed within connections between neurons!
(new development) My most heretical redefinition of a neural network has
to do with the radical departure from notion 1, the ANN viewed as an
input-output device. The so-called Creativity Machine Paradigm, that I describe
below, works without the presentation of inputs to an ANN. Instead, spontaneous
and unintelligent fluctuations internal to such a network produce very
intelligent outputs. In effect, there goes the old rule of garbage-in,
garbage-out. Instead garbage now yields gold.
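To illustrate the bare mechanism (and only the mechanism: the weights below are hand-set stand-ins for training, nothing resembling the patented systems), here is a toy network given no meaningful input, whose internal noise alone drives a variety of structured output patterns:

```python
import random

random.seed(1)

# Hand-set weights stand in for a "trained" network: 2 hidden -> 3 outputs.
W_OUT = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]

def step(z):
    return 1 if z > 0 else 0

def dream(noise=0.5):
    """One pass with the inputs starved: only internal fluctuations drive it."""
    # Hidden units receive zero external input plus a transient perturbation.
    h = [step(random.gauss(0.0, noise)) for _ in range(2)]
    # Output units see jittered connection weights.
    return tuple(step((W_OUT[k][0] + random.gauss(0.0, noise)) * h[0] +
                      (W_OUT[k][1] + random.gauss(0.0, noise)) * h[1] - 0.25)
                 for k in range(3))

patterns = {dream() for _ in range(200)}
assert len(patterns) > 1  # internal noise alone yields varied structured outputs
```

The outputs are not random bits: they remain shaped by the fixed connection structure, which is the sense in which the noise is filtered into candidate "ideas" rather than gibberish.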
This last concept has caused the most controversy, perhaps unseating our cherished
notion that human cognition is profound. Instead, it may very well be a lowly,
noise-driven process.
3: Tell us about your "creativity machine."
In the late 80’s and early 90’s, I witnessed a very unusual phenomenon in
trained neural networks that went against the grain of what neural network
theorists and practitioners professed. Starving a trained artificial neural
network of any meaningful inputs and then mildly perturbing
the synapses connecting its processing units, the network, to my utter
amazement, produced useful information rather than the anticipated gibberish.
For example, after showing the net human-originated literature and randomly
tickling the net’s synapses, it produced new and meaningful literature.
Allowing the neural network to listen to many examples of top-ten music and
similarly applying internal perturbations, it produced new and palatable
melodies. Exposing the net to thousands of known chemical compounds, and again
stimulating it via synaptic perturbations, the formulas of plausible chemical
compounds astonishingly emerged at its outputs. It dawned on me that here was a
very promising and general search engine for new concepts, one that organized
itself. Since all information in the world may be represented as numerical
patterns, this neural network effect could be used in any conceptual space
imaginable. The only problem at this stage was that the emerging patterns (i.e.,
ideas) were being produced at rates faster than my ability to appreciate them.
What was obviously needed was another computational agent to monitor the former
network. Rather than just write a computer program to serve as a critic of patterns emerging
from the dreaming network, I found it much more convenient to train an
additional neural network. After all, it would be extremely daunting and
time-consuming to write a heuristic algorithm representing, for instance,
literary or musical preferences. For the materials problem, complex quantum
mechanical and thermodynamic theories would need to be enlisted, to map chemical
formulas to properties such as hardness, superconducting critical temperature,
thermal conductivity, etc. The viable alternative was to simply train relatively
small artificial neural networks, within a matter of minutes, to provide the
necessary formula-to-property mappings. Therefore, allowing the literary critic
to monitor the literary dreams of the former net, the best literature could be
extracted. A critic net trained to capture subjective musical preferences was
able to filter out only the most appealing candidate melodies. Likewise,
networks dreaming new potential materials could be monitored, for instance, for
that hypothetical material superconducting the closest to room temperature.
I discovered that the latter critic network could exercise feedback control over
the at first randomly hopping synaptic perturbations within the dreaming net.
What at first was random hopping became systematic, as the system cumulatively
learned where best to place perturbations to optimize the rate of turnover of
new ideas. Over time, I called the dreaming networks “imagination engines”
(IE), and the critic nets, “alert associative centers” (AAC). The embodiment
of both IE and AAC, actively involved in this feedback loop (i.e., a
brainstorming conversation) came to be known as a “Creativity Machine.” Soon
thereafter, many IEs and AACs were combined into compound Creativity Machines,
now capable of carrying out the process of juxtapositional
invention, wherein isolated concepts undergo fusion, while watching critic
nets associate such combinations with some utility or esthetic value.
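The feedback loop just described can be caricatured in a few lines. Here the critic is a plain distance-based scoring function standing in for a trained AAC net, and the seed memory, target, and numbers are all arbitrary assumptions for the sketch:

```python
import random

random.seed(2)

# "Imagination engine": proposes variants of a memorized pattern by perturbation.
# "Alert associative center": scores them (a toy function, not a trained net).
memory = [1.0, 2.0, 3.0]
target = [1.5, 1.5, 3.5]             # what this stand-in critic happens to prefer

def critic(candidate):
    """Higher is better: negative squared distance from the preferred pattern."""
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

best, best_score = memory[:], critic(memory)
noise = 1.0
for _ in range(2000):
    candidate = [v + random.gauss(0, noise) for v in best]
    score = critic(candidate)
    if score > best_score:           # the critic's feedback steers the dreaming net
        best, best_score = candidate, score
        noise *= 0.95                # perturbations become more systematic
assert best_score > critic(memory)   # a useful non-memory emerged
```

The essential point is the feedback: what begins as random perturbation becomes systematic as the critic's judgments shape where and how strongly the generator is perturbed.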
The reader will note that I refer to the Creativity Machine as a paradigm. The CM is
not a software or hardware product, but a fundamental computational principle
that may be implemented in any computer language and on any computational
platform, including nano-scale devices.
For those who are familiar with the field, probably the only precedents for such a
Creativity Machine are recurrent neural networks inspired by
statistical physics: Hopfield nets and Boltzmann machines. Note however, that in
such historically significant schemes, these nets are serving as associative
memories that reconstruct such memorized patterns from incomplete input
patterns. Because these systems are reconstructing memories, they aren’t being
very creative. They are simply serving up the most appropriate, previously
known memory for the problem at hand (e.g., the traveling salesman
problem), perhaps choosing the best alternative via heuristically based
algorithms and/or human-contrived cost functions. In Creativity Machines, the
attractor landscape is being warped and melted to produce new attractors
representing non-memories (ideas). The transient perturbations that are being
applied to this imagination engine serve to both drive new activations across
the output layers and to warp the attractor landscape. Recurrence is unnecessary
and the objective and cost functions used take the form of neural networks,
making it entirely practical to implement very subjective cost functions such as
the musical or literary critics mentioned above.
Another relevant benchmark is the genetic algorithm, a type of discovery system
that mimics the Darwinian concepts of mutation and natural selection. Because
the generation of concepts by such algorithms is blind, often producing a
combinatorial explosion of nonsensical possibilities that inundate the
competition for robustness, the range of problems over which GAs may be applied
is rather limited, with researchers in the area routinely exercising great care
in minimizing the dimensionality (usually less than 10) of the problem at hand.
By virtue of its ability to preserve and to gradually dissolve constraints,
the Creativity Machine, implemented on mere desktop PCs, routinely handles
problems having hundreds or even thousands of dimensions. …Dare I say that
genetic algorithms are made totally obsolete by the Creativity Machine Paradigm?
After all, we think with biological neural networks and not our genetic
apparatus. An even more important distinction to make between genetic algorithms and
Creativity Machines is that there is nothing autonomous about writing a genetic
algorithm. As the name implies, every time a problem is encountered ostensibly
requiring a GA, a programmer must painstakingly write the necessary code and
introspect upon the important constraint relations that are involved. In
contrast, the Creativity Machine is a totally autonomous, one-size-fits-all
solution that builds itself using
IEI’s patented, self-training artificial neural networks. Therefore, if one
intends to build a totally self-reliant form of synthetic intelligence, one
would preferentially harness the CM paradigm.
Before proceeding on to the next question, allow me to observe that the Creativity
Machine is especially intimidating to academics. Think about it: rather than
savoring their graphs, equations, and theses, all they really need to do is
‘boxcar’ two neural networks together and then passively spectate the new
ideas and revelations that emerge! Therefore, the primary obstacles to
acceptance of the CM Paradigm stem not only from economic vested interests in
contraptions such as genetic algorithms, but also from all-too-human pride. I predict that
this vanity will actually stand in the way of human progress.
4: How intrinsic is the link between emotions and intelligence? Could one, in
theory, create a sentient, self-aware, intelligent machine without any emotions?
In my cumulative world view, I don’t identify with the concepts of ‘emotion’,
‘intelligence’, ‘sentience’, and ‘self-awareness’. These terms
effectively represent oftentimes silly and erroneous approximations in
accounting for the world at large, probably the result of just a couple of
pounds of brain mass trying desperately to account for an expansive society and
even vaster physical universe. If I knew exactly what intelligence and emotion
were, then I could make a clear and concise association between the two. In the
meantime, I won’t.
Here is what I can justify saying along these lines... In
the brain, there are only activation patterns. This is what the
neuroscientist sees through functional MRI or PET scans. That these patterns seem
so much more significant to us is because they are being perceived by other neural
networks within the brain. The fact of the matter is that a neural network with
sufficient processing units may be trained to map any pattern to any other. As a
whimsical example of this process, I can easily train an ANN to instantaneously
convert the acoustic input pattern corresponding to the 60’s rock tune
“Innagoddadavida” into the “Star Spangled Banner.”
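As a sketch of that claim (using a one-shot Hebbian linear associator rather than a trained multilayer net, and assuming for simplicity that the input pattern is a unit basis vector), two arbitrary patterns can be wired together exactly:

```python
# Arbitrary stand-ins for the two patterns to be associated.
x = [1, 0, 0, 0]            # "input" pattern (a unit basis vector, by assumption)
y = [0.2, 0.9, 0.4]         # desired "output" pattern

# One-shot Hebbian outer-product rule: w_ij = y_i * x_j.
W = [[yi * xj for xj in x] for yi in y]

# Presenting x recalls y exactly (x has unit squared length).
recalled = [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(y))]
assert recalled == y
```

Nothing about the content of the two patterns matters; the connection weights alone carry the association, which is the point of the argument above.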
In a similar manner, I may readily mentor a neural network to produce an alarm when
it senses fire. Further, I can build additional neural networks to convert this
alarm pattern into visions of death and destruction, as well as an activation
pattern associated with pain. Finally, I can allow at least one additional
neural network to flip the state of all connection weights within the overall
system of neural networks, resulting in a perceptual shift of the system as a
whole. In actual neurobiology, this shift in perception is achieved through the
release of neurotransmitters such as adrenaline.
Now note what the release of diffusing molecules such as adrenaline achieves. It
serves as a source of signal noise within the synaptic clefts, transiently
perturbing the neural networks of the brain. As they are so perturbed, they
induce never before seen activation patterns that begin to deviate from the
overall memory store. In effect, the brain is implementing the Creativity Machine
Paradigm to invent potentially new strategies, which are especially useful
within adrenaline-releasing scenarios such as tigers jumping out of the jungle,
or asteroids hurtling toward earth. In summary, emotion yields (or should I say,
is associated with) creativity, one of the purported hallmarks of intelligence.
The reason that such perturbations are so necessary to creativity stems from the
fact that a quiescent neural network, no matter how large and complex, can only
store memories and relationships through direct exposure to arriving input
patterns from the environment. In effect, such unperturbed networks contain only
rote memories and a few native confabulations that are mathematical artifacts of
training. The introduction of transient perturbations allows ‘dimples’ to
form in the network’s attractor basin structure that may be perceived as good
ideas by the surrounding neural networks of cortex. Thus creativity emerges, as
valuable non-memories (i.e., ideas) arise from the memories.
The crux of the matter is this: the universe is the biggest simulation out there and
has an immense number of degrees of freedom. The brain, whose major function is
to model the external universe, has far fewer degrees of freedom and must
therefore pull some clever tricks to anticipate it. It must warp its
attractor landscape beyond the normal valleys representing its memories to
produce new potential memories that represent candidate world scenarios.
Emotions are the byproduct of this process, through which the typically non-specific
chemical diffusion of neurotransmitters alters perception throughout cortex.
5: Ray Kurzweil argues that rapidly improving brain-scanning technologies will
eventually allow us to reverse engineer the brain. Do you believe that reverse
engineering of the brain will ever be feasible?
From an AI perspective, I feel that we have reached the point of diminishing returns
in reverse engineering the brain. In short, now that the Creativity Machine
Paradigm has been inspired by the brain, we may synthesize all aspects of brain
function, including the internal genesis of new ideas. Of course, in regard to a
medical understanding of the brain, such reverse engineering is critical and
should be regarded as a high priority. However, we may proceed to build
synthetic, trans-human intelligence without grasping the anatomical fine
structure and mechanics involved in the human brain.
6: Roger Penrose argues that the brain employs quantum computing techniques. Do
Yes and no. (By the way, I don’t think you mean quantum computing “techniques”;
Penrose believes that quantum mechanical effects are involved in the process of
consciousness.) I’d like to draw upon a famous quote from Christof Koch: “Quantum mechanics is
mysterious, and consciousness is mysterious. Q.E.D.: quantum mechanics and
consciousness must be related.”
I particularly resonate with this snide syllogism. Equally so, I believe that
the same applies to the brain and quantum computing. Pattern-based computing,
which the brain does so well, may be achieved without recourse to quantum
coherence, collapsing wave functions, and microtubules. Ironically, it may be
achieved with many classical systems, including rubber bands and gears.
It is true that all of the computational architectures we can imagine are based
upon atoms and photons, both of which may be described within a quantum
mechanical framework. However, quantum mechanics, as any good physicist knows,
is a perhaps fanciful analogy that has proven useful in
quantitatively predicting the behavior of matter and energy. In short, we
don’t know that quantum mechanics is the reality. It is only a description and
the last time I looked, descriptions are not causal. They only serve to predict.
If I may be perfectly candid, here is the problem that has created this quantum
quackery. Very mechanistic events are going on in the brain. We’ve all seen
them in the brain scans, as regions light up and subside. These are the low
resolution pictures of what is actually happening in the brain. However, other
neural networks within the brain are perceiving these activation patterns as
bigger than life itself, attaching sanctimonious and vibrant significance to
them. The result is that both the individual and society view cognition as so
much more than it actually is. Therefore, we race to provide a profound
explanation for our own cumulative misperception, leading us back to Koch’s
brilliant, but sarcastic jab.
Note, however, that there is great utility in the brain for stochastic fluctuations in
inducing creative function. The quantum mechanical framework allows for a wide
variety of such noise sources, ranging from quantum mechanical tunneling effects
(leakage of neurotransmitters across the synapse) to spontaneous fluctuation in
cell membrane potentials. However, I emphasize that QM is not the cause; it is a description.
7: How much potential is there for your "creativity machines"? Could
your machines ever develop genuine intelligence and sentience?
What is genuine intelligence and sentience? I know that this is a heretical thing to
say, but where may I find prime examples of these things? I am aware that we
celebrate the ‘human genius’ but how much of that attitude has been nurtured
by societal programming? We should all be cautious of the human race’s
self-aggrandizements because we have not yet found an unbiased outsider to rate
human cognition. Could it be that only a few great thinkers, whom I could probably
count on one hand, have ever achieved intelligence, and if so, only for a few
momentous seconds? Then what about the average human being? Could it be that the
vast majority of humanity is simply preprogrammed, repetitively dealing with the
same matters, in the same ways, each and every day?
Can Creativity Machines duplicate these human behaviors, whatever their splendor? Of
course they can... In the learned opinion of cognitive neuroscience, there are
only three activities going on in the human brain: (1) learning, (2) perception,
and (3) internal imagery. The first two of these functions may be achieved with
standard neural network technology, appropriately scaled. The latter activity is
achievable via the Creativity Machine Paradigm, allowing the system to generate new
activation patterns that differ from those of memories. …That is all.
8: Writers such as Ray Kurzweil and Hans Moravec argue that the human brain is a
million times more powerful than current computers. Marvin Minsky has argued
that a 1 MHz machine could become intelligent and self-aware. Which opinion do
you agree with?
I’m probably much closer to Minsky on that count...
According to my picture of what a brain is, essentially a connectionist model of a
connectionist universe, then the accuracy of that model is contingent upon the
number of connections available through the biological synapses. If the brain
attains an equal number of connections to that of the universe, then it may
perfectly model it. Until then, it must make some gross approximations that
inevitably manifest as false assumptions and inaccuracies.
If connections are the primary gauge of intelligence, then current computers come
nowhere close. However, the break-even point is quickly approaching.
Note that the human brain has a clock cycle of roughly 10 Hz, but its self-proclaimed
intelligence is due to massively parallel computing. So, I don’t think that a
1 MHz computer incorporating even a million interconnects is very intelligent on
a human scale where we see on the order of 100 trillion interconnects. However,
it may be self-aware through any weakly coupled components that intercommunicate
in a feedback loop. The machine may not imagine itself to be Brad Pitt, but each
component is primitively informed of the other’s presence and is baffled about
the exact location of that presence owing to the circular references involved.
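The rough arithmetic behind that comparison, using only the order-of-magnitude figures quoted above, can be written out explicitly:

```python
# Order-of-magnitude figures from the answer above; all are rough.
brain_connections = 100e12       # ~100 trillion interconnects
brain_rate_hz = 10               # ~10 Hz "clock cycle"
machine_connections = 1e6        # a million interconnects
machine_rate_hz = 1e6            # Minsky's 1 MHz machine

brain_ops = brain_connections * brain_rate_hz        # ~1e15 updates/s
machine_ops = machine_connections * machine_rate_hz  # ~1e12 updates/s

# Even granting the machine its speed advantage, the massively parallel
# brain still performs ~1000x more connection-updates per second.
assert brain_ops / machine_ops == 1000.0
```

The point of the comparison is that raw clock speed is a poor proxy; what matters is connection-updates per second, where parallelism dominates.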
9: Can you cite specific
examples of your neural nets’ accomplishments?
It would seem that a technology with as much potential as yours would be
in great demand by both the public and private sector.
Aside from having an immense suite of extremely fundamental and unavoidable neural
network patents, here are some specific projects that I have successfully
undertaken in the last 5 years.
Personal Hygiene Product Design Creativity Machine (1997). Under contract
with an internationally known company located in Boston, IEI built a
Creativity Machine to produce a number of advanced toothbrush designs that could
offer at least 20% improvement in stain removal and depth of penetration between
teeth. This effort contributed to the design of the now famous toothbrush seen
daily on network television.
Theoretical Materials Creativity Machine for US Air Force (1998). Within
a Phase I SBIR effort through the Air Force Research Laboratory’s Materials
Directorate at Wright-Patterson AFB, a Creativity Machine, containing 16
individual neural network modules and approximately 1,000 processing units,
produced an interactive database containing approximately a half million new
potential binary and ternary chemical systems, along with 17 accompanying
chemical and physical properties. Within a period of three months, this system
built itself, through training upon a variety of preexisting chemical databases.
Supermagnetic Composites Creativity Machine for BASIC Research
Corporation (1998). Under contract with BASIC Research Corporation in San Diego,
IEI built a special materials design Creativity Machine to invent new
supermagnetic rare earth iron boride composites and their required processing
paths. On the order of 200 new rare earth boride formulations were prescribed,
each with comparable magnetic properties to neodymium-based borides, but at
nearly half the price!
Retail Sales Creativity Machine (1998). Under contract with a major
beverage manufacturer in St. Louis, IEI built a proof-of-principle Creativity
Machine that prescribes a tailored shelving model that optimizes sales of these
products as a function of the demographics surrounding a given retail outlet.
Built into this Creativity Machine was the ability to skeletonize its neural
networks so as to provide that company with an intuitive explanation facility.
Neural Networks for the Forging Supplier Initiative (1999). Working with
a major foundry in Milwaukee, IEI assisted in training personnel in the use of
neural networks, as well as in building a variety of neural networks and
Creativity Machines to both model and optimize a number of metallurgical processes.
Neural Network-Driven Intelligent Questionnaires for the State of
California (2000). Working with the Legal Aid Society of Orange County (LASOC),
IEI built a prototype, interactive domestic violence questionnaire intended for
public utilization via kiosks and the Internet. In one version of this
proof-of-principle experiment, Creativity Machines dictated the traversal of
forms so as to optimize the user’s experience. Once the user had filled out
this form, a court-admissible complaint form was output.
Warhead Design Creativity Machine for US Air Force (2000). Under the
"Revolutionary Ordnance Technology Initiative" IEI built for the AFRL
Munitions Directorate at Eglin AFB, a Creativity Machine that could design a
warhead on the basis of the fragmentation field and energetics desired from the
Creativity Machine-Based Semantic Parser for Booz-Allen & Hamilton
(2001). Under contract with Booz-Allen and Hamilton, IEI built a fully trainable
text parsing application that was capable of seeking sentence content containing
targeted entities from the Internet and then semantically disambiguating such
sentences so as to classify them within 12 pre-selected conceptual categories.
Furthermore, a cumulative semantic network autonomously formed, then serving as
a convenient text summarization tool.
Satellite Beam Planning for a Major Aerospace Company (2001). IEI has
developed an automated satellite beam planning capability patterned after the
functionality of the Global Broadcast Service. The delivered capability
accurately performed beam planning by optimizing the delivery of multiple types
of broadcast information products to numerous ground receive suites under the
constraints of geographical location and proximity to other receive suites within
the beam footprint.
Perhaps the most noteworthy accomplishment of the Creativity Machine Paradigm was the
invention of a new neural network scheme called the “Self-Training Artificial
Neural Network Object” (STANNO), a totally autonomous self-learning system
that may clone itself ad infinitum to produce swarms of independent neural
networks that may exhaust all potential discoveries within a targeted database.
In this case, we have a prime example of a neural system inventing another
neural system. The resistance to this technology stems largely from a widespread lack of
understanding of just plain ‘vanilla’ neural networks. Although it has taken
about 50 years for the von Neumann paradigm and “if-then” symbolic programming
to catch on, it may take another 10-15 years for the general public to
understand that neural networks effectively write their own computer code. In
the meantime, there are a lot of programmers and venture capitalists out there
making lots of money from this antique technology, and at the moment they can
afford the Madison Avenue types who shout very loudly and drown out people such as myself.
Let me also note that once the essence of the more advanced Creativity Machine sinks
in, denial sets in. After all, human intellect and creativity are very sacred
things. One’s most profound thoughts are not the result of noise in the
machine...therefore, these systems cannot be achieving such lofty function. At
this point the sale or investment is lost. …In summary, I can typically sell
niche applications that do rather amazing things, but default and spiritual
misconceptions about the brain stand in the way of a greater appreciation of the
mammoth scope of the technology.
10: What is your opinion of evolvable hardware? Will we ever be able to use
genetic algorithms to create intelligence and sentience?
I don’t believe in genetic algorithms at all. Again, human beings think with
neural networks and not with their genetic apparatus! (although many have been
accused of doing so). Genetic algorithms emulate the blind processes of mutation
and natural selection that are very wasteful and laborious, usually taking place
over the eons. If genetic algorithms can be credited with anything, it is in
having shown the way to the human brain. …We don’t need to reinvent the
wheel. The lesson learned is that intelligent systems are massively parallel,
neural network based systems. Furthermore, brilliant ideas nucleate upon various
sources of stochastic fluctuation within such systems once they have absorbed
knowledge through their interconnects.
As much as the GA aficionados don’t want to hear this, the only use I foresee for
genetic algorithms is in the adaptation of structures through the competitive
struggle to survive (i.e., sharper claws, faster legs, etc.).
I don’t see this process occurring as a result of human will. The
process will take the form of a competition between evolvable hardware systems
to dominate the scene. Later, these systems will direct their attention toward
the protoplasmic world and the human race.
11: Writers such as Max More, Damien Broderick, and Vernor Vinge argue that
computer hardware technology is advancing exponentially, and that intelligent
machines will soon exist. They
argue that these machines will have intelligences that dwarf human intellects.
Do you see this scenario as likely?
Yes, but they are leaving out some very crucial intermediate steps. How do these
machines attain creativity? Machines just don’t get larger and more complex
and then mysteriously become intelligent and self-aware. That’s what I call a
‘Star Trek’ myth that permeates not only the lay community, but the
whole field of artificial intelligence.
The critical step is definitely the Creativity Machine Paradigm.
12: What are your plans for the future?
My plans for the future are two-fold: (1) to carve out small pieces of the IEI
patent suite to develop niche applications that simply do amazing things for the
consumer, without having to convince them of the profoundness of my theories,
and (2) to advance those lofty theories, in all their glory, to prove that this
is in fact the future of all AI and the underpinnings of human cognition.
To further both endeavors, I am actively proposing and developing what can only be
called a true world brain, wherein the TCP/IP nodes of the Internet are
converted to neurons, forming a global neural network cascade that can then
introspect on human-originated content. In this system, whose number of
interconnects exceeds that of the human brain, creative function will be
achieved through the nucleation of novel patterns upon the noise caused by
inevitable connectivity issues, hosts being booted up and down, and human activity.
To learn more about the coming World Brain, go to http://www.imagination-engines.com/wbcc/wbcc.htm.
Reference: Horgan, John (1996). The End of Science. Addison-Wesley, Massachusetts, p. 173.