de Garis on the future
Do cyberwars await an unwary humanity?
by Hugh Ashton
  
What does a "mad scientist" look like? Members of the Kaisha Society got their chance to find out when Dr. Hugo de Garis, the cosmopolitan artificial intelligence expert and doom-meister extraordinaire, gave his presentation on the evolution of artificial intelligences early in December.
The word "evolution" is not chosen at random. Dr. de Garis' specialty is the development of electronic circuits that reprogram themselves, using analytical techniques to eliminate the parts of a circuit that perform poorly, "breeding" and "mutating" until they form the electronic analog of the neural circuits in a biological brain. Because the rate of change is so rapid, and the number of circuits that can be produced so large, Dr. de Garis expects that within a very short time he will be able to produce the brain for a robot kitten that will exhibit many of the characteristics of a living animal.
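The "breeding" and "mutating" described above is, in outline, a genetic algorithm. The toy sketch below illustrates only the general selection-crossover-mutation cycle on bitstrings; the target pattern, population size, and mutation rate are arbitrary choices for illustration and bear no relation to de Garis' actual hardware-based system.

```python
import random

random.seed(0)
TARGET = [1] * 16  # arbitrary "ideal circuit" for this toy example

def fitness(genome):
    # Count positions matching the target; higher is fitter.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Single-point crossover: splice two parents at a random cut.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# Random initial population, then repeated rounds of selection and breeding.
population = [[random.randint(0, 1) for _ in range(16)] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]  # keep the fittest ("eliminate poor performers")
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))  # best fitness found; the maximum possible is 16
```

The point of the sketch is that no one designs the winning genome; it emerges from variation plus selection pressure, which is what makes the approach attractive for circuits too complex to design by hand.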
 Cyber wars? 
 
Given the rate of change of technology, de Garis anticipates that artificial intelligences will become available well within our lifetimes. Such intelligences will first serve as the brains of companions (if anyone doubts this, he points to the popularity of the Furby toys and Sony's robot dog "Aibo") and of household servants. Beyond that, he sees no theoretical limit to the size and complexity of such artificial brains: if one atom can store one bit of information, and femtosecond switching times are available, a "brain" the size of a grain of sand could have a capacity several billion times that of a human brain. These "artilects" (from "artificial intellects") may be regarded as the next stage in the evolutionary process: silicon-based life forms replacing the current carbon-based ones. The very possibility of creating such artilects is controversial, and Dr. de Garis anticipates that humanity will divide into two opposing camps: the "Cosmists," who believe wholeheartedly in taking this evolutionary step, and the "Terrans," who vehemently oppose it on the grounds that such intelligences may turn against their makers. Given the human propensity to expend many lives in defense of a political principle, de Garis feels that the conflict between the two ideologies could result in billions of deaths. This, simplified, is de Garis' position.
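The grain-of-sand figure can be checked with back-of-envelope arithmetic. All numbers below are rough order-of-magnitude assumptions on my part (atoms in a cubic millimeter of silicon, neurons in a human brain), not de Garis' own calculation:

```python
# Rough sanity check of the "grain of sand" capacity claim.
ATOMS_PER_MM3 = 5e19   # assumed: ~5e19 atoms in 1 mm^3 of solid silicon
BITS_PER_ATOM = 1      # the article's premise: one atom stores one bit
HUMAN_NEURONS = 1e11   # common order-of-magnitude estimate

sand_grain_bits = ATOMS_PER_MM3 * BITS_PER_ATOM
ratio = sand_grain_bits / HUMAN_NEURONS  # crudely equating one neuron ~ one bit
print(f"{ratio:.0e}")  # on the order of a billion
```

Under these crude assumptions the ratio comes out around 10^8-10^9, which is consistent with the "several billion times" claim, at least in order of magnitude.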
My first reaction on hearing this was shock, which is exactly what I was meant to feel. On second and third thoughts, I am far from convinced. I cannot pretend to compete with de Garis in the field of artificial intelligence; he is widely regarded, together with three others, as one of the great minds in the field. However, as de Garis himself emphasizes, he is a man with a big ego. He enjoys speaking and, again by his own admission, has given this talk to many audiences over a long period. Part of the purpose of the talk is undoubtedly to generate interest in his work. He is currently employed by the ATR Human Information Processing Laboratories in Kyoto as head of the Brain Builder Group. There are (or will be; de Garis is vague here) seven machines available for the "breeding" of artificial brains. Apparently no more can be produced, as the special chips necessary for building the machines are no longer manufactured by the Silicon Valley company that made them. De Garis, who likes to see himself as a cosmopolitan, cynical geopolitician, is distributing these $500,000 machines according to his whims, playing America and China off against each other, not to mention Japan.
 One objection to 
de Garis' propositions is that the military, of whatever country, will take any 
worthwhile discoveries in artificial intelligence and lock them safely away. Already, 
the military and security agencies of the United States have been sniffing with 
interest at his work. De Garis claims that since there are as yet no tangible 
results from his research, there has been no follow-up from these military quarters. 
In fact, one of the few machines to leave the laboratory has gone to a Belgian 
software company, Lernout & Hauspie, who make voice recognition and machine translation 
software (as it happens, this article is being written using their Voice Xpress 
voice recognition product, which has roots in products developed by Ray Kurzweil, 
another of the world's leading lights in this field).
 As a British citizen, 
de Garis may be immune from some forms of American governmental interference, 
but it is hard to believe that the interest of security agencies is restricted 
to visiting de Garis' website and making occasional trips to Kyoto to talk to 
him in person. It is easier to believe that in some laboratory buried in the bowels 
of Fort Meade (NSA headquarters) or some similar institution, de Garis' work is 
being duplicated, with more or less success. In my student days, I was friendly 
with a psychologist who was researching various aspects of ESP. He and another 
friend were regularly approached by various intelligence agencies wishing to
counter the (mainly mythical) competition from the KGB. Government agencies
do in fact pursue long-term goals, even those which appear to be frivolous or 
on the fringes of science fiction. 
Putting the genie back
Another of de Garis' concerns is that it would be impossible to put the genie back in the bottle once it has been opened. But in the 55 years since the development of the atomic bomb, we have built a very strong container for this particular genie. A mere handful of nations are known to have atomic weapons, a few more are suspected of possessing them, and as far as anyone knows, no terrorist group yet possesses them. And yet the technology to build an atomic bomb is relatively simple. Maybe human beings have more of a self-preservation instinct than de Garis gives them credit for.
 There is another 
more fundamental objection to his arguments. I cannot dispute that it is possible to produce artificial brains of undeniable complexity and power. What I dispute is the claim that these brains are, by definition, intelligent. The definition
of intelligence is one of the great philosophical issues involved here. I do not 
agree with those who claim that a computer is only capable of doing what it has 
been instructed to do--there are already chess-playing programs which are capable 
of defeating their creators, and this is not done through brute-force look-ahead
tactics. It may well be that in the very near future we will see computer programs 
which are capable of writing entertaining fiction on original themes to a much 
higher standard than can be done by those who wrote the software. Other examples 
will spring to mind. One possible instance of machine intelligence in the broadest sense, mentioned by de Garis (though he did not specifically refer to it as such), is the Furby toy. There are instances of
elementary school children learning "Furbish" (the language spoken by Furby toys) 
and using it to communicate with each other, to the bewilderment of their teachers. 
It becomes a moot point which is the learner and which is the teacher in this 
case--do the Furby toys have a secret plan to rule the universe, using their ostensible 
"owners" as the tools with which to do it? 
 But do these examples 
count as "intelligence" in the widest possible sense? Maybe an artilect can write 
a Tom Clancy novel better than Tom Clancy can, but can it write a "literary" novel? 
Maybe it can compose a Bach fugue which is indistinguishable from something written 
in the eighteenth century, but can it find its own original voice in terms of 
music? Will such a machine pass the Turing test, which requires it to be indistinguishable from a human being in its responses to an arbitrary series of questions posed by a human interrogator?
 Actually, does 
any of this matter? If the same logic that wins chess matches at both the strategic and the tactical level applies not only to moving wooden pieces on a board but also to moving human units on a battlefield, we would have an intelligence surpassing that of any human commanding officer, past, present, or future, with a potentially horrifying capacity for destruction. In the current state of affairs, we have an option: we can pull the plug on this super-general before it destroys the troops and resources of both sides through its lack of compassion for carbon-based life forms.
 It seems to me 
that the real danger comes when we start to produce self-replicating machinery 
with an element of intelligence, even if the intelligence is restricted to relatively 
small spheres of operation. Since a certain Darwinian logic is built into the evolutionary mechanism of such an artilect on a micro scale, the instinct for self-preservation may extend to the macro scale, in the same way that it has for multi-cellular organisms, which act so as to preserve and propagate the DNA they contain (if we are to believe Richard Dawkins and the other proponents of the selfish-gene theory).
We do not even have to postulate artificial consciousness, which is a completely different can of worms from artificial intelligence. Blind obedience to the logic imposed by the silicon DNA in an artilect, coupled with a ferociously high level of intelligence and sufficient access to the outside world (in the form of network connections to data resources), could lead an artilect to the logical conclusion that in order to reproduce itself, it needs to build reproductive plants (asexual reproduction) capable of defending themselves against attack (immune systems). I would prefer not to call this consciousness, seeing it instead as a stimulus-response mechanism, complicated by the fact that such an "organism" has virtually infinite (by today's standards) mental resources. On this view, the only difference between an artilect that discovers the capacity for self-reproduction and a simple multi-cellular organism such as a worm is one of scale. Few people will credit a worm with a high degree of consciousness, and at first sight, crediting an artilect with consciousness would seem equally preposterous.
 On the other hand, 
it is possible that to a super-evolved consciousness (either carbon or silicon-based), 
higher mammals (including ourselves) would lack consciousness, and the question 
then arises as to whether consciousness itself is no more than an increasingly 
sophisticated set of responses to stimuli. For now, we have a working definition 
of consciousness which we apply every day to insects, fish, plants, and mammals 
of different species, regardless of the way in which academic philosophers debate 
the word. If we are prepared to extend the range of entities about which we make this judgment to include artilects, we probably do not need to worry too much about how we define consciousness, unless we are deliberately trying to reproduce it artificially. After all, we do not worry about the consciousness of a virus or a bacterium that is capable of reproduction and fulfills that potential in such a way as to cause an epidemic. In the same way, we should not worry about consciousness in an artilect; the real issue is whether the artilect has the potential to reproduce itself, and to protect its ability to do so.
 Predicting doom 
 
One important question remains: why exactly is de Garis traveling the world at high speed predicting doom and destruction, while simultaneously building a prototype of the machine that he claims will supersede humanity, or at the very least cause a conflict that will make the wars of the 20th century seem like mere scuffles? I asked him whether he compared himself to J. Robert Oppenheimer, the father of the atomic bomb, who spent the post-Hiroshima years bitterly regretting his work (to the detriment of his career, when he was labeled a Communist sympathizer in the McCarthy years), or to Edward Teller, the father of the hydrogen bomb, who spent the rest of his life trying to find a reason to use one (and was instrumental in Oppenheimer's downfall). De Garis replied that he did not know which he most resembled, nor could he provide a coherent reason why he was engaged in research that could affect the future of the whole human species.
 De Garis, it appears 
to me, is a brilliant scientist. His explanations of what he was doing, although 
hurried and omitting much detail that I would have liked to hear, nonetheless 
ring true, at least to me with my elementary knowledge of neurophysiology and 
the psychology and philosophy of cognition. When he talked about the political 
and social implications of the work, I felt he was on shakier ground. Prophets of doom always make good headlines, and if those headlines help to publicize the prophet's work and lead to renewed interest and better funding for the project, so much the better. De Garis is not the first, nor will he be the last, research scientist to blunder into political and sociological minefields. Some scientists thrive on committees and commissions, or force their way through the bureaucratic process by sheer force of character (Wernher von Braun at NASA comes to mind; he was once described as having "too big an ego to allow him to fail"). Many others, with the best of intentions, have come seriously to grief when venturing into the wider social implications of their work. To his credit, at least part of de Garis' purpose in running round the world talking to whoever will listen is to prod professional philosophers, sociologists, and others into taking action, so that we are prepared to take a stand on what he regards as the most pressing problem of the 21st century, if not of the whole millennium.
If you feel that de Garis is right regarding the forthcoming evolution of artificial brain-like structures that will possess mental capacities far exceeding anything we can dream of now, or if you feel that he is so wrong that his views, rather than artilects themselves, are a danger to future society, take a look at the following website: http://artilect.org (no www at the beginning of this address), where you can take an active part in the debate.
  
Hugh Ashton is 
a regular contributor to Computing Japan. 