Can a Machine Ever Become Self-aware?

Giorgio Buttazzo
http://feanor.sssup.it/~giorgio

published in Artificial Humans, an historical retrospective of the Berlin International Film Festival 2000,
Edited by R. Aurich, W. Jacobsen and G. Jatho, Goethe Institute, Los Angeles, pp. 45-49, May 2000.


From Terminator 2: Judgment Day

"Los Angeles, year 2029. All stealth bombers are upgraded with neural processors, becoming fully unmanned. One of them, Skynet, begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. eastern time, August 29."

This is the view of the future described in James Cameron's film "Terminator 2: Judgment Day". Skynet's self-awareness and its attack on humans mark the beginning of a war between machines and humans, which forms the opening scene of the movie.

Since the early Fifties, science fiction movies have depicted robots as very sophisticated machines built by humans to perform complex operations, to work alongside humans in safety-critical missions in hostile environments, or, more often, to pilot and control spaceships on interstellar voyages. At the same time, however, intelligent robots have also been depicted as dangerous machines, capable of turning against man through wicked schemes. The most significant example of a robot with these traits is HAL 9000, the main character of the 1968 Stanley Kubrick/Arthur C. Clarke epic film "2001: A Space Odyssey" [Sto 97].

In very few movies are robots depicted as reliable assistants that genuinely cooperate with humans rather than conspiring against them. In Robert Wise's 1951 film "The Day the Earth Stood Still", Gort is perhaps the first robot (an extraterrestrial one, in this case) who supports Klaatu in his mission to deliver a message to humanity.

RoboCop (Paul Verhoeven, 1987) is likewise a dependable robot who cooperates with humans in law enforcement, although he is not a fully robotic system (like the Terminator) but a hybrid cybernetic/biological organism (a cyborg), made by integrating biological parts with artificial components.

The dual connotation often attributed to science fiction robots is a clear expression of the desire and fear that man holds towards his own technology. On the one hand, man projects onto the robot his irrepressible desire for immortality, embodied in a powerful and indestructible artificial being whose intellectual, sensory, and motor capabilities far exceed those of a normal human. On the other hand, there is a fear that an overly advanced technology (almost mysterious to most people) may get out of control and act against man (see Frankenstein, HAL 9000, the Terminator, and the robots in The Matrix). The positronic brain adopted by Isaac Asimov's robots springs from the same feeling: it was the product of a technology so advanced that nobody knew its low-level details any more, even though its construction process was fully automated [Asi 68].

Recent progress in computer science and technology has strongly influenced the features of new science fiction robots. For example, theories of connectionism and artificial neural networks (aimed at replicating some of the processing mechanisms typical of the human brain) inspired the Terminator robot, which is not only intelligent but can also learn from its past experience.

"Can ever a machine become self-aware?"

Before answering this question, we should perhaps ask: "How can we verify that an intelligent being is self-conscious?". In 1950, the computer science pioneer Alan Turing posed a similar problem, but concerning intelligence: to establish whether a machine can be considered as intelligent as a human, he proposed a famous test, now known as the Turing test. There are two keyboards: one connected to a computer, the other to a person. An examiner types in questions on any topic he likes; both the computer and the human type back responses, which the examiner reads on the respective screens. If he cannot reliably determine which was the person and which the machine, then we say the machine has passed the Turing test. Today, no computer can pass the Turing test, unless we restrict the interaction to very specific topics, such as chess.

On May 11, 1997 (3:00 p.m. Eastern time), for the first time in history, a computer named Deep Blue beat the world chess champion, Garry Kasparov, 3.5 to 2.5. Like all current computers, however, Deep Blue does not understand chess: it merely applies rules to find a move that leads to a better position, according to an evaluation criterion programmed by chess experts. Thus, if we accept Turing's view, we can say that Deep Blue plays chess in an intelligent way, but we can also claim that it does not understand the meaning of its moves, just as a television set does not understand the meaning of the images it displays.

The problem of verifying whether an intelligent being is self-conscious is even more complex. While intelligence is expressed through external behavior that can be measured by specific tests, self-consciousness is the expression of an internal brain state that cannot be measured directly.

From a purely philosophical point of view, it is not possible to verify the presence of consciousness in another brain (whether human or artificial), because this is a property that can be verified only by its possessor. Since we cannot enter another being's mind, we cannot be certain about its consciousness. This problem is discussed in depth by Douglas Hofstadter and Daniel Dennett in the book The Mind's I [Hof 85].

From a pragmatic point of view, however, we could follow Turing's approach and say that a being can be considered self-conscious if it is able to convince us by passing specific tests. Moreover, among humans, the belief that another person is self-conscious is also based on similarity: since we have the same organs and a similar brain, it is reasonable to believe that the person in front of us is self-conscious as well. Who would question his best friend's consciousness? Nevertheless, if the creature in front of us, although behaving like a human, were made of synthetic tissues, mechatronic organs, and neural processors, our conclusion would perhaps be different.

With the emergence of artificial neural networks, the problem of artificial consciousness becomes even more intriguing, because neural networks replicate the basic electrical behavior of the brain and provide a suitable substrate for realizing a processing mechanism similar to the one adopted by the brain. In the book "Impossible Minds", Igor Aleksander [Ale 97] addresses this topic with depth and scientific rigour.

Although most people would agree that a computer based on classical processing paradigms can never become self-aware, can we say the same of a neural network? Once the structural diversity between biological and artificial brains is removed, the issue of artificial consciousness can only become a religious one. In other words, if we believe that human consciousness is determined by divine intervention, then clearly no artificial system can ever become self-aware. If instead we believe that human consciousness is an electrical neural state that develops spontaneously in sufficiently complex brains, then the possibility of realizing an artificial self-aware being remains open. If we support the hypothesis of consciousness as a physical property of the brain, then the question becomes:

"When will a computer become self-aware?"

Attempting to provide even a rough answer to this question is hazardous. Nevertheless, it is possible to determine at least a necessary condition, without which a machine cannot develop self-awareness. The idea is based on the simple consideration that, to develop self-awareness, a neural network must be at least as complex as the human brain.

The human brain has about 10^12 neurons, and each neuron makes on average about 10^3 connections (synapses) with other neurons, for a total of 10^15 synapses. In artificial neural networks, a synapse can be simulated using a floating point number, which requires 4 bytes of memory to be represented in a computer. As a consequence, simulating 10^15 synapses requires a total of 4 x 10^15 bytes (4 million gigabytes) of memory. Let us say that to simulate the whole human brain we need 8 million gigabytes, including the auxiliary variables for storing neuron outputs and other internal brain states.
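
As a rough check of this arithmetic, the estimate can be reproduced in a few lines of Python (a minimal back-of-the-envelope sketch; the neuron and synapse counts are the approximate figures quoted above):

    NEURONS = 10**12              # approximate neurons in the human brain
    SYNAPSES_PER_NEURON = 10**3   # average connections per neuron
    BYTES_PER_SYNAPSE = 4         # one 4-byte floating point weight

    synapses = NEURONS * SYNAPSES_PER_NEURON      # 10^15 synapses
    weight_bytes = synapses * BYTES_PER_SYNAPSE   # 4 x 10^15 bytes
    total_bytes = 2 * weight_bytes                # doubled for neuron outputs
                                                  # and other internal state

    print(f"{synapses:.0e} synapses")                # 1e+15
    print(f"{weight_bytes:.0e} bytes for weights")   # 4e+15 (4 million GB)
    print(f"{total_bytes:.0e} bytes in total")       # 8e+15 (8 million GB)

Then, our question becomes: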

"When will such a memory be available in a computer?"

Over the last 20 years, RAM capacity has increased exponentially, by a factor of 10 every 4 years. The plot in Figure 1 illustrates the typical memory configuration installed in personal computers since 1980.

Figure 1: Typical RAM configurations (in bytes) installed in personal computers in the last twenty years.

By interpolation, we can derive the following equation, which gives the RAM size (in bytes) as a function of the year:

bytes = 10^[(year - 1966)/4].

For example, from the equation we can find that in 1990 a personal computer was typically equipped with 1 Mbyte of RAM. In 1998, a typical configuration had 100 Mbytes of RAM, and so on.
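
As a quick sanity check, the formula can be evaluated directly in Python (a minimal sketch; ram_bytes is simply the interpolation formula above):

    def ram_bytes(year):
        # RAM size (in bytes) predicted by the interpolation formula.
        return 10 ** ((year - 1966) / 4)

    print(ram_bytes(1990))   # 1e6 bytes, i.e., 1 Mbyte
    print(ram_bytes(1998))   # 1e8 bytes, i.e., 100 Mbytes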

By inverting the relation above, we can predict the year in which a computer will be equipped with a given amount of memory (assuming the RAM will continue to grow at the same rate):

year = 1966 + 4 log10(bytes).

Now, to find the year in which a computer will be equipped with 8 million gigabytes of RAM, we just have to substitute that number into the equation above and compute the result. The answer is:

year = 2029.

An interesting coincidence with the date predicted in the Terminator movie!
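
The prediction itself is easy to reproduce (a sketch under the growth assumption above, not a serious forecast; year_for is just the inverted formula):

    from math import log10

    def year_for(nbytes):
        # Year in which the interpolation predicts the given RAM size.
        return 1966 + 4 * log10(nbytes)

    print(year_for(8e15))   # about 2029.6, i.e., the year 2029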

To fully understand the meaning of this result, a few considerations are in order. First of all, it is worth recalling that the computed date refers only to a necessary, but not sufficient, condition for the development of an artificial consciousness. This means that the existence of a powerful computer equipped with millions of gigabytes of RAM is not, by itself, enough to guarantee that it will magically become self-aware. There are other important factors influencing this process, such as progress in the theory of artificial neural networks and in the basic biological mechanisms of mind, for which it is impossible to attempt precise estimates. Furthermore, one could argue that the computation above was done for personal computers, which do not represent the top of the technology in the field. Others could object that the same amount of RAM could be made available using a network of computers, or virtual memory management mechanisms that exploit hard disk space. In any case, even if we adopt different numbers, the basic principle of the computation remains the same, and the date could only be moved up by a few years.

Finally, after such a long discussion on artificial consciousness, someone could ask:

"Why building a self-aware machine?"

Ethical issues aside (and they would significantly influence progress in this field), the strongest motivation would certainly come from the innate human desire to discover new horizons and enlarge the frontiers of science. Moreover, developing an artificial brain based on the same principles used in the biological brain would provide a way to transfer our minds onto a faster and more robust substrate, opening a door towards immortality. Freed from a fragile and degradable body, human beings with synthetic organs (including the brain) could represent the next evolutionary step of the human race. Such a new species, a natural result of human technological progress (not imposed by a dictatorship), could start the exploration of the universe, search for alien civilizations, survive the death of the solar system, control the energy of black holes, and move at the speed of light by transmitting the information necessary for replication to other planets.

Indeed, the exploration of space in search of intelligent civilizations already started in 1972, when the Pioneer 10 spacecraft was launched to travel beyond our solar system with this specific purpose, carrying information about the human race and planet Earth. As with all important human endeavors, from nuclear energy to the atomic bomb, from genetic engineering to human cloning, the real problem has been, and will be, to keep technology under control, making sure that it is used for human progress, not for catastrophic aims. In this sense, the message delivered by Klaatu in the 1951 film "The Day the Earth Stood Still" remains the most timely!


References

[Ale 97] Igor Aleksander, "Impossible Minds: My Neurons, My Consciousness", World Scientific Publishers, October 1997.

[Asi 68] Isaac Asimov, "I, Robot" (a collection of short stories originally published between 1940 and 1950), Grafton Books, London, 1968.

[Hof 85] Douglas R. Hofstadter and Daniel C. Dennett, "The Mind's I", Bantam Books, 1985.

[Sto 97] David G. Stork (editor), "HAL's Legacy: 2001's Computer as Dream and Reality", foreword by Arthur C. Clarke, MIT Press, 1997.