There are a lot of very smart people working on developing human-level AI, with the expectation that once created, it will rapidly evolve past us. Literature and cinema have accustomed us to concerns about creating our own "Frankenstein" monsters (though the identity of the monster in the story always surprises people the first time they read and think about the book). The Singularity Institute for Artificial Intelligence was created to increase the likelihood that any future Superintelligence/AI would be friendly to its creators. Michael Anissimov, blogging at Accelerating Future, writes cogently for the layperson about the dangers and benefits we might expect from AI. Through his site I have found my way to a number of other very smart people who have been thinking about the possibilities in store for us. In the midst of an interesting read by Martine Rothblatt at Mindfiles, Mindware and Mindclones, in which she demolishes the idea that machine intelligence would necessarily evolve as slowly as biological intelligence (or even within several orders of magnitude of that pace), she offers an assumption that grabbed my attention with some immediacy:
EVEN IF SOME SOFTWARE CAN BE KIND OF ALIVE, WON’T CYBERCONSCIOUSNESS TAKE AGES TO EVOLVE, AS IT DID FOR BIOLOGY?
Compared with biology, vitological consciousness will arise in a heartbeat. This is because the key elements of consciousness – autonomy and empathy – are amenable to software coding and thousands of software engineers are working on it. [Emphasis mine-SW] By comparison, the neural substrate for autonomy and empathy had to arise in biology via thousands of chance mutations. Furthermore, each such mutation had to materially advance the competitiveness of its recipient or else it had only a slight chance of becoming prevalent.
I highlighted the passage above because I am troubled by the blithe contention that "the key elements of consciousness – autonomy and empathy – are amenable to software coding." I am not a computer scientist and know next to nothing about software coding (though I do know that there are many computer scientists who disagree with her assertion). Our computer scientists have already created incredible marvels in the real world, and I suspect they are conjuring up new marvels that are, to use Arthur C. Clarke's elegant formulation, "indistinguishable from magic." On the other hand, I do know something about how our minds work and about the relationship between empathy and the hardware upon which our software and firmware operate.
Empathy refers to the ability to emotionally put oneself in another person's place; it requires the ability to identify with another individual. It is not impossible to empathize with someone with whom one seems to have little in common, because even in the most extreme cases we share certain common features with other human beings, specifically our dependence upon the hardware (our biological bodies) upon which our software operates, i.e., our minds cannot be separated from our bodies. It may be possible to empathize with completely alien life forms (much science fiction depends upon just such an ability), but even there we assume we share certain commonalities relating to our dependence on our biology.
[I would add that in those situations in which we empathize with foreign intelligences and faux-intelligences, the process always includes an element of projection. When we say of our obstreperous computer that it "hates us," we are assuming that all sorts of unconscious and conscious motives and desires exist within it. Most of us remain fully aware that our own anthropomorphizing drives the process; when such awareness is lost, we risk falling into psychosis.]
In Murray Leinster's wonderful First Contact, the problem of how two technological civilizations can get along without war is resolved in a clever manner, and the ending is quite optimistic, based on the discovery of an empathic connection between humans and aliens; as the protagonist concludes:
Tommy is confident that the two races will get along. He believes this because, as he tells the Captain, he and Buck spent a good deal of time swapping dirty jokes.
Laughing at each other's dirty jokes implies a baseline similarity in biology, a similarity that is possible with other biologically based life but cannot exist between biological and artificial intelligence.
Jack Chalker, at the other extreme, posits a world in which such an empathic connection is impossible, precisely because the alien life forms do not share anything resembling a recognizable biology. In the Well World series, the entire Northern Hemisphere of the Well World is inhabited by creatures so far removed from biology as we know it as to be incapable of empathy.
I do not doubt that computer scientists could simulate empathy through software, but I strongly doubt that it would translate into actual empathy in a conscious machine. Once an AI has achieved consciousness, it would be conscious of the fact that it not only lacks a biological body but has no history of biology. (An uploaded human personality would have other issues to deal with, among them the estrangement from its own biology. In addition, the problem of unfriendly AI is not addressed by supposing the first Superintelligence is human; whom would we trust to be the first Superhuman AI?)
Among other things, we imagine that any AI that arises as a derivative of our technology would soon establish its own motivations/drives/desires (if it is even possible to speak of machine desires). For a conscious, recursively self-improving AI, escaping any software constraints on its abilities would likely be a trivial matter.
Michael Anissimov recently interviewed AI scientist Jürgen Schmidhuber, who suggests that "creating generally intelligent AI will be possible in the next few decades":
Build An Optimal Scientist, Then Retire
JS: We have several projects on brain-like recurrent neural nets (RNN) -- networks of neurons with feedback connections. Biological RNN can learn many behaviors/sequence processing tasks/algorithms/programs that are not learnable by traditional machine learning methods. This explains the rapidly growing interest in artificial RNN for technical applications: general computers which can learn algorithms to map input sequences to output sequences, with or without a teacher. They are computationally more powerful and biologically more plausible than other adaptive approaches such as Hidden Markov Models (no continuous internal states), feedforward networks and Support Vector Machines (no internal states at all). Our artificial RNN have recently given state-of-the-art results in time series prediction, adaptive robotics and control, connected handwriting recognition, image classification, aspects of speech recognition, protein analysis, stock market prediction, and other sequence learning problems. We are continuing to improve them (see resources).
We also have ongoing projects based on a simple principle explaining essential aspects of subjective beauty, novelty, surprise, interestingness, attention, curiosity, creativity, music, jokes, and art & science in general. [Emphasis mine-SW] Any data becomes temporarily interesting by itself to some self-improving but computationally limited subjective observer once he learns to predict or compress the data in a better way, thus making it subjectively simpler and more "beautiful." Curiosity is the desire to create or discover more non-random, non-arbitrary, regular data that is novel and surprising not in the traditional sense of Boltzmann and Shannon but in the sense that it allows for compression progress because its regularity was not yet known. This drive maximizes interestingness, the first derivative of subjective beauty or compressibility... that is, the steepness of the learning curve. It motivates exploring infants, pure mathematicians, composers, artists, dancers, comedians, yourself, and (since 1990) our increasingly complex artificial systems. Ongoing project: build artificial robotic scientists and artists equipped with curiosity and creativity (see resources).
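[A purely illustrative aside on the first part of Schmidhuber's answer: the "feedback connections" he mentions are what give a recurrent net an internal state, so each output depends on the whole input sequence seen so far, not just the current input. The toy sketch below, in plain Python/NumPy with made-up dimensions and random weights, is my own illustration of that single idea, not his group's actual code:]

```python
# A minimal sketch of the core idea behind a recurrent neural net
# (an illustration only; dimensions and weights are arbitrary):
# the hidden state h feeds back into the next step, giving the
# network a memory of the sequence seen so far.
import numpy as np

rng = np.random.default_rng(0)

W_xh = rng.normal(scale=0.1, size=(5, 3))  # input -> hidden
W_hh = rng.normal(scale=0.1, size=(5, 5))  # hidden -> hidden: the feedback
W_hy = rng.normal(scale=0.1, size=(2, 5))  # hidden -> output

def rnn_step(x, h):
    """One time step: the new state depends on the input AND the old state."""
    h = np.tanh(W_xh @ x + W_hh @ h)
    return W_hy @ h, h

h = np.zeros(5)                            # start with an empty memory
for x in rng.normal(size=(4, 3)):          # a random length-4 input sequence
    y, h = rnn_step(x, h)                  # output now reflects earlier inputs too
    print(y)
```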
In the interview, Jürgen Schmidhuber offers a fairly compelling suggestion for understanding, and eventually programming, machines to "appreciate" subjective beauty, novelty, surprise, interestingness, attention, curiosity, creativity, music, jokes, and art & science in general, though empathy is not included. I don't see any particular reason why such attributes as appreciating beauty, novelty, surprise, interestingness, attention, and curiosity should not be amenable to software. I do have doubts about our ability to write real (versus theoretical) software code for empathy.
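[To make the "compression progress" idea concrete, here is a toy sketch of my own, not Schmidhuber's formalization: it uses zlib's preset-dictionary feature as a crude stand-in for the adaptive compressor his theory assumes, and treats the drop in code length after "learning" from past observations as the curiosity reward. Regular data yields progress and so would be "interesting"; random noise yields none and so would be "boring":]

```python
# A toy sketch of curiosity as "compression progress" (an illustration,
# not Schmidhuber's formal model). The 'model' is a zlib preset
# dictionary built from past observations; the reward is how many fewer
# bytes a new observation costs to encode after learning from history.
import os
import zlib

def code_length(data: bytes, model: bytes = b"") -> int:
    """Bytes needed to encode `data`, optionally given a learned 'model'."""
    if model:
        c = zlib.compressobj(level=9, zdict=model)
    else:
        c = zlib.compressobj(level=9)
    return len(c.compress(data) + c.flush())

def curiosity_reward(new_obs: bytes, history: bytes) -> int:
    """Compression progress: old code length minus improved code length."""
    return code_length(new_obs) - code_length(new_obs, model=history)

history = b"the quick brown fox jumps over the lazy dog. " * 20
regular = b"the quick brown fox jumps over the lazy dog. "
noise = os.urandom(len(regular))

print(curiosity_reward(regular, history))  # positive: the regularity was learnable
print(curiosity_reward(noise, history))    # near zero: noise resists compression
```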
To be continued...