In Empathy and AI: Part I, I discussed the possibility of coding for empathy in our imagined AI offspring. The discussion among the commenters was impressive, as so often occurs here, and touched upon many themes worth exploring in more detail. Two themes stood out. First, the likelihood that even if we were able to write code determining the acquisition of consciousness, we could never delineate a wide enough set of "human-like" experiences for our AI. We might be able to simulate Jimmy J's transcendental experience, but we could never do it in a way in which the AI would believe the experience emerged from within its own (organic) conscious mind; it would always be aware that the experience had entered it from outside, through its programming. Second, the idea that mirroring is a necessary part of the process, which I will address shortly.
There are compelling reasons why we would want any AI to contain code for empathy. It would clearly be in our best interests to assure that any AI had empathy for its creators. Without a modicum of empathy we would be in the position of an ant in relation to an elephant: where our interests conflicted with the AI's, our desires would simply never rise to the threshold of its "awareness." Were we able to code for empathy, it would be a very good thing. However, as noted, our empathy rests upon our awareness, both conscious and unconscious, of our shared biology. A machine intelligence could be programmed to simulate such an awareness, and could be programmed to simulate its own "biology," but it would be a simulation, and the difference between a simulation and the real thing matters to our psychology.
There is a fascinating discussion going on in some precincts of the blogosphere, and within cosmology, concerning the possibility that we might all be simulations in a universe-computer. It is an interesting discussion but irrelevant to this topic, for even if we are merely simulations in an uber-computer that runs the universe, we do not know (or "know," in the sense that includes emotional as well as intellectual knowledge) that we are simulations; we think we are real, and that makes all the difference. A computer that believes it is real when it can also know it is simulating a real body is a computer with a lie at its core. Another way to understand the lie would be as a fixed, false belief, that is, a delusion. It may, theoretically, be possible, at the end of a very long process, for a computer intelligence to understand all there is to know about human biology and therefore to appreciate our drives, desires, passions, and so on, but this is highly speculative. In addition, intellectual understanding is never enough for empathy; there must be emotional understanding as well. As for the promise that we could simply program in emotions: these would be machine analogues of our emotions. It may well be that we can quantify and calculate curiosity, for example, but since our minds are incapable of separating ourselves from our affects, and an AI would necessarily be the inverse (it could only simulate affects as an "add-on" rather than directly experience them), empathy would again fail. It is very possible, perhaps likely, that before any final resolution of such an apparent contradiction, our software would begin to exhibit behavioral disturbances analogous to those found in humans when such internal contradictions, i.e., conflicts, become impossible to resolve.
Although we may be approaching a decent working knowledge of the constituents of conscious awareness, including self-awareness, understanding how knowledge translates into consciousness and meaning remains problematic, and it is at that juncture that empathy between our AI offspring and our meager intelligence will arise or fail.
Ray Tallis, writing in New Scientist, suggests that You won't find consciousness in the brain:
Thus measurement takes us further from experience and the phenomena of subjective consciousness to a realm where things are described in abstract but quantitative terms. To do its work, physical science has to discard "secondary qualities", such as colour, warmth or cold, taste - in short, the basic contents of consciousness. For the physicist then, light is not in itself bright or colourful, it is a mixture of vibrations in an electromagnetic field of different frequencies. The material world, far from being the noisy, colourful, smelly place we live in, is colourless, silent, full of odourless molecules, atoms, particles, whose nature and behaviour is best described mathematically. In short, physical science is about the marginalisation, or even the disappearance, of phenomenal appearance/qualia, the redness of red wine or the smell of a smelly dog.
Consciousness, on the other hand, is all about phenomenal appearances/qualia. As science moves from appearances/qualia and toward quantities that do not themselves have the kinds of manifestation that make up our experiences, an account of consciousness in terms of nerve impulses must be a contradiction in terms. There is nothing in physical science that can explain why a physical object such as a brain should ascribe appearances/qualia to material objects that do not intrinsically have them.
Material objects require consciousness in order to "appear". Then their "appearings" will depend on the viewpoint of the conscious observer. This must not be taken to imply that there are no constraints on the appearance of objects once they are objects of consciousness.
Our failure to explain consciousness in terms of neural activity inside the brain inside the skull is not due to technical limitations which can be overcome. It is due to the self-contradictory nature of the task, of which the failure to explain "aboutness", the unity and multiplicity of our awareness, the explicit presence of the past, the initiation of actions, the construction of self are just symptoms. We cannot explain "appearings" using an objective approach that has set aside appearings as unreal and which seeks a reality in mass/energy that neither appears in itself nor has the means to make other items appear. The brain, seen as a physical object, no more has a world of things appearing to it than does any other physical object.
Our understanding of qualia remains rudimentary. The mystery of qualia is roughly analogous to the mystery of quantum mechanics. Just as a quantum particle exists as a statistical distribution and only becomes "real" when observed, qualia represent the condensation of events taking place in a network of neurons, as the brain/consciousness sums a statistical process and precipitates a subjective event. That is surely stretching a metaphor far beyond its usefulness, but we remain mystified as to how attributes attain their subjective sense in our minds.
In 2000, M. E. Tson offered A Brief Explanation of Consciousness, which is germane to the discussion of both consciousness and empathy:
Self-awareness isn’t an all-or-nothing quality that humans are born with and animals are not, but consciousness--the information that an entity can process about itself and the world--is a continuum with bacteria (or physical reactions like rust) at one end and human self-awareness at the other. Nor is our continuum the only one imaginable. Self-awareness is possible whenever basic information processing systems are organized in such a way as to collectively detect, react to, and associate stimuli as a monad or single unit within an environment of interaction and communication with other comparatively structured individuals. Furthermore, each of the innate abilities we outlined (detection, reaction, and association) is conceivably artificially reproducible. There is no theoretical reason why we couldn’t construct an android which, like a baby, was capable of developing self-awareness through experience. Our organs and autonomic responses are unique to species that share our evolutionary history, so our android wouldn’t feel its heart quicken and muscles tighten when it was startled (although it might, in a similar way, detect changes in its internal energy level and readiness.) It might not express its emotions through laughter or crying. Nonetheless, it could come to be aware of itself and of its place in the universe in a sense that would be different than--yet still comparable to--our own.
Note that Tson essentially suggests we cannot automatically code for consciousness or empathy (a particular subset of consciousness). Even in his theoretical android, consciousness must evolve. Somewhere between the hardware of our biology and/or silicon and the stimuli that impinge upon us, consciousness appears. In humans, the very lengthy process of separation/individuation, which occurs prior to the acquisition of a conscience and morality, allows the infant personality to understand itself by seeing itself reflected in the mirror of the mother's mind. It is a slow process requiring an extremely large number of iterations.
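Tson's three innate abilities are concrete enough to sketch. What follows is my own toy illustration, not Tson's, of an entity that detects stimuli, reacts to them, and associates successive stimuli over many iterations; every name in it is hypothetical:

```python
from collections import defaultdict
import random

class MinimalAgent:
    """Toy illustration of Tson's three innate abilities."""
    def __init__(self):
        # Association: counts of which stimulus tends to follow which.
        self.associations = defaultdict(lambda: defaultdict(int))
        self.last_stimulus = None

    def detect(self, environment):
        # Detection: register a stimulus from the environment.
        return random.choice(environment)

    def react(self, stimulus):
        # Reaction: a fixed, innate response keyed to the stimulus.
        return "orient-toward-" + stimulus

    def associate(self, stimulus):
        # Association: strengthen the link between successive stimuli.
        if self.last_stimulus is not None:
            self.associations[self.last_stimulus][stimulus] += 1
        self.last_stimulus = stimulus

    def step(self, environment):
        stimulus = self.detect(environment)
        self.associate(stimulus)
        return self.react(stimulus)

agent = MinimalAgent()
for _ in range(10000):  # "an extremely large number of iterations"
    agent.step(["light", "sound", "touch"])
# The association table now encodes the environment's crude regularities;
# in Tson's account this kind of accumulated structure, not any single
# mechanism, is the raw material from which self-awareness develops.
```

Nothing in such a loop is conscious, of course; the point is only that the ingredients Tson names are mechanically reproducible, while the developmental process he describes is not captured by any one of them.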
We might one day be able to code for an incipient AI to have curiosity. It would "look for" novelty, perhaps in a manner suggested by Jürgen Schmidhuber (sketched roughly below). Coding for the process of empathy would be more problematic, since it would be impossible to specify the mirroring object for the AI. The first AI would be unique. Were it to be mirrored by its creators in a process similar to the mirroring between a mother and child (though with a much steeper curve of acquisition, i.e., much faster, and with no way to ensure a wide enough variety of mirroring experiences), the primary emotional reaction mirrored might well be awe, or fear. Such mirroring does not bode well for the offspring so mirrored.
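To make the curiosity idea concrete: in Schmidhuber's account, an agent is rewarded not for raw novelty but for improvement in its ability to predict or compress its observations. The following is a minimal sketch under that assumption; the predictor, the signals, and all names are my own toy stand-ins, not Schmidhuber's formulation:

```python
import random

class CuriousObserver:
    """Toy curiosity signal: reward = reduction in prediction error."""
    def __init__(self, learning_rate=0.1):
        self.lr = learning_rate
        self.prediction = 0.0   # running estimate of the next observation
        self.last_error = None  # prediction error on the previous observation

    def intrinsic_reward(self, observation):
        error = abs(observation - self.prediction)
        # Curiosity reward: how much did the predictor improve since last time?
        reward = 0.0 if self.last_error is None else self.last_error - error
        self.last_error = error
        # Learn: nudge the prediction toward what was actually observed.
        self.prediction += self.lr * (observation - self.prediction)
        return reward

observer = CuriousObserver()
learnable = [5.0] * 50  # a regular, learnable signal
print(sum(observer.intrinsic_reward(x) for x in learnable))  # substantial

observer = CuriousObserver()
noise = [random.uniform(0.0, 10.0) for _ in range(50)]  # unlearnable noise
print(sum(observer.intrinsic_reward(x) for x in noise))  # near zero on average
```

The design choice is the point: a regular, learnable signal yields a burst of reward that fades as it is mastered, while unlearnable noise yields, on average, none, which is roughly the behavior one would want from a novelty-seeking drive.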
Finally, if our AI offspring do successfully acquire empathy, i.e., if their minds are like our minds in some fundamental ways, then it would be quite wise for us to consider the myriad ways in which such a process could go awry. If we cannot successfully code for empathy, our offspring will be far more alien than we might expect, an eventuality equally to be prepared for. In Part III, I will offer some speculations along these lines.