One of the more important discussions going on today rarely makes the news, yet the questions of whether true Artificial Intelligence is possible, and whether such an AI will be more intelligent than H. sapiens, may well be answered in our lifetimes. I am not a computer scientist, nor am I a neuroscientist, but it seems to me that both groups pay insufficient attention to the role of the dynamic unconscious in our psychology; since our psychology is, at least on some level, being modeled by current AI approaches, neglecting to factor in the dynamic unconscious may introduce confounding variables.
[I wrote about this in The Singularity and the Unconscious in 2007; that post remains relevant to this discussion.]
Michael Anissimov has written extensively on the difficulty of ensuring "friendly AI." Once recursively self-improving AI initiates (sometime in the 2020s according to Ray Kurzweil, though since it depends on factors that cannot be conclusively determined in advance, any date must be considered highly speculative), we will have very little ability to determine the direction of its morality. In Abandoning the Ghost of Moral Realism, Anissimov argues that there can be no non-mundane value system around which to develop AI morality, and he notes the danger of such a human-based value system:
One fundamental challenge here, in my view, is that any proposed mechanism for self-correction at a high level of abstraction produces more opportunities for a catastrophic failure of Friendliness. For instance, programming an AI to analyze a wide selection of possible extrapolation and aggregation parameters and use some objective standard (presumably derived from the region of greatest coherence among previous extrapolation and aggregation attempts) to update moral preferences. The simpler the goal system, the easier it will be to update and revise without the risk of it drifting way off course. The problem is that the simplest possible goal system for success may have to be very complex, even in its basic principles.
Michael Anissimov has explicitly rejected religion as a source of morality; he is a proud atheist and a scientist. However, in this brief excerpt he underscores the danger of basing an AI's morality on a system that has no reference to an independent, external, and fundamental morality, i.e., a Deity-based morality. This needs amplification.
For humans, even the most rational and refined, to develop a moral system based on their own understanding of what a universal moral system would involve risks the danger inherent in all human endeavors: "unconscious leakage."
Our unconscious desires are always seeking gratification. As such, our Executive Function Apparatus (the Ego) is constantly seeking ways to manage our conflicting Id-derived gratifications in forms acceptable to the current state of our Ego/Superego system. Our manifest behavior represents the summation of our competing unconscious determinants with an overlay of conscious rationalization and minimal conscious direction. Morality, which largely resides in our Superego, is a weak but important contributor to controlling unacceptable unconscious impulses. When morality is considered to derive solely from man, it is correctly understood to be, like any other man-made convention, susceptible to rational attack, i.e., rationalization. Man-made morality tolerates both conventional "sinning" and despotism to a far greater degree than a G-d-based system does. This is not an argument for the existence of G-d, about whom I am relatively agnostic (though I find Pascal's wager compelling), but an argument for taking advantage of a system of morality that has served us well in enabling modern civilization.
Beyond the utility of G-d-based morality is the recognition that, through the blood and effort of millions of martyrs, we have arrived, in Judeo-Christian morality, at a workable system in which morality, which is always absolute, has become tempered with mercy and tolerance. Moral systems that have their roots firmly in human rationality and our Superegos, and that explicitly deny any privileging of humans by a greater (higher) power, tend to be exquisitely intolerant. Consider the Communist morality that doomed millions of "counter-revolutionaries"; consider the Global Warming zealots who believe they know that, in order to save the planet, we must control people's behavior. The more extreme, and more honest, among them admit that their prescriptions for saving the planet may eventuate in the deaths of millions (perhaps billions), but this is merely the price to be paid for the greater good. Human moral systems typically value the systems more highly than the humans they seek to control.
We would be foolish to jettison a system that has helped us control our more damaging impulses for the last several thousand years out of the conceit that we are fully rational and immune to the allure of our own unconscious wishes.
BW at Next Big Future points to another data point (and new data points arrive almost daily) on an alternative route to Superintelligent AI:
Brain Scan Mind Reading of Spatial Information Progress
New Scientist reports: Scans of the part of the brain responsible for memory have for the first time been used to detect a person's location in a virtual environment. Using functional MRI (fMRI), researchers decoded the approximate location of several people as they navigated through virtual rooms. The work is continuing, using more precise scans to discern what someone was doing, where they are or were, and where they plan to go.
This finding suggests that more detailed mind-reading, such as detecting memories of a summer holiday, might eventually be possible, says Eleanor Maguire, a neuroscientist at University College London.
"This is a very interesting case because it was previously believed impossible to decode [spatial] information," says John-Dylan Haynes, a neuroscientist at the Bernstein Center for Computational Neuroscience in Berlin, Germany.
As our scanning becomes more refined and our scanners become more intimately connected to our biology, a merger of hardware and wetware is a distinct possibility. It is likely that any Superintelligence that arises from such a hybrid will be primarily based on the human host. It is arguable whether this would be more or less likely to produce an unfriendly AI.
The classic explication of the effect of the unconscious mind on superintelligence can be found in Forbidden Planet. Dr. Morbius was obsessed with discovering what happened to the Krell, the race that had built a supercomputer that could grant their every wish. Dr. Ostrow uses the Krell machine to boost his intelligence and understands the danger just before he dies:
Doc Ostrow: Morbius was too close to the problem. The Krell had completed their project. Big machine. No instrumentalities. True creation.
Commander John J. Adams: Come on, Doc, let's have it.
Doc Ostrow: But the Krell forgot one thing.
Commander John J. Adams: Yes, what?
Doc Ostrow: Monsters, John. Monsters from the Id.
Commander John J. Adams: The Id? What's that? Talk, Doc!
[Doc slumps and dies]
Commander John J. Adams: Doc?
Commander John J. Adams: [sadly] Oh, Doc. Doc.
Many of our most brilliant scientists take it as a matter of faith that they are ruled by rationality. However, they remain human, and that is both a comforting and a frightening thought.