My last two posts, especially The Social Singularity, dealt with issues that may well seem rather esoteric, far from the concerns most of us have to confront. There is a Science Fiction quality to discussions of the Singularity, Social or Technological (and ultimately the two aspects cannot be clearly distinguished), but the pace of change has already accelerated so much that we may well be in the early stages of profound change. Certainly, to a learned and sophisticated European of the 19th century, we are already beyond the event horizon of the Singularity. I would suggest that we are all living in a time when the event horizon is rapidly approaching, though, like the theoretical astronaut approaching a black hole, a participant observer will never be able to tell exactly when we pass it.
The Singularity Institute for Artificial Intelligence correctly believes that the single most significant aspect of technological advance will be in the realm of artificial or augmented intelligence. Their blog is an excellent place to explore some of these concepts. For example, today Michael Anissimov discusses What is Intelligence?, while last week Mitchell Howe considered Artificial Consciousness, two subjects of great interest to me. Mitchell Howe points out an interesting conundrum for the AI community:
What about consciousness? The sense of being one person and not somebody else? The sense of being a captive and concerned audience to the happenings of your particular self?
I think Singularitarians like myself tend to ignore this question — not because it’s unimportant, but because we don’t see it as relevant to our core concerns. Conscious or not, recursively self-improving artificial intelligence presents a risk and opportunity impossible to understate.
...
The implication that tends to follow here is that even if AI is possible, AC — Artificial Consciousness — may not be. Rather than jump right in to a rebuttal, however, let me toss this question to the cognitive science experts in the room: To what extent, if any, are human brain functions relating to creativity and aesthetic appreciation linked to those behind our perceived senses of identity and free will? Has the research progressed far enough to tell us?
I’m willing to hazard that an assumption this widespread could be a reflection of an actual relationship in the hardware, at least in the case of humans. So I’m not so quick to dismiss the above reasoning entirely.
But regardless of how research answers that question, we must be wary of assuming that mental capacities are linked in human brains because of properties inherent to intelligence-in-general.
We must remember that our brains are not so different from those of species unable to appreciate a good Picasso. Our newer abilities are owed to existing brain structures recently re-purposed by small evolutionary changes. [Emphases mine- SW]
Actually, this contains a bit of an internal contradiction. As has been noted, AI will in short order be so different from human intelligence as to be of another nature altogether. Yet the "small evolutionary changes" noted above have in fact led to extremely significant morphological changes, including completely new structures in the brain, and it is these new structures that have supplied the substrate from which our intelligence has emerged.
However, a larger issue I have with the question of AI Consciousness is somewhat more subtle. This morning I received a note from Warren Bonesteel which included a link to an article by Bill Hibbard on The Technology of Mind and a New Social Contract. Bill Hibbard is a serious thinker:
Bill Hibbard is a Senior Scientist at the Space Science and Engineering Center of the University of Wisconsin - Madison. He works on visualization and machine intelligence. He is principal author of the Vis5D, Cave5D and VisAD open source visualization systems. Vis5D was the first open source 3-D visualization system and is the leading system for animated 3-D visualization of weather simulations. Cave5D is the most widely used software system for scientific visualization in immersive virtual reality. VisAD is the leading visualization system written in Java. Bill Hibbard is also author of the book Super-Intelligent Machines and several articles about the Technological Singularity.
He points out that we are already beginning to see augmented intelligence; after all, what is the internet except a way to augment our ability to gather, remember, and rearrange data, often through the intermediation of others who share our interests and may bring different approaches to the same information?
[Digression: When the financial industry notices, we are well into the trend:
Leveraging the Internet for Idea Generation: Only the Creative Need Apply
I have long marveled at the power of the Internet for idea generation. In fact, much of my writing has been in this vein. But what I haven't really discussed is how the 'Net can also be used to design new products and services, and I am proud to announce that one of my portfolio companies is doing just that.
Clear Asset Management, the algorithmic asset management firm and creator of innovative ETF products, has launched a contest involving three schools (Columbia, Lehigh and NYU) and Facebook. The goal: come up with creative, high-value ideas for new ETF products.
End Digression]
There was one item in Hibbard's article that caught my attention and suggests to me a quite significant, barely acknowledged risk:
The most familiar measure of intelligence is IQ, but it is difficult to understand what a machine IQ of a million or a billion would mean. A practical measure of intelligence, as we develop machines much more intelligent than humans, is the number of people that a mind can know well. This is roughly 200 for humans (Bownds 1999). It is no secret that Google is working hard to develop intelligence in their enormous servers, which already keep records of the search histories of hundreds of millions of users. As their servers develop the ability to converse in human languages, these search histories will evolve into detailed simulation models of our minds. Ultimately, large servers will know billions of people well. This will give them enormous power to predict and influence economics and politics; rather than relying on population statistics, such a mind will know the political and economic behavior of almost everyone in detail. As often pointed out, intelligence cannot be measured by a single number. But one measure of a mind's intelligence, relevant to power in the human world, is the number of humans the mind is capable of knowing well. [Emphases mine-SW]
Hibbard spends the bulk of his article discussing how we might go about developing a new social contract so that the new Machine Superintelligence will regard us as a parent regards his or her child, with the well-being of the child of paramount importance. His prescriptions are important and need to be considered, yet he omits a crucial point. It is indeed true that one measure of intelligence is the number of minds we know well; of even more significance, our intelligence develops out of the mind-to-mind interface between parent and child. Eventually other minds are brought within the purview of the developing child. Of even greater significance, however, is that our minds develop, in large part, from the interface between the unconscious mind of the child and the unconscious mind of the adult.
When Google saves the search histories of hundreds of millions, and eventually billions, of people, it is saving a collection that reflects a vast quantity of desires derived from the unconscious, uncensored mind. The internet is filled with dehumanizing pornography of sex and violence, in part because the usual Superego constraints on the unbridled seeking of infantile gratification are relaxed by the imagined anonymity of the internet. Sex and violence may well be the predominant features of the internet at large. (Anecdotally, I have read that somewhere north of 30-40% of all search queries are related to sex.) If, as many believe, a mind, a Superintelligence, emerges from this interaction, such a mind will have little way of setting a value on our unshackled passions' pursuit of gratification.
Perhaps those who imagine they can direct the evolution and emergence of such a mind believe they can factor in a "healthy" machine Superego; the problem is that, as with all aspects of the mind, most of the operations of the Superego are unconscious, unavailable to conscious manipulation. This is as true for the programmers as for any other human being. The Law of Unintended Consequences is a recognition that humans have a genius for missing the mark; granted, this is not always due to the influence of the unconscious, but a fair amount of the time it is precisely related to the workings of the unconscious.
The nascent field of NeuroPsychoanalysis has found much in the burgeoning Neuroscience research to support Psychoanalytic concepts. Unless we attend as much to the structure of our own minds as we do to the hardware and software of our computing devices, we risk evolving Superintelligent minds which include, and neglect to account for, their own "monsters from the Id."