Slow and steady progress is being made in unraveling one of the great mysteries in science: the nature of consciousness. A European group is building a model of the brain:
Building a Brain on a Silicon Chip
A chip developed by European scientists simulates the learning capabilities of the human brain.
An international team of scientists in Europe has created a silicon chip designed to function like a human brain. With 200,000 neurons linked up by 50 million synaptic connections, the chip is able to mimic the brain's ability to learn more closely than any other machine.
Although the chip has a fraction of the number of neurons or connections found in a brain, its design allows it to be scaled up, says Karlheinz Meier, a physicist at Heidelberg University, in Germany, who has coordinated the Fast Analog Computing with Emergent Transient States project, or FACETS.
The hope is that recreating the structure of the brain in computer form may help to further our understanding of how to develop massively parallel, powerful new computers, says Meier.
This is not the first time someone has tried to recreate the workings of the brain. One effort called the Blue Brain project, run by Henry Markram at the Ecole Polytechnique Fédérale de Lausanne, in Switzerland, has been using vast databases of biological data recorded by neurologists to create a hugely complex and realistic simulation of the brain on an IBM supercomputer.
FACETS has been tapping into the same databases. "But rather than simulating neurons," says Meier, "we are building them."
The FACETS approach assumes that neurons are essentially simple elements: they either fire or they don't, depending on a reliable set of rules that is part of their structural endowment. In this model, neurons are directly analogous to programmable computer chips that sit in either a binary 1 or 0 state. Many computer scientists and neuroscientists believe that the brain is simply a complex parallel-processing computer; they may be right. What is most provocative about this approach is that it may generate testable hypotheses that can be answered within the next decade or two.
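The neurons-as-simple-elements idea can be made concrete with the classic McCulloch-Pitts threshold neuron, which fires (1) or stays silent (0) according to a fixed rule. This is an illustrative abstraction of the model described above, not the FACETS chip's actual analog circuitry; the weights and thresholds below are arbitrary choices for the example.

```python
def neuron(inputs, weights, threshold):
    """Fire (1) if the weighted sum of inputs reaches the threshold, else 0."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

def tiny_network(inputs):
    """Wire three simple neurons into an exclusive-or: each element is
    trivial, but the connections produce behavior no single neuron has."""
    hidden = [
        neuron(inputs, [1, 1], 1),   # fires if either input is on (OR)
        neuron(inputs, [1, 1], 2),   # fires only if both are on (AND)
    ]
    return neuron(hidden, [1, -1], 1)  # OR but not AND: exclusive-or

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(f"XOR({a}, {b}) = {tiny_network([a, b])}")
```

The point of the sketch is the one the FACETS model makes: the individual element is binary and rule-bound, and all the interesting behavior lives in how the elements are connected.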
Michael Anissimov asks, and answers, a related question:
From where I’m standing philosophically, the answer is “obviously not, our particular emotions are contingent aspects of human intelligence which exist for specific evolutionary reasons”.
Intelligence and consciousness, ie self awareness, are related phenomena with an undetermined degree of overlap. I would extend Michael's question, which I think doesn't go far enough:
Is self awareness, ie Consciousness, necessary for General Intelligence?
Three years ago I discussed Consciousness and Conscious Robots; I pointed out that Roger Penrose and Ray Kurzweil are at odds on the question of consciousness:
One of the great mysteries of science is the origin and the nature of consciousness. In The Emperor's New Mind, Roger Penrose suggested that consciousness was "non-computable", that is, so complex that it could not be adequately described by any executable algorithm. However, as our understanding of the brain and its behavior has progressed, computer scientists have become more and more adept at modeling the brain (what Ray Kurzweil has called "reverse engineering" the brain), and surprises are now to be expected.
In the same post, I linked to a story about Japanese researchers developing a robot that appears to be able to differentiate self and other:
Robot Demonstrates Self Awareness
Dec. 21, 2005— A new robot can recognize the difference between a mirror image of itself and another robot that looks just like it.
I noted that the ability to differentiate self from other is a key component of what we consider self awareness, a co-variant with consciousness:
The researchers, a team led by Junichi Takeno at Meiji University in Japan, are using an evolutionary approach to the formation of neural networks (analogous to "mirror neurons") that can learn to differentiate between the robot itself and another robot, even if the other robot is identical to it. At the moment the robot is ~70% accurate in determining whether it is looking at itself in a mirror or at another robot. This is an impressive and hugely significant feat.
One of the most important steps necessary for consciousness to exist is the ability to differentiate self and other. It is the earliest differentiation an infant has to make. One of the tests of individuation is the mirror test: can an animal (besides man) differentiate its own mirror image from the image of another animal? Very few animals have unequivocally passed the mirror test; passing it is a necessary precondition for the ability to self-reflect, to think about oneself and one's behavior and how it affects an independent other person. Without the ability to self-reflect there can be no sense of agency, ie the sense that one can affect one's environment.
The next, and more crucial, question is at what point such differentiation becomes consciousness.
I described my puppy, now an 85-pound bundle of desire and affection, as having some nascent consciousness but lacking the ability to self-reflect. Dogs generally fail the "mirror test" and as a result are not considered fully conscious. Nonetheless, my dog has agency. He "knows" (whatever that means in this context) that if he performs certain actions he will evoke certain acts that are gratifying to him.
At the moment, I tend to disagree with Michael's comment; I do not think it is so obvious that we can have General Intelligence without emotions. In order to solve problems I believe there must be desire.
[Note that much depends on how one defines the limitations of problem solving. A computer can be instructed to solve problems, in which case the agency belongs to a human or a derivative of a human being. At what point do we ascribe agency to the computer? This can become as thorny a question as what consciousness is and how one exhibits it.]
I suspect you cannot have General Intelligence without self awareness, and you cannot have self awareness without emotion. Emotions are mental representations that arise at the juncture between the mind and the body. In other words, need states arise in the body and are translated into feeling states by the mind. Without needs or desires, there is no explanation for General Intelligence. Without desire there are no problems to be solved. This raises the question for me of whether General Intelligence and Consciousness can arise without a biological body from which desire can arise.
This goes to the key question that the computer scientists and the neuroscientists are groping toward: Can the mind exist independent of the body? Do we need biology in order to have mind?
There is as yet no answer to this question (and those who insist they know the answer are expressing their faith, not their reason) and I will continue to be fascinated by the search for the answer.
PS. I hope I am wrong about this since the ability to "back up" our minds in hardware could be quite useful.