New technology often leads to exponential increases in human knowledge. Consider how the discovery of the X-ray eventually gave rise to an entire new field of human endeavor, allowing investigation of the internal workings of the body. Or consider the transistor, which led to the personal computer and, eventually, the internet.
The Neurosciences are now in the early stages of an exponential increase in knowledge about the workings of our brains, and a potentially commensurate increase in knowledge about our minds. The fMRI (functional MRI), the PET scan, and the increasingly sophisticated computerized sensor arrays now coming online will facilitate great leaps in understanding. Already, reports from the frontiers of Neuroscience have made their way into the MSM; such reports typically sensationalize the research in ways which would (or should) embarrass the authors, and they also tend to trivialize the difficulties ahead in turning the rudimentary knowledge being accumulated into genuine understanding. For example, recent reports that scientists can, via their computers and sensors, "read" the mind of a subject are extreme exaggerations. Moving from the ability to distinguish, via patterns of neuronal firing, between two distinct viewed images to anything resembling actual mind reading requires such an exponential increase in complexity that the suggestion brings to mind Wolfgang Pauli's famous retort to a nonsensical paper: "This isn't right. This isn't even wrong."
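To make concrete just how modest those "mind reading" results are, here is a toy sketch of the kind of two-way pattern classification such studies perform. Everything in it is my own assumption for illustration: the data are simulated, the voxel counts and noise levels are arbitrary, and the nearest-centroid rule stands in for whatever classifier a real study might use.

```python
# Toy sketch of two-way "brain decoding" on simulated data.
# Nothing here uses real fMRI; the voxel counts, noise level, and
# classifier are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 500          # pretend voxel activations per scan
n_trials = 100          # scans per stimulus category
signal = 0.1            # small mean difference between the two images

# Simulated activation patterns for "image A" and "image B" trials.
pattern_a = rng.normal(0.0, 1.0, size=n_voxels)
pattern_b = pattern_a + rng.normal(0.0, signal, size=n_voxels)
trials_a = pattern_a + rng.normal(0.0, 1.0, size=(n_trials, n_voxels))
trials_b = pattern_b + rng.normal(0.0, 1.0, size=(n_trials, n_voxels))

# Train on half the trials, test on the other half, using a
# nearest-centroid rule: which average pattern is a new scan closer to?
train_a, test_a = trials_a[:50], trials_a[50:]
train_b, test_b = trials_b[:50], trials_b[50:]
centroid_a, centroid_b = train_a.mean(axis=0), train_b.mean(axis=0)

def predict(x):
    return "A" if np.linalg.norm(x - centroid_a) < np.linalg.norm(x - centroid_b) else "B"

total_test = len(test_a) + len(test_b)
correct = sum(predict(x) == "A" for x in test_a) + sum(predict(x) == "B" for x in test_b)
print(f"Accuracy distinguishing two fixed images: {correct / total_test:.0%}")
# Telling two known images apart above chance is a far cry from
# reconstructing arbitrary thoughts -- hence the Pauli quip above.
```

Guessing better than chance which of two known pictures a subject is looking at is a real achievement, but it is a categorically different problem from reconstructing the open-ended contents of a mind.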
Theodore Dalrymple made many of the same points in an article in the New English Review last year: [HT: Sigmund, Carl, & Alfred]
Do the Impossible: Know Thyself
I attended a fascinating conference on neuropsychiatry recently. Neuroscience, it seems to me, is the current most hopeful candidate for the role of putative but delusory answer to all Mankind's deepest questions: what is Man's place in Nature, and how should he live? What is the good life, at least in the western world?
...
During the conference, I heard one of the best lectures I have ever heard by a professor at the Salpetriere in Paris.
(This hospital, of course, has one of the most distinguished histories in neurology of any hospital in the world.) Not only did the professor speak brilliantly, with wit, learning and charm, but he showed astonishing before and after videos of patients treated surgically for a variety of conditions, from Parkinson's disease to Gilles de la Tourette's syndrome. It was difficult then not to succumb to a sort of euphoria, that consisted of the belief that at last we really did understand, at least in principle, what it was to be a human being. This was further reinforced by neuroimaging studies showing the areas of the brain that were active when a man in love perceives his beloved: the neurological basis of romantic love, as it were. Somewhat disappointingly for romantics, the parts of the brain that are activated during the encounter are primitive from the evolutionary point of view, and present in the pigeon and the lizard.
...
Nevertheless, several speakers strongly implied that with the exponential growth of neuroscientific research, we were about to understand ourselves to a degree unmatched by any previously living humans. I confess that, whenever I heard this, I thought of the old proverb about Brazil: that it is, and always will be, the country of the future.
At the very end of the conference, a well-known professor of philosophy was brought in to confirm that man's self-understanding would soon advance by leaps and bounds, thanks to the neurosciences. The professor was a man of great erudition, and spoke fluently without notes, with enormous and beguiling wit. Many times before, he said, Man had believed that he understood himself; this time, it was going to be true.
Theodore Dalrymple's discussion revolved around two points:
Two main questions arose in my mind during the neuropsychiatric conference. The first was whether any scientific self-understanding was possible. The second was whether, if possible, it was desirable. My answer to both questions was, and is, no.
Read his essay to see how he arrives at his conclusions. I would answer the questions slightly differently. First, I would partially agree that scientific self-understanding is likely to be impossible. I doubt we will ever be in a position to fully understand our own minds in all their complexity. In The Emperor's New Mind, Roger Penrose did an adequate job of convincing me that the self-awareness that comprises a mind may well be non-computational; in other words, and to the limits of my understanding, the complexity is such that no system could completely compute every possible state of every possible element within the confines of that system (a version of Gödel's incompleteness theorem).
[At the same time, Ray Kurzweil might well insist that once we have passed the Singularity and turned all matter into "computronium," such computations could be accomplished. Of course, this raises the question of how such increasingly complex computers could ever overtake the computational complexity of their makers and of themselves.]
As to Dalrymple's second point, the desirability of such self-knowledge, I could not be in greater disagreement with his contention that such an outcome is undesirable. He relies on David Hume for support:
In my opinion, the great philosopher David Hume understood why human self-understanding was forever beyond our reach. It is not a coincidence that he always expressed himself with irony, for the deepest irony possible is that of the existence of a creature, Man, who forever seeks something that is beyond his understanding.
Hume was simultaneously a figure of the enlightenment and the anti-enlightenment. He saw that reason and consideration of the evidence are all that a rational man can rely upon, yet they are eternally insufficient for Man as he is situated. In short, there cannot be such a thing as the wholly rational man. Reason, he said, is the slave of the passions; and in addition, no statement of value follows logically from any statement of fact. But we cannot live without evaluations.
Ergo, self-understanding is not around the corner and never will be. We shall never be able seamlessly to join knowledge and action. To which I add, not in any religious sense: thank God.
Allow a slight digression at this point. Several years ago a new entrant arrived in the discussion of the Fermi Paradox, which asks, "If there are intelligent extraterrestrial races, where are they?" Robin Hanson raised an important question:
The Great Filter - Are We Almost Past It?
Humanity seems to have a bright future, i.e., a non-trivial chance of expanding to fill the universe with lasting life. But the fact that space near us seems dead now tells us that any given piece of dead matter faces an astronomically low chance of begetting such a future. There thus exists a great filter between death and expanding lasting life, and humanity faces the ominous question: how far along this filter are we?
Combining standard stories of biologists, astronomers, physicists, and social scientists would lead us to expect a much smaller filter than we observe. Thus one of these stories must be wrong. To find out who is wrong, and to inform our choices, we should study and reconsider all these areas. For example, we should seek evidence of extraterrestrials, such as via signals, fossils, or astronomy. But contrary to common expectations, evidence of extraterrestrials is likely bad (though valuable) news. The easier it was for life to evolve to our stage, the bleaker our future chances probably are.
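The filter logic is easy to make concrete with a back-of-the-envelope calculation. The sketch below is mine, not Hanson's: the list of steps and every probability in it are made-up placeholders, chosen only to show how an apparently empty sky constrains the product of the step probabilities.

```python
# Toy illustration of the Great Filter argument (all numbers are
# made-up placeholders, not estimates from Hanson's paper).

# Hypothetical "filter steps" between dead matter and a civilization
# that expands to fill the galaxy, each with an assumed probability
# of being passed by any given star system.
filter_steps = {
    "suitable planet": 1e-1,
    "abiogenesis": 1e-3,
    "complex (eukaryotic-like) life": 1e-2,
    "intelligence and technology": 1e-2,
    "avoiding self-destruction and expanding": 1e-1,
}

p_expand = 1.0
for step, p in filter_steps.items():
    p_expand *= p

n_stars = 1e11  # rough order of magnitude for stars in our galaxy
expected_expanders = p_expand * n_stars

print(f"Per-system chance of producing an expanding civilization: {p_expand:.1e}")
print(f"Expected expanding civilizations in the galaxy: {expected_expanders:.1e}")
# With these seemingly plausible numbers we would expect on the order
# of a hundred expanding civilizations -- yet we see none. So at least
# one step, behind us or ahead of us, must be far harder than assumed.
```

The ominous part of the question is whether the hard step is safely behind us or still waiting in our future.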
Until the middle of the last century, it seemed that only a limited variety of extremely rare events had the potential to destroy all intelligent life on planet Earth. Even the worst plagues that evolved in our biosphere would leave some survivors, which left it to such events as asteroid impacts, a nearby supernova, or some other exotic galactic event to eradicate humanity. That changed with the detonation of the first nuclear warhead; we had gained the power to extinguish humanity from the earth. The last 60-plus years have included many periods of existential dread.
In 2000, Bill Joy, in the pages of Wired magazine, discussed Why the future doesn't need us. He pointed out that several then-nascent technologies were so powerful as to represent a genuine threat to humanity. Further, it was clear that such technology would eventually give mere individuals the ability to destroy the human race.
Throughout human history there has been a constant struggle between those forces which arise within us to build and those which tend to destroy. The positive, progressive impulse has been just sufficiently greater than the destructive that, compounded over centuries and millennia, the human race has survived and thrived until now.
We have already seen what people can do with box cutters and commitment when motivated by an unmerciful G-d. Unless we can learn to understand ourselves better, there will come a time when the entire race will have to deal with madmen and true believers who have the power to destroy all. This is the subtext of the Long War, even as both those who conduct it and those who rail against it are often unaware of the future risks that are accruing.
Further, and equally important, sometime in the next 20-30 years we are going to reach the point at which our computers achieve greater computing power than the human brain. At that point we had better hope and pray that any nascent AI is friendly. It is a humbling thought to recognize that any AI may well be our heir; further, decisions being made today, in hardware architecture and software design, will shape the future development of any potential AI. If the programmers and hardware engineers proceed blissfully unaware of the Monsters from the Id, we can only imagine what mischief can ensue from those with curiosity and the best of intentions.
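Where does a figure like "20-30 years" come from? Here is the usual back-of-the-envelope arithmetic, with the caveat that both inputs below are rough, contested, Kurzweil-style guesses of my own choosing rather than measurements:

```python
# Back-of-the-envelope projection of the kind behind "20-30 years"
# claims. Both numbers below are rough, contested assumptions.
import math

brain_ops_per_sec = 1e16      # a common order-of-magnitude guess for the brain
machine_ops_per_sec = 1e12    # assumed starting point for a single machine
doubling_time_years = 1.5     # Moore's-law-style doubling assumption

years_to_parity = doubling_time_years * math.log2(brain_ops_per_sec / machine_ops_per_sec)
print(f"Years until raw hardware parity under these assumptions: {years_to_parity:.0f}")
# ~20 years on these numbers -- which says nothing about whether raw
# speed amounts to a mind, friendly or otherwise.
```

Raw operations per second are, of course, not the same thing as a mind, which is precisely why the question of what we build into such machines matters so much.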
If we intend to pass through the Great Filter, we had better be much more self-aware than we are today.
Commander John J. Adams: What is the Id?
Dr. Edward Morbius: [frustrated] Id, Id, Id, Id, Id! [calming down]
Dr. Edward Morbius: It's a... It's an obsolete term, I'm afraid, once used to describe the elementary basis of the subconscious mind.
Commander John J. Adams: [to himself] Monsters from the Id...
Dr. Edward Morbius: Huh?
Commander John J. Adams: Monsters from the subconscious. Of course. That's what Doc meant. Morbius. The big machine, 8,000 miles of klystron relays, enough power for a whole population of creative geniuses, operated by remote control. Morbius, operated by the electromagnetic impulses of individual Krell brains.
Dr. Edward Morbius: To what purpose?
Commander John J. Adams: In return, that ultimate machine would instantaneously project solid matter to any point on the planet, in any shape or color they might imagine. For *any* purpose, Morbius! Creation by mere thought.
Dr. Edward Morbius: Why haven't I seen this all along?
Commander John J. Adams: But like you, the Krell forgot one deadly danger - their own subconscious hate and lust for destruction.
Dr. Edward Morbius: The beast. The mindless primitive! Even the Krell must have evolved from that beginning.
Commander John J. Adams: And so those mindless beasts of the subconscious had access to a machine that could never be shut down. The secret devil of every soul on the planet all set free at once to loot and maim. And take revenge, Morbius, and kill!
Dr. Edward Morbius: My poor Krell. After a million years of shining sanity, they could hardly have understood what power was destroying them.
[pause]
Dr. Edward Morbius: Yes, young man, all very convincing, but for one obvious fallacy. The last Krell died 2,000 centuries ago. But today, as we all know, there is still at large on this planet a living monster.
Commander John J. Adams: Your mind refuses to face the conclusion.
Dr. Edward Morbius: What do you mean?