One of the ways in which Obamacare is supposed to "bend the cost curve down" is by setting up panels that will determine cost-effective, "evidence-based clinical guidelines" for doctors to follow in treating their patients. There is a superficial appeal to such an idea; after all, we all know that drug companies spend a great deal of money getting doctors to prescribe expensive new brand-name drugs in place of cheap generics, and under such a system those practices would be discouraged. In a recent New York Times piece, Peter Orszag wrote about "evidence-based clinical guidelines" in the context of malpractice; his entire argument rests on the assumption that such guidelines are real constructs with clinical utility, grounded in real science:
The health care legislation that Congress enacted earlier this year, contrary to much of today’s overheated rhetoric, does many things right. But it does almost nothing to reform medical malpractice laws. Lawmakers missed an important opportunity to shield from malpractice liability any doctors who followed evidence-based guidelines in treating their patients.
As President Obama noted in his speech to the American Medical Association in June 2009, too many doctors order unnecessary tests and treatments only because they believe it will protect them from a lawsuit. Instead, he said, “We need to explore a range of ideas about how to put patient safety first, let doctors focus on practicing medicine and encourage broader use of evidence-based guidelines.”
Why does this matter? Right now, health care is more evidence-free than you might think. And even where evidence-based clinical guidelines exist, research suggests that doctors follow them only about half of the time. One estimate suggests that it takes 17 years on average to incorporate new research findings into widespread practice. As a result, any clinical guidelines that exist often have limited impact.
How might we encourage doctors to adopt new evidence more quickly?
The problem is that the Platonic ideal of "evidence-based clinical guidelines" is a chimera, and when one uses flawed science to create policy, that policy is no longer based on science but on pseudo-science.
In the November issue of the Atlantic, David H. Freedman described the work of John Ioannidis, a Greek researcher whose specialty is the meta-analysis of medical research; in other words, he investigates the credibility of medical research itself. The results would be shocking to most people who do not understand how medical research is conducted or the limitations of our scientific methodology:
Lies, Damned Lies, and Medical Science
Much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong. So why are doctors—to a striking extent—still drawing upon misinformation in their everyday practice? Dr. John Ioannidis has spent his career challenging his peers by exposing their bad science.
The article is long and will reward your perusal; I would take issue, however, with the author's characterization of much of medical science as "bad science." I think most medical research is average science, limited in its utility by the complexity of its subject matter and the foibles of the humans who endeavor to understand that complexity. Here are some salient passages: [Emphasis mine - SW]
That question has been central to Ioannidis’s career. He’s what’s known as a meta-researcher, and he’s become one of the world’s foremost experts on the credibility of medical research. He and his team have shown, again and again, and in many different ways, that much of what biomedical researchers conclude in published studies—conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fiber or less meat, or when they recommend surgery for heart disease or back pain—is misleading, exaggerated, and often flat-out wrong. He charges that as much as 90 percent of the published medical information that doctors rely on is flawed. His work has been widely accepted by the medical community; it has been published in the field’s top journals, where it is heavily cited; and he is a big draw at conferences. Given this exposure, and the fact that his work broadly targets everyone else’s work in medicine, as well as everything that physicians do and all the health advice we get, Ioannidis may be one of the most influential scientists alive. Yet for all his influence, he worries that the field of medical research is so pervasively flawed, and so riddled with conflicts of interest, that it might be chronically resistant to change—or even to publicly admitting that there’s a problem.
...
Later, Ioannidis tells me he makes a point of having several clinicians on his team. “Researchers and physicians often don’t understand each other; they speak different languages,” he says. Knowing that some of his researchers are spending more than half their time seeing patients makes him feel the team is better positioned to bridge that gap; their experience informs the team’s research with firsthand knowledge, and helps the team shape its papers in a way more likely to hit home with physicians. It’s not that he envisions doctors making all their decisions based solely on solid evidence—there’s simply too much complexity in patient treatment to pin down every situation with a great study. “Doctors need to rely on instinct and judgment to make choices,” he says. “But these choices should be as informed as possible by the evidence. And if the evidence isn’t good, doctors should know that, too. And so should patients.”
Dr. John Ioannidis's conclusion?
We could solve much of the wrongness problem, Ioannidis says, if the world simply stopped expecting scientists to be right. That’s because being wrong in science is fine, and even necessary—as long as scientists recognize that they blew it, report their mistake openly instead of disguising it as a success, and then move on to the next thing, until they come up with the very occasional genuine breakthrough. But as long as careers remain contingent on producing a stream of research that’s dressed up to seem more right than it is, scientists will keep delivering exactly that.
“Science is a noble endeavor, but it’s also a low-yield endeavor,” he says. “I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”
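Ioannidis's "90 percent" figure sounds hyperbolic, but the arithmetic behind it is mundane. In his 2005 paper "Why Most Published Research Findings Are False," he works out the positive predictive value of a published "significant" finding from three inputs: the significance threshold, the statistical power, and the prior odds that a tested hypothesis is true. The sketch below follows that framework; the parameter values are illustrative assumptions of mine, not data from the paper or the Atlantic article.

```python
def ppv(alpha, power, prior_odds, bias=0.0):
    """Positive predictive value of a 'significant' finding,
    following the framework in Ioannidis (2005).

    alpha:      significance threshold (false-positive rate)
    power:      1 - beta, chance a true effect is detected
    prior_odds: ratio of true to false hypotheses tested
    bias:       fraction of non-findings reported as findings
                (design flaws, selective reporting, etc.)
    """
    # True positives per unit of false hypotheses tested,
    # inflated by bias converting misses into "findings".
    tp = (power + bias * (1 - power)) * prior_odds
    # False positives, likewise inflated by bias.
    fp = alpha + bias * (1 - alpha)
    return tp / (tp + fp)

# Assumed values: alpha = 0.05, power = 0.8, and 1 true
# hypothesis for every 10 false ones tested in a field.
print(ppv(0.05, 0.80, 0.1))            # ~0.62: ~38% of findings false
print(ppv(0.05, 0.80, 0.1, bias=0.2))  # ~0.26: most findings false
```

On those assumptions, even honestly run, well-powered studies in a speculative field yield a literature in which a third or more of the positive findings are wrong, and modest bias flips the majority to false. That is the sober core behind the "Lies, Damned Lies" headline.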
There are three important points that need to be understood about modern medical care:
1) With the exception of some limited, mature areas of medicine, all of modern medical practice represents a naturalistic experiment.
2) The major clinical and scientific advances in medicine come only after drugs and innovative treatments have been approved and put into use by large numbers of patients.
3) Modern medicine is closer to engineering than to science. In other words, we tinker with extraordinarily complex machinery and hope that when we tighten a nut here and loosen a bolt there, the entire system works slightly better rather than slightly, or catastrophically, worse.
Because our science has become so sophisticated, there is a tendency for most people, including many scientists and doctors, and almost all policy makers, to believe that we actually know far more than we do. The human proteome (the complex of interacting proteins that constitute the functional components of our cells) contains approximately 30,000 proteins that exist in multiple permutations. A day may come when we understand each individual's proteome well enough to use targeted, specific treatments to cure all manner of disease and discomfort; even then, surprises will be expectable and unavoidable. To imagine that we are currently anywhere near a level of understanding at which freezing treatments at the current state of the art would represent an advance in medical care is risible and counterproductive.
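To put a rough number on that complexity: taking the post's figure of 30,000 proteins as the only input, even the crudest possible model of the proteome, counting pairwise interactions alone, produces a search space far beyond anything medicine has actually characterized. A back-of-the-envelope sketch (and real biology is not limited to pairs):

```python
from math import comb

N_PROTEINS = 30_000  # figure quoted in the post

# Pairwise interactions alone: C(30000, 2) candidate pairs.
pairs = comb(N_PROTEINS, 2)
print(f"{pairs:,}")                   # 449,985,000 -- ~4.5e8 pairs

# Triples, for scale; real pathways involve many partners.
print(f"{comb(N_PROTEINS, 3):.2e}")   # ~4.50e+12
```

Even this toy count ignores post-translational modifications and context-dependence; the point is only that the space of what we have not yet measured dwarfs the space of what we have.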
Each of us should be recognized as an autonomous agent with a unique proteome. When ill or suffering, we should each be offered treatment options, informed by our physician's best understanding of the current risks, to which we can give informed consent, with the understanding that there are no guarantees in life and that untoward effects are possible, and sometimes unavoidable, with every treatment.
Allow me to offer a clinical example:
Once upon a time, a diagnosis of schizophrenia meant that the sufferer would have a life punctuated by flagrant psychotic episodes and a chronic downward course, typically ending in a state institution or on the street. The advent of effective anti-psychotic drugs, which primarily treated the positive symptoms of schizophrenia (hallucinations and delusions), meant that the schizophrenic could be restored to "normal" functioning, although often not to his pre-morbid level. These drugs did not typically affect the negative symptoms of schizophrenia, such as withdrawal, apathy, and social isolation. A further problem with the first-generation anti-psychotics was that they were dysphoric (people taking them did not like how they felt; descriptions ranged from feeling as if their heads were stuffed with cotton to feeling chronically drugged and lethargic) and had serious side effects, including abnormal movement disorders; as many as 30% of patients would develop a terrible neurological movement disorder, tardive dyskinesia, with long-term use. Yet if the patient stopped his medicine, he faced almost certain relapse of his illness.
In 1996, Zyprexa was approved by the FDA. It represented a life-changing improvement for many schizophrenics. It not only effectively treated the positive symptoms of schizophrenia but also seemed to work on the negative symptoms. A schizophrenic treated with Zyprexa or the other atypical anti-psychotics could be far more functional than one treated with the traditional, first-generation drugs. Further, there were minimal side effects: no dysphoria, no "Thorazine shuffle," no apparent risk of the acute muscle reactions seen with the first-generation drugs, and, most significantly, almost no risk of tardive dyskinesia. Unfortunately, a few years after Zyprexa came into widespread use, we learned that some people experienced significant weight gain, and perhaps 30% or more developed metabolic changes that signaled a significantly increased risk of Type II diabetes.
[I wrote about the problems with Zyprexa in more detail in 2005: The Doctor's Dilemma: Risks, benefits, and liabilities.]
A multi-center study of anti-psychotic treatments determined that the newer atypical anti-psychotics offered no scientifically measurable improvement in outcome over the much cheaper traditional drugs, and efforts were made to convince psychiatrists to use these "evidence-based clinical guidelines" to change their prescribing practices. Not surprisingly, the guidelines ignored the patients' subjective experience of the drugs and their quality of life. Under a system that rewards doctors for following the guidelines (which means punishing those of us who do not follow them, by increasing our exposure to malpractice judgments, for example), our patients would be forced to use drugs that promise terrible, though different, long-term side effects and a decreased quality of life.
The question, then, is who should decide such things: a government panel of experts, or patients and their physicians? Who should decide whether a patient can have a better quality of life with an increased risk of diabetes versus a lesser quality of life with an increased risk of a neurological disorder? I think such decisions are best left in the hands of the patient, especially because we know that the "experts" so often do not know what they are talking about.