The Art of Science in Medicine
Less than 15 percent of medical decisions are based on "appropriate evidence."
Now that we have your attention, let us explain why this is so. Up until about 40 years ago, medical decisions were based on clinical experience, information in textbooks, or by asking colleagues and experts. For years, quality was indeed in the eye of the practicing physician and quality of care was taken for granted.
This model began to unravel in the early 1970s, when a growing body of research showed that doctors across the country were treating patients with the same diseases in different ways. No one really knew what the "best" treatment was. The medical "shot heard round the world" came in 1989, when the results of a large clinical trial called the Cardiac Arrhythmia Suppression Trial (CAST) were published. The study was designed to help some of the 400,000 people in the United States who die suddenly each year because of coronary heart disease. Before CAST, doctors knew that people often died suddenly after a heart attack, and that the greater the number of extra "premature" heartbeats in a patient with heart disease, the greater the likelihood of sudden death. The trial was designed to determine which of the medicines then in use would suppress the extra beats most effectively and prevent sudden death.
Twenty-seven clinical centers around the country enrolled 4,400 patients who had previously had heart attacks and gave them either a placebo (a sugar pill) or one of three drugs used to treat premature heartbeats. The clinical trial was stopped early because it was clear that one group was doing much better than the others. When the researchers examined the groups, the one doing best was the group receiving the placebo -- all of the actual drugs were making patients worse!
The CAST trial showed us that conventional wisdom in medicine is not always right and that currently held medical assumptions need testing. As a result, the "double blind clinical trial" became the gold standard for determining the effective use of diagnostic and therapeutic technologies and for improving medical practice. In a double blind clinical trial, one group of patients receives a placebo while the other group receives the experimental drug; patients are assigned to the groups completely at random; and neither the doctor nor the patient knows which group a patient is in.
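The random, blinded assignment described above can be sketched in a few lines of code. This is a minimal illustration, not the CAST protocol; the patient IDs and arm codes are hypothetical, and real trials use more sophisticated schemes (such as balanced block randomization).

```python
import random

def randomize(patients, arms, seed=42):
    """Assign each patient to one trial arm at random.

    Arms are hidden behind neutral code letters, so neither the
    treating physician nor the patient can tell which arm is the
    placebo -- the "double blind." (Illustrative sketch only.)
    """
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    return {patient: rng.choice(arms) for patient in patients}

# Hypothetical example: four patients, a placebo plus three drug arms.
# Only the trial statistician's key reveals which code is the placebo.
patients = ["pt001", "pt002", "pt003", "pt004"]
arms = ["A", "B", "C", "D"]
print(randomize(patients, arms))
```

Because neither party knows the key mapping codes to treatments, expectations cannot bias how symptoms are reported or assessed; the code is only unsealed when the trial is analyzed (or, as in CAST, stopped early for safety).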
Practice Guidelines and Evidence-Based Medicine
The CAST trial and other studies demonstrated that good evidence is needed to inform medical decision making and improve the quality of care, but there are other reasons as well: Doing the "wrong thing" is wasteful, and it costs more. Only a small percentage of our medical decisions are based on these kinds of large-scale clinical trials, mainly because they are not always practical and because they can be extremely expensive. Many medical decisions are based on other types of scientific research findings that permit important conclusions but are less reliable than the randomized controlled trial. Some decisions will always be based on how physicians have traditionally been taught -- something "seems to work" based on the experience of their teachers. Medical training has historically relied on mentors' experiences, but mentors have very different clinical experiences. As physicians are "taught" in this way, very different practices evolve to treat similar patients. The cycle continues when young physicians develop their own "experience," which is certainly neither randomized nor controlled.
For all of these reasons, there is tremendous variation in how patients are treated. So what are we to do? Physicians need to develop data on how to practice effectively and agree on best practices. Recognizing this problem, physicians have been developing "practice guidelines." These are "rules of thumb" for how to handle the most common illnesses. They are prepared primarily by national societies of physicians, which convene panels of experts to produce guidelines for specific conditions and treatments. These guidelines are revised frequently and constitute the best available information for physicians. Patients treated according to the guidelines have better outcomes. More and more "quality measures" of physician and hospital performance are based on how well the practice guidelines are followed.
Translating Evidence into Medical Practice
Physicians follow practice guidelines about 50 percent of the time, a number disappointing to those who believe that practice guidelines will reduce variation and tie medical decision-making to evidence. Why don't physicians follow guidelines? Most often it is because the physician has not been able to keep up with new information about treating a medical condition or because the physician does not agree with the guidelines -- he or she "knows better." That may be the case, but that is not a valid argument. The most important and valid reason for not following the guidelines is that they do not apply to that particular patient. Practice guidelines are created for the "most typical" patient and, in some cases, the physician may feel that the guideline is not a good fit. The patient may be too old, too sick, have complicating factors, or have personal preferences that make implementation of the guideline less appropriate.
In the federal economic stimulus bill passed last winter, over a billion dollars was allocated to a "comparative effectiveness" research initiative to assess the effectiveness of competing medical treatments, such as whether watchful waiting or radiation is better in the treatment of prostate cancer, or whether surgery or exercise is better for lower back pain. In turn, these comparative effectiveness studies will yield new information that can be included in national guidelines that will help standardize practices and improve overall care. Britain, France, and other countries already have similar institutes that assess technologies and compare the effectiveness, and sometimes the cost, of different treatments. By using the comparative effectiveness guidelines, U.S. physicians will have greater certainty about what works and what doesn't in the treatment of their patients, which will discourage the use of costly, ineffective treatments. As we mentioned in our March column, ignoring the cost of a treatment in the name of "comparative effectiveness" is wrong. Teaming comparative effectiveness research with cost-effectiveness calculations -- paying for what is effective and a good use of resources -- will ensure that we are spending our health care dollars wisely.
Unfortunately, even the most well-used and accepted guidelines cannot teach us when to ignore the guidelines. This is where the "art" of medicine becomes so important: We must know when to apply and when not to apply the guidelines. Most physicians do well with routine cases, but one measure of an outstanding physician is how well he or she does with the cases that are not routine. Unifying the "art" of clinical judgment with the science of well-reasoned guidelines is optimal, with decision-making based on evidence as well as experience. Computerized electronic medical records can help as data become more specific to individual patients. Eventually "acceptable evidence" will be available for physicians to use in treating all their patients, even the ones who don't fit the guidelines well. As more and more physicians use evidence-based guidelines, variation in treatment patterns and outcomes will decrease and results will improve. By blending an experienced clinician's intuition with evidence, we will be able to increase the 15 percent portion of medicine that is based on "acceptable evidence." But even as the science improves, the need for the "art" of medicine will remain.