MY OPINION

Process improvement: the sorcerer and the sorcerer's apprentice

Vincent A Gaudiani

DOI: 10.1590/S0102-76382009000100002

After a long period of denial, physicians are now recognizing that hospitals share common ground with manufacturing facilities and therefore require a form of industrial quality assurance that has long been commonplace in the business world. While doctors have focused their quality assurance efforts on their own knowledge base and judgment, businesses have developed their quality efforts more broadly for all employees and all processes that serve customers. They call these efforts "Zero Defects," "Quality is Job #1" and "Six Sigma" among other names. In general, they recognize that every person and every process involved with an output to the customer can diminish or improve that output. Physicians were slow to distinguish their own quality issues from those of their hospitals because until recently, healthcare facilities focused more on physician needs than they did on patient needs. This is part of what we called "physician exceptionalism" in another essay for In My Opinion.

Culture changes slowly, but we are beginning to understand that both physicians and their institutions require quality assurance mechanisms that are both separate and interconnected. Now that we have recognized the importance of process improvement, we suddenly find ourselves overrun with various "guidelines" that are usually outcome surrogates. How should we organize our thoughts about such quality initiatives? Which initiatives are valid and will save lives and reduce suffering, and which are merely bureaucratic meddling?

In this essay I focus on how we should think about emerging "guidelines" that standardize care for hospitalized patients who require cardiac operations. I propose that when we standardize care based on the results of randomized controlled trials (RCTs), we act in concert with the greatest of sorcerers, the producer of the most extraordinary and surprising outcomes, that is, science. On the other hand, when we regiment care based on guidelines that are the best guesses of experts without the benefit of well-tested hypotheses, we act like the sorcerer's apprentice; that is, we impede ourselves and the care of those we serve.

The Sorcerer's Apprentice refers to an ancient German folktale later scored as a light classical piece by Paul Dukas that Walt Disney incorporated into the feature-length cartoon, Fantasia. This wonderful work fused the technology of color cartoons with the highbrow culture of classical music into a memorable visual expression of orchestral music. As you may recall (see it on youtube.com), Mickey Mouse stars as the Sorcerer's Apprentice. Tired of hauling water for the master, he dons the sorcerer's hat, but not his brain, and conjures a broom to perform his task of hauling water. Satisfied with his accomplishment, he falls asleep to dream of controlling the sea and the stars, but awakens to learn that he cannot even control the broom that has now filled the room with water. The chaos ends only when the true sorcerer returns to clean up the mess and rebuke the apprentice. I found myself humming the memorable tune of Dukas' work while reviewing some of the recent "guidelines" of The Society of Thoracic Surgeons (STS) and the American College of Cardiology (ACC), but it took me some time to understand why.

Then it struck me. Class I recommendations and Level of Evidence A and perhaps B are the sorcerer, or as close as we are able to get to one. That is, such recommendations are based on science. All other classes of recommendations and levels of evidence are based on observational studies, expert opinion, and/or consensus. Some physicians and members of the public may be reassured to imagine a continuum of knowledge based on the relatively solid results of randomized controlled trials and extended seamlessly to the grey heads of expert opinion, but this is simply a sad delusion based on the following misjudgments: first, an unwarranted hubris about what we know combined with a desire for certainty and guidance when there is none; second, a failure to understand what constitutes a scientific idea; and third, in the case of surgical knowledge, a reluctance to accept the diversity of surgical abilities and results. Recommendations based on consensus and expert opinion have far more in common with the sorcerer's apprentice than they do with the sorcerer, and that is why I found myself thinking of Mickey Mouse as I was reading the intricacies of the Class II recommendations. Such a reflection would be simply humorous, but for the fact that these recommendations impede quality improvement and have serious implications in an Internet society trying to judge what cardiac specialists do.

I have just asked you to swallow two rather large eggs in a row: first, that Class II recommendations are largely unscientific pretense and, second, that they cause actual harm. Let's work through each of them separately and start with the three reasons given above to explain Class II recommendations. First, cardiac specialists risk patients' lives, and we would naturally be reassured if a large prestigious body like the ACC or the STS placed its imprimatur on as many of our decisions as possible. This is not much different from the various rules of conduct propounded by religious organizations to reassure the faithful that they are acting in a way acceptable to their god. The appointment of high priests to decide on the rulebook is also standard, and history never records a shortage of volunteers to be high priests. Furthermore, when we confer formal title on what amounts to current opinion, we are pretending to the public that we know more than we really do. Neither of these positions is consistent with our goal of practicing "evidence-based medicine." In fact, both are consistent with practicing "circumstantial-evidence-based medicine" or cult-based medicine. This is religion, not science.

Some of these attitudes may have developed because we forgot, or never knew, the difference between science and philosophy. Both of these fields of inquiry contain important information, but they differ at a critical point. Scientific ideas can always be stated in such a way that an experimental result can refute them. Philosophic ideas cannot be stated in a way that permits refutation by experiment. For instance, the pre-Socratic philosopher Democritus imagined that the world might be composed of "atomic particles" and coined the term, but nearly 2500 years passed before this idea could be tested experimentally. When we finally get around to stating an idea in scientific terms and testing it experimentally, we often discover new approximations of the truth that are completely at odds with "expert" opinion. In physics, for instance, many scientists agreed that there must be an "ether" that would carry electromagnetic waves the way water carries ocean waves, but no experiment could find it. The experts were wrong; there is no ether. In our own field, the COURAGE trial tested the idea that stenting would improve survival in patients with multivessel coronary disease who were receiving optimal medical management. The companies that manufacture stents supported this trial because they were convinced that their products would have this effect. The experts were wrong. The study showed that stents did not improve outcomes among patients treated with good medical therapy. The experts immediately attacked the study because its results were at odds with their opinions, but this is exactly the point of science - to elaborate what Emily Dickinson called "the Truth's superb surprise," or, if you prefer, this is the sorcerer coming back to rebuke the apprentice. By the way, science never tells the final truth because there is no final truth, except in religion and dogma. Science is the sum of those hypotheses that have withstood experimental testing and been found useful. Millions of experiments confirmed Newton's laws of motion until Einstein came along and pointed out that they are only a special case of motion at slow speeds. Science only approximates the truth, devises ways to test it, and never confuses it with expert opinion.

Among surgeons who operate on patients with life-threatening illnesses, judgment by expert opinion may lighten the burden of judgment by outcomes. Why emphasize the number of infections when you can emphasize whether the antibiotics were given on time? Why emphasize mortality when you can be graded on whether beta blockers were given preoperatively? Many surrogate outcome markers obscure the burden of plain old outcomes. This is a sorrowful trend in thoracic surgery that extends beyond our focus on how we should conduct process improvement. You can be a member of the STS without reporting your results to the STS national database. As an adult cardiac surgeon, you can recertify with the American Board of Thoracic Surgery (ABTS) by passing a written test that covers esophageal and pulmonary disease, but does not require presentation of 10 years of solid performance as an adult cardiac surgeon. Getting high marks for following guidelines obscures the necessity for plain old good outcomes: lower morbidity and mortality.

Furthermore, cardiac surgery is not a straightforward science like biology or physics. Cardiac surgery is also a performance art, like a piano recital. The two interact. Just as a piano recital would have a different effect on the audience if played at half the usual speed, an operation performed with twice the bypass time will have a different effect on the patient. The performance part affects the science; for instance, systemic inflammatory response syndrome doesn't occur in patients who undergo short operations. Expert opinion and consensus views account for neither the science part of surgery nor the performance art.

Finally, I asserted that the elaboration of process improvement standards that are not based on solid scientific data is actually counterproductive. As suggested above, most of the history of scientific advance is the overthrow of expert opinion. This is what Thomas Kuhn meant by "paradigm shift." It is therefore illogical to set standards of care based on the same expert opinion that has so often proved to be faulty. As a corollary to this obvious idea, setting performance standards on such ideas enforces an orthodoxy of process that ensures that our current errors will persist. In 1492, expert opinion believed the earth was flat; in the nineteenth century, expert opinion did not accept that hand washing before infant delivery would reduce puerperal sepsis, and we can only hope that science will continue to confront orthodoxy with the results of experiment.

The problem is not with this necessary confrontation. The problem occurs when we craft guidelines based on orthodoxy and impose them on practice as if they carried the intellectual force of experimental results. Think for a moment about all the recommendations about anticoagulation after cardiac operations. Except for the necessity of anticoagulating patients who receive mechanical heart valves, most expert recommendations during the past decade have been confounded by the use of aprotinin. Which of the experts' programs were using it routinely, and which weren't? What will the guidelines be in five years when none of their programs use it? Which guidelines that we scurry after now are incorrect because of it? What did those guidelines accomplish?

So what should we do? I offer the following ideas:

1. Assiduously follow recommendations based on randomized controlled trials and solid experimental science.

2. Measure only true outcomes: survival and the incidence of complications. Measuring surrogates focuses the team on gaming the system rather than on improving outcomes.

3. Stop enshrining expert opinion as "guidelines" that end up being valuable mostly to plaintiffs' attorneys and bureaucrats, and learn to live with the anxiety of not knowing.

4. Frankly identify controversial areas and encourage open discussion without prematurely sanctifying any view.

5. Especially in cardiac surgery, carefully study programs that are getting top 5% results to understand why they perform better.

6. Fund randomized controlled studies and experimental science.

If we start thinking this way, we are more likely to be sorcerers and less likely to be the sorcerer's apprentice.