We physicians have never had a clearly defined mission. That mattered less when expectations were lower and we could do less. Now, though, the reigning paradigm is grounded in basic science, excessively confident, inpatient-centric, and broadly focused on the treatment of symptoms and signs, on diagnosis and therapy. The development of a new medical paradigm seems to me to require: 1. a narrowing of scope; 2. more focus on hard outcomes like death and preventable disability, and less on symptoms, signs, and intermediate end points; 3. more focus on what happens and less on how and why—so more on populations, epidemiology, and statistics, and less on pathophysiology and biochemistry; 4. more on the vertical and less on the horizontal patient; and 5. more on the primary medical literature and less on guidelines, reviews, and expert opinions, which are more subject to corporate influence. A change in the paradigm could take many paths other than the one this article presents; keeping the old paradigm is, however, probably not a good idea.
Medicine is a venerable theoretical and practical science and, over the years, has had its share of governing paradigms. In its slow evolution from splint and herb and poultice to bone fixation, dicloxacillin, and wound care, it has episodically redefined what illness, injury, and death are, how they come about, and how we can and should address them. Some common early paradigms were centered on balance (Galen’s humors or Chinese qi). Achieving a proper balance of yin and yang, or of blood, phlegm, yellow bile, and black bile, could, it was once thought, prevent ailments and restore health. Other theories involved demonic possession or punishment by God for bad human behavior. Still others involved human agency, such as voodoo or witchcraft. The miasma theory of disease (positing poisonous vapors in the air) introduced the idea that disease could be communicable, an early premonition of the germ theory. The study of scurvy introduced deficiency as a cause of disease. Each paradigm suggested its own approaches to teaching, research, testing, cause and effect, diagnosis, and intervention. The punishment theory involved querying and placating the gods, often with animal or human sacrifices. The humoral theory was associated with treatment with leeches or lancets for the plethoric, with gold amulets for the phlegmatic, or with diet or herbal remedies. And, though we cannot know for sure, we suspect that many of these medical paradigms were associated with more practical harm than good.
All that, of course, is history. However, at least since the 1960s, medicine has once again been in crisis. There has been a fairly general consensus that there are substantial problems both with patient care and with the teaching of medicine. This has resulted in an avalanche of new curricula and a plethora of correctives: from the problem-oriented medical record to patient-centered care and patient safety, from the medical home to precision medicine, to telemedicine and the import of engineering principles such as total quality management, Lean, and Six Sigma. Some of these have corrected some of healthcare’s worst abuses; others may have made things worse. All told, though, the problems and insecurities have persisted. One reason may well be that all of these newly introduced modules and concepts are intended to shore up the paradigm that has pretty much reigned for the past century, rather than to change it. I am writing to propose that a paradigm shift would be a more appropriate response to our present crisis. If you are convinced, feel free to stop here and start to consider your own proposals for how to define a useful new paradigm. If not, let me expand a little on the crisis and propose one possible new paradigm.
There are, of course, many book-length histories of twentieth-century medicine that can be consulted for in-depth discussion, and much remains to be detailed and analysed. Here I will provide a pertinent extract from one of these to help define and illuminate our present beleaguered paradigm. This paradigm had its birth in the extraordinary scientific progress of the late nineteenth and early twentieth centuries. From the telegraph and telephone to the automobile, airplane, phonograph, and radio, everything seemed to be yielding to our scientific knowledge. These successes justified an assumption that any problem we could find, we could fix. And for a while, our progress in medicine seemed equally impressive. We started to measure blood pressure using the current methodology in 1905. Vitamins were identified between 1912 and 1927, insulin was discovered in 1921, penicillin in 1928, and sulfa in the 1930s. Life expectancy was increasing substantially. As a doctor at the time suggested,
“. . . we may soon expect to see such advertisements in the religious and daily newspapers as: ‘A new operation for neurasthenia; craniotomy for unselfishness; preventive inoculation in case of threatened breach of promise; vaccinations for antivivisectionists; damaged heart valves surgically repaired while you wait; kidneys transplanted immediately following the next electrocution; complete maturation of the artificially fertilised ovum in our new twenty-first century incubator.’”
Our rapid progress created such optimism and enthusiasm that the unspoken expectation since the 1930s has been the slow yielding of all disease to human ingenuity. Like Conan Doyle’s Sherlock Holmes, doctors are expected to achieve an understanding of all the laws that govern medicine and the pathophysiology of disease, and to use that knowledge to ferret out illness (diagnose) and put it behind bars (treat to eradicate). Doctors began to seem like minor deities, omniscient and omnipotent, expected to have an understanding of the how and why and an answer for every problem. Some of us feel that we do. Of course, Sherlock Holmes is a fictional character, who owes his infallibility more to his author than to his skills. And the exhilaration of our progress was founded more on anticipation and promise than on any concrete achievement in the present.
Yet it is not just unwarranted confidence that characterises our present paradigm. The 1910 Flexner report on medical education clearly created a new and better paradigm for training in its day. However, its emphasis on biochemistry and pathophysiology (a boon for research, perhaps less so for patient management), and its promotion of inpatient training (management of the ‘horizontal’ patient rather than the ‘vertical’), have created biases that persist to this day. In many countries, doctors are paid best for being aggressive—for doing tests and procedures, for intervening, for efficiently telling patients what to do, for making decisions for them. For sharing evidence with patients, for time spent on behavior change, for sparing patients unnecessary tests or procedures, there is little or no material reward. Technically oriented medical students too often become technocratic but unsympathetic doctors. The emphasis on pathophysiology and biochemistry, on how and why things happen rather than on what actually works, has led doctors into dangerous byways. And, because the paradigm suggested that doctors reason inductively from first principles, relatively few doctors have developed the habit of reading the medical literature that underpins the decisions they make.
The crisis that began in the 1960s was precipitated by the many failures of the paradigm to deliver on its promises, in particular by its divergence from “First, do no harm.” While costs rapidly increased, many of the newly introduced tests, processes, and procedures not only failed to be helpful but often caused harm. In many cases, multiphasic screening proved to do more harm than good and was abandoned. Oestrogens and margarine, for example, along with vitamin E, vitamin A, Swan-Ganz catheters, ticrynafen (also known as Selacryn or tienilic acid), quinidine for premature ventricular contractions, antipsychotics for dementia, quinine for leg cramps, prostate-specific antigen testing, troglitazone, the thyroid exam, benzodiazepines, and nonsteroidal anti-inflammatory agents: these were all introduced with great fanfare. But they all had significant downsides, quite often outweighing the promised benefits. That has not stopped us from continuing to introduce new, inadequately studied interventions. We have, of course, also had many successes. Hypertension and heart disease have responded to our ministrations with a gratifying decrease in subsequent death and disability. Some cancers, once untreatable, have come under a measure of control. We even have an effective, if astronomically expensive, set of treatments for hepatitis C. But we still can’t effectively manage inflammatory bowel disease or sarcoidosis, or explain how thiazide diuretics decrease blood pressure. Most diseases yield slowly, if at all, and there are always new ones (Zika virus, Middle East Respiratory Syndrome, and of course, COVID-19), increased incidence of old ones (diabetes, obesity), resurgences (syphilis, tuberculosis), and disease created or exacerbated by medical care (drug reactions, Clostridium difficile, the opioid epidemic).
It is almost certain that the introduction of the cohort study and the double-blind controlled trial caused the cognitive dissonance evident in the current paradigm. Teaching, research, and practice have yet to catch up, to draw the appropriate conclusions from the relatively reliable evidence accumulating in the medical literature. Double-blind controlled trials are studies of populations. Though they began by evaluating the impact of drugs like antibiotics and vitamins, they soon expanded to include other interventions (surgical, individual foods, general diets), and over time the results have been expressed not only in infections cured or symptoms improved but also in costs, hospitalisations, nursing home admissions, emergency room visits, disability, and death—in hard, irreducible end points. Yet practice continues to be based not only on the diagnosis and treatment of disease, but also on the treatment of signs, symptoms, and surrogate endpoints, on therapy without any wider context. In treating signs and symptoms, we sometimes—perhaps often—inadvertently increase death and disability. It is, therefore, possible for a doctor to do more harm than good in a lifetime of practice. Diagnosis and therapy are valuable tools, but they are only tools, and must be used in an appropriate context.
The development of a new medical paradigm seems to me to require the following:
1. a narrowing of scope;
2. more focus on hard outcomes like death and disability, and less on symptoms, signs, and intermediate end points;
3. more focus on what happens, less on how and why—so more on populations, epidemiology and statistics, and less on pathophysiology and biochemistry;
4. more on the vertical and less on the horizontal patient; and
5. more on the primary medical literature (on hard evidence) and less on guidelines, reviews, and expert opinions, which are more subject to corporate influence.
One approach might be to craft a new definition, or better yet, a mission statement for medicine. Definitions help to create and harmonise expectations. But the definition of a doctor and their responsibilities is at present far from clear-cut. Even leaving aside specialisation, teaching and research, and the evolution of the role over time, the definition of a doctor remains elusive.
The World Health Organisation has defined health as “a state of complete physical, mental, and social well-being, and not merely the absence of disease or infirmity”. It states that “the enjoyment of the highest attainable standard of health is one of the fundamental rights of every human being”, and suggests that “the extension to all peoples of the benefits of medical, psychological and related knowledge is essential to the fullest attainment of health.” The Hippocratic Oath suggests that we help the sick according to our ability and judgment and abstain from all intentional wrong-doing or harm. During our training, doctors are often advised “to cure sometimes, to treat often, to comfort always.” According to these guidelines, a doctor might be a scientist who diagnoses and treats disease, or an artist seeking for each patient “a state of complete physical, mental, and social well-being.” Today’s dictionaries still focus on prevention, diagnosis, and treatment of diseases and their symptoms, while none seem to mention populations, research, epidemiology, or death.
Suppose that medicine were to be explicitly and narrowly redefined as the use of medical evidence to help patients optimally delay death and prevent those disabilities that are preventable; and that only when we had reached a predefined, objective measure of success would we assume other tasks—like achieving “complete physical, mental, and social well-being”—tasks which, for the time being, might better be assigned to others. Such a definition would, if widely adopted, have profound implications. It would change who among us would choose to apply to medical school, what and how we would be taught, how we would practice, and what research we would collectively pursue. It could also increase life expectancy, reduce preventable disabilities, improve communication, clarify expectations, and improve patient and job satisfaction.
That is, of course, only one possible (and controversial) approach to changing the reigning paradigm. You will almost certainly have other, potentially better, solutions. What seems crucial, though, is that something needs to change. In a world where the half-life of truth in medicine has been estimated at 45 years, it may seem that love, touch, and humility are the only enduring clinical truths. But smoking is dangerous, and exercise and the Mediterranean diet are protective. So perhaps the most important enduring clinical truth may be a fiduciary responsibility: an informed, objective, independent reading of the medical literature by each practitioner. Most practitioners in search of information still depend on guidelines and on the opinions of colleagues, specialists, and even drug representatives, with their substantial risk of bias. They could instead depend on the reading and interpretation of double-blind clinical trials, on other studies, or on unsponsored and independent reviews and editorials. An independent reading is the best protection against misinterpretation; too often, vested interests influence the interpretation of the evidence. Identifying, understanding, interpreting, and sharing meaningful and objective data with patients in order to help them delay death and disability seems to me a goal worthy of inclusion in whatever new medical paradigm we may choose to implement. For all our sakes, let it be soon.
Nutton, V, 2005, The Fatal Embrace: Galen and the History of Ancient Medicine, Science in Context, 18(1):111-121. doi:10.1017/S0269889705000384
Tang, JL, Liu, BY, Ma, KW, 2008, Traditional Chinese medicine, The Lancet, 372:1938-1940.
Andersson, R, Eriksson, H, Torstensson, H, 2006, Similarities and differences between TQM, six sigma and lean, The TQM Magazine, 18(3). Available at https://www.emerald.com/insight/content/doi/10.1108/09544780610660004/full/html. Accessed August 14, 2019.
Gould, GM, 1915, Personal biologic examinations, Scientific American Supplement, 79:146-147.
Zimmerman, HJ, Lewis, JH, Ishak, KG, Maddrey, WC, 1984, Ticrynafen-associated hepatic injury: analysis of 340 cases, Hepatology, 4(2):315-23.
Featured image: The Anatomy Lesson of Dr Nicolaes Tulp, Rembrandt, Public domain, via Wikimedia Commons
Alan Cohen has taught at Yale, the University of Illinois, and UCSF. He was Associate Program Director in Champaign-Urbana and Chief of Primary Care, responsible for residency affairs, at the VA in Fresno, California; and he has been thinking about how medical teaching and practice could be improved for most of his life. He believes the ideas this article presents should be part of the discussion concerning the future of medicine and medical education.