Improving Diagnostic Reasoning to Improve Patient Safety


Alvin Rajkomar, MD; Gurpreet Dhaliwal, MD

Summer 2011 - Volume 15 Number 3


Both clinicians and patients rely on an accurate diagnostic process to identify the correct illness and craft a treatment plan. Achieving improved diagnostic accuracy also fulfills organizational fiscal, safety, and legal objectives. It is frequently assumed that clinical experience and knowledge are sufficient to improve a clinician's diagnostic ability, but studies from fields where decision making and judgment are optimized suggest that additional effort beyond daily work is required for excellence. This article reviews the cognitive psychology of diagnostic reasoning and proposes steps that clinicians and health care systems can take to improve diagnostic accuracy.


The ability to transform medical data into an actionable diagnosis is paramount to the functioning and identity of every physician. This first fundamental step in patient care is complex and prone to errors yet is infrequently considered to be a focus of potential improvement.

Given the costs and dangers of an incorrect diagnosis, improving diagnostic accuracy has been called the next frontier for patient safety.1 An incorrect working diagnosis can lead to treatment of a nonexistent condition as well as a delay in appropriate therapy for an existing condition. Shojania et al found that 5% of autopsies demonstrate diagnostic errors leading to lethal complications that would have been averted by treatment if the correct disease had been diagnosed.2 Malpractice lawsuits about diagnostic errors are more common than lawsuits about medication errors and result in larger payouts.3

Knowledge and experience are the cornerstones of strong diagnostic skills, but the ongoing improvement of a clinician's diagnostic skills requires a basic understanding of the cognitive process that underlies diagnosis and a commitment to lifelong learning and expertise principles. Decades of study on physicians' judgment and reasoning4,5 have yielded practical insights into how to optimize the diagnostic thought process.

In the first part of this article, we present a summary of the cognitive psychology of diagnostic reasoning. In the second part, we suggest changes that both individual clinicians and health care systems can adopt to improve diagnostic accuracy and improve patient care and safety.

The Science of Diagnostic Reasoning

Many clinical encounters require a modest number of data points for diagnosis. For example, a brief medical history from a healthy woman, age 30 years, with dysuria is largely sufficient to diagnose a urinary tract infection. Extensive listing of other diagnostic possibilities is impractical and frequently superfluous. This type of reasoning employs the intuitive system in our brain, which conducts a rapid mental comparison of the current case with an abstract prototypical picture ("illness script") of common causes of dysuria such as a urinary tract infection. The brain performs this comparison on the basis of past experience and knowledge through a process that is largely inaccessible to conscious control or manipulation. This seemingly instantaneous process was celebrated by Malcolm Gladwell for its utility and efficacy in his best seller Blink: The Power of Thinking Without Thinking.6

Common, straightforward cases dominate daily practice, but clinicians are also faced with patient encounters that do not fit previously recognized patterns. Take, for example, the case of a man, age 42 years, with back pain, a serum calcium level of 6.9 mg/dL, and a hemoglobin level of 6.3 g/dL. To make sense of such a scenario, a clinician employs the more deliberate and time-consuming method of analytic reasoning. The physician must comb through memory and knowledge stores and frequently use external information sources to derive a clinical solution. The aforementioned case does not immediately trigger a unifying diagnosis and explanation, but with extended thinking, consideration of pathophysiology, consultation with colleagues, and use of online resources, the physician might deduce that this patient's anemia is due to malabsorption leading to vitamin D deficiency, with the ensuing osteomalacia causing bone pain. Further analysis may allow her to arrive at the underlying diagnosis of celiac disease.

Studies using functional magnetic resonance imaging suggest that intuitive and analytic reasoning correspond to the activation of separate brain structures—the ventral medial prefrontal cortex and right inferior prefrontal cortex, respectively.7 However, in human reasoning and decision making, the two systems are not used in isolation. Rather, they exist on a cognitive continuum: Ideas generated by intuition are subject to analytic scrutiny, and conclusions that are reached through formal analysis may be overridden by intuition8 (eg, "I will admit this patient with chest pain for exclusion of MI [myocardial infarction] despite the low TIMI [Thrombolysis in Myocardial Infarction] risk score").

Hypothetico-Deductive Model

Although little is known about the inner workings of intuition, analytic reasoning has been characterized by multiple models. The longest-standing conceptualization of analytic reasoning is the hypothetico-deductive method, in which diagnostic hypotheses are proposed, tested, and either verified or rejected.9

Within seconds of a patient encounter, the physician starts developing a short list of possible diagnoses (typically two to five), either as specific entities such as influenza or malaria or as broad categories such as "an infectious disease." These hypotheses transform the cognitive task of the patient encounter from deciding "What is this patient's illness?" to deciding "Is this fever and pharyngitis a case of strep throat, mononucleosis, or acute HIV?" These specific questions then direct further inquiry and data gathering from the history, physical examination, laboratory tests, and imaging studies. During this process, some initial hypotheses are rejected, new ones are generated, and broad categories are refined into specific disease states.

When the number of hypotheses is narrowed to one or two, they are subjected to a process of verification.10 The final working diagnosis must be adequate and coherent—that is, it must explain most normal and abnormal findings and conform to the patient's demographics, presentation, and clinical course. Stated otherwise, there must be a reasonable (but as clinicians know, rarely perfect) match between the clinical features of the patient in front of them and the illness script (ie, template of the disease) in their mind. Physicians generally seek assurance that the worst-case scenarios have been excluded, such as an ectopic pregnancy in a young woman with abdominal pain. Moreover, a single unifying diagnosis in the spirit of Occam's razor is preferred although not always feasible in complex illnesses. Usually the verified diagnosis must meet a high-enough likelihood to merit treatment (ie, meet the treatment threshold11), which often varies with the risks of the associated therapy.
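The treatment threshold mentioned above can be stated explicitly. As a sketch of the Pauker-Kassirer threshold model (the symbols here are illustrative, not the original authors' notation):

```latex
% Treat when the probability of disease, p, exceeds the treatment threshold p_t:
%   B = net benefit of treating a patient who has the disease
%   H = net harm of treating a patient who does not have the disease
p_t = \frac{H}{H + B}
% Example: a relatively safe therapy with H = 1 and B = 4 gives
% p_t = 1/(1+4) = 0.20, so treatment is reasonable once the working
% diagnosis is at least about 20% likely; a riskier therapy (larger H)
% raises the threshold and demands more diagnostic certainty.
```

This is why the threshold often varies with the risks of the associated therapy: the more harm a treatment can do to a patient without the disease, the more certain the diagnosis must be before treatment begins.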

Cognitive Heuristics and Biases

For many routine patient encounters, physicians can use mental shortcuts and rules of thumb to arrive at the correct diagnosis. These shortcuts, or heuristics, are highly efficient and allow clinicians to manage a heavy workload, but they are also prone to producing predictable mistakes in reasoning. Despite their shortcomings, these double-edged swords are used constantly in practice and everyday life because they usually yield correct decisions. The following five heuristics are commonly employed in clinical practice and can lead to diagnostic errors:

  • The representativeness heuristic leads clinicians to judge the probability of a disease by how closely a patient's presentation matches a prototypical case, without considering the prevalence of the disease. For example, a clinician may strongly suspect that a patient with hypertension, headache, diaphoresis, and palpitations has a pheochromocytoma, given the match with the textbook description. However, each individual symptom is very commonly encountered in clinical practice, and the true likelihood of the unifying diagnosis of pheochromocytoma is vanishingly low.
  • The availability heuristic leads the clinician to judge the probability of a disease on the basis of how easily that disease is recalled, which is often skewed by recent and memorable cases. For example, a physician who arrives at an accurate diagnosis of constrictive pericarditis after examining a patient with edema may overestimate the likelihood of that diagnosis for other patients who present with lower-extremity edema. This effect sometimes colors judgment for weeks to months, but frequently it modifies clinicians' judgment for their entire career (eg, "One time in fellowship I saw X, so I always do Y").
  • The anchoring heuristic leads clinicians to cling to their initial diagnostic hypotheses even as contradictory evidence accumulates. For example, a patient with stage 5 chronic kidney disease was admitted with altered mental status and myoclonus of the left arm attributed to uremia (the anchor). However, as the patient's condition failed to improve with dialysis (contradictory evidence), the clinicians had a difficult time revising the formulation to the eventual diagnosis of status epilepticus.
  • Premature closure describes settling on a diagnosis without sufficient evidence or without seeking or carefully considering contradictory information. For example, a patient with rheumatoid arthritis who was taking immunosuppressive medication presented with shortness of breath and was found to have a small distal pulmonary embolus. A consulting physician was not satisfied with this explanation in light of the diffuse fine infiltrates on the chest radiograph and requested bronchoscopy, which revealed Pneumocystis jiroveci pneumonia.
  • A related problem, confirmation bias, is the tendency to look for evidence that supports a working hypothesis, to ignore contradictory evidence, and to misinterpret ambiguous evidence. For example, in a patient with symptomatic anemia and a nearly absent reticulocyte count, a potentially full mediastinum was an incidental finding on a screening chest radiograph. The reticulocyte count was taken to support a diagnosis of iron-deficiency anemia and the radiographic finding was discounted, although the patient was later found to have aplastic anemia from a thymoma.
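The base-rate neglect described in the first bullet can be made concrete with Bayes' theorem. The following sketch uses purely hypothetical numbers; the prevalence, sensitivity, and false-positive rate are assumptions chosen for illustration, not figures from this article:

```python
# Illustrative Bayes calculation: a "textbook" presentation of a rare
# disease can still leave its probability low. All numbers are hypothetical.

def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | findings) via Bayes' theorem."""
    true_pos = sensitivity * prior
    false_pos = false_positive_rate * (1 - prior)
    return true_pos / (true_pos + false_pos)

# Assumptions: pheochromocytoma prevalence ~0.2% among hypertensive
# patients; the headache-diaphoresis-palpitations triad is present in 90%
# of cases but also occurs in 10% of hypertensive patients without it.
p = posterior(prior=0.002, sensitivity=0.90, false_positive_rate=0.10)
print(f"{p:.1%}")  # about 1.8%: the low base rate dominates the match
```

Even a close match to the prototype cannot overcome a vanishingly small prevalence, which is the arithmetic behind the pheochromocytoma example above.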

Intuitive or Analytic Reasoning?

Intuitive reasoning can quickly sort through large volumes of data (which characterize many complex medical encounters) through an unknown algorithm with a reasonably high success rate. We rely on this balance of efficiency and accuracy to make it through the busy clinical day. The algorithm, however, relies heavily on cognitive shortcuts that use recollection of cases that are disproportionately memorable (availability bias) or rare (representativeness, or the closely related concept of base-rate neglect), and it may abort a search once a diagnosis conforms to currently available data (confirmation bias, premature closure, search satisfaction—stopping the diagnostic search when a single abnormality is detected).

The majority of educational and psychological writings have advanced analytic reasoning as the more accurate and reliable method of problem solving. This matches the common perception that a decision arrived at by detailed analysis is more valuable and accurate than one reached through intuition. However, abundant research suggests that both the intuitive and the analytic systems are critical, are interwoven, and have strengths and weaknesses.12,13

A modern characterization of expert clinical judgment is the adroit recognition of the limits of intuition and of when analytic reasoning is required (ie, "knowing when to slow down").14 Diagnostic experts develop a base of experience and knowledge that increasingly employs intuitive reasoning to accurately diagnose the cases they confront, but they also develop an accurate sense of when analytic reasoning is merited. Just as pilots know when to turn the autopilot on or off, physicians too can develop a sense of when to use intuitive versus analytic reasoning.

Improving Diagnostic Reasoning

Measures that may improve diagnostic accuracy can be broadly grouped into two categories: improving individual clinicians' diagnostic reasoning skills and improving health care systems to support clinicians through the diagnostic process.

Individual Continuous Improvement

The individual clinician can pursue at least three different measures to improve diagnostic performance: feedback, deliberate practice, and metacognition. These methods are derived from the literature on expertise, lifelong learning, and professional development.


Feedback

The only way decision makers can improve their judgment is through feedback. When diagnostic decisions are correct, reinforcement occurs. When diagnostic decisions are incorrect, recalibration occurs. Too often in medicine, however, there is no feedback on decision-making episodes. Schiff outlined the barriers to feedback in medicine, which include fragmentation of care, a culture of not providing clinician-to-clinician feedback (especially when clinicians are wrong), and lengthy delays between diagnosis and test results.15 The natural tendency of the human mind is to equate the absence of feedback with positive feedback, and that leads to miscalibration and overconfidence.16

Actively seeking feedback on diagnostic decisions not only refines the clinician's judgment but also serves an important patient-safety mission. Relying on patients to return for follow-up care only if they do not improve, without scheduled surveillance, creates an open loop in the decision-making process: the patient is discharged from the physician both physically and cognitively, forfeiting the potential for early detection of adverse clinical events.17

In high-stakes fields where decision-making optimization is continually sought (eg, military), feedback on judgments is not optional; rather, the feedback is systematic and comprehensive. All physicians get low-frequency, random feedback on their decisions, but individual clinicians can consider how they (or the system) can increase the rate and scope of feedback through scheduled follow-up visits, phone or electronic communications, or triggered alerts on subsequent diagnostic tests or consultations.

Deliberate Practice

It is tempting to assume that daily practice for many years is sufficient to develop superb diagnostic skills (an expert), but research in other professional fields shows that simply attending to the day's work without additional reflection and training creates an experienced nonexpert.18 In every profession where top performance has been studied,19 it has been demonstrated that additional training and effort—termed deliberate practice20—is required to achieve an individual's maximal potential.

Whether the goal is excellence in playing an instrument, leading a military battalion, or honing diagnostic skills, there is no substitute for practice. In each profession where excellence is sought, performers seek additional opportunities to refine their judgments and actions. In other fields, these efforts go by the names of rehearsal, scrimmage, simulation training, and practice. The question for physicians is: What does deliberate practice look like for us?

Most busy clinicians are not seeking additional patient encounters in order to augment their skills, but they can amplify their clinical experience and practice clinical reasoning by actively reading about clinical cases, whether on paper or online. Although reading or clicking through a case will never reproduce the full cognitive challenges or rewards of interacting with a patient, an active reading approach that focuses on solving the case can yield some of the same intellectual benefits.

The general theme of this reading approach is to maximize the challenge of solving the case by continuously making decisions along the way rather than perusing the case while waiting for the answer to be revealed. Action steps to recreate this challenge include concealing the title or other artificially placed early diagnostic clues (eg, imaging or pathology results), interrupting reading at regular junctures to make decisions and assessments and then comparing them with the author's judgments, and pursuing brief, targeted learning based on knowledge gaps that become apparent. This active approach mentally weaves the simulated case into the memory of related clinical experiences, so that the thought process of working through a case becomes an episodic memory in the way that a true patient encounter does. Studies are needed to examine whether this method translates to improved diagnostic performance.


Metacognition

Since the 1970s, there has been an increasing awareness of the complexity of the diagnostic process and an appreciation of its pitfalls, including cognitive errors. As the popularity of Jerome Groopman's How Doctors Think21 can attest, this is a subject of great public interest as well. Some authors have championed metacognition—thinking about thinking—as a way to minimize physician cognitive errors. The proposition is that physicians who have a heightened awareness of their thought processes are best positioned to recognize and counteract incipient errors.

Croskerry suggested that physicians develop an awareness of cognitive errors in order to develop cognitive forcing strategies that subvert errors in real time.22 For instance, the physician who understands the ever-present risk of premature closure may habitually force herself to always ask, "What else could this be?" before discharging a patient from the Emergency Department, just as the diagnostic radiologist who recognizes the pitfall of search satisfaction may adhere to a thorough checklist for chest radiographs despite the detection of a clearly defined infiltrate.

Physicians who strive to improve their diagnostic process must be aware of it, and there are many reasons to expect that an awareness of cognitive errors and a general commitment to reflection and learning from one's practice and decisions will improve clinical knowledge and judgment. However, it remains to be seen whether routine reflection in action—essentially a habitual override of intuition or rapid decision making—will lead to improved patient outcomes.12

Improving Health Care Systems to Improve Diagnosis

Currently, the refinement of diagnostic skill is an individual pursuit, powered by a clinician's own drive for excellence. However, as institutions recognize the financial, quality, safety, and legal ramifications of diagnostic errors, they will become increasingly motivated to help clinicians improve their diagnostic accuracy through technology, through processes based on information systems, and through cultural approaches.

Decision-Support Systems

Since the 1980s, numerous computer-based decision-support systems—DXplain, PKC, and most recently, Isabel—have been developed to help the clinician by suggesting diagnostic possibilities in real time after clinical data are entered. The premise of providing an aid or check on physician reasoning is logical and attractive. Although such systems have demonstrated modest usefulness and satisfaction, the results to date in terms of physician adoption and effect on patient outcomes have been disappointing.

Barriers to the success of decision-support systems have included physicians' perceived lack of need for such assistance, the time-consuming entry of patient data, the need to look through lengthy differential diagnoses, and potential increased time and financial costs associated with exploring those options. Essentially, physicians have found it unrealistic to integrate these systems into their everyday workflow.

Although much work remains to be done, there is reason to believe that a well-designed system can be created that uses data from electronic medical records without requiring additional data entry, provides a more filtered output, and links directly to condensed knowledge sources or the next steps in the diagnostic algorithm. Even as computer intelligence grows in leaps and bounds (witness the supercomputer Watson on the television show Jeopardy!), the proposition remains that decision-support systems will exist to supplement but not replace the clinician's reasoning. To be adopted, they must prove to be safe, effective, cost-effective, and convenient in everyday practice.

Diagnostic Checklists

It can be argued that if there is a checklist for placing central venous catheters, there should be a checklist for our most critical procedure: diagnosis. In 2011, Ely et al proposed such a diagnostic checklist.23 The checklist starts with the rudiments of patient assessment—obtaining a medical history, performing a physical examination, and forming a differential diagnosis—but then distinguishes itself from routine practice by inserting two final steps in every encounter: taking a "diagnostic time-out" and embarking on a follow-up plan. The time-out explicitly confronts any shortcomings of a diagnostic encounter by asking a series of reflective questions:

  • "Was I comprehensive?"
  • "Did I consider the inherent flaws of heuristic thinking?"
  • "Was my judgment affected by any other bias?"
  • "Do I need to make the diagnosis now, or can I wait?"
  • "What is the worst-case scenario?"

This checklist encompasses the key elements of metacognition introduced in an earlier section and concludes with a follow-up plan that captures the safety and learning benefits of feedback already discussed.

The general checklist is supplemented with a syndrome-specific checklist, which is a much simpler form of decision support that requires the input of only one data point: the chief complaint. The lead author has compiled checklists for 46 common but diagnostically challenging presentations (eg, tachycardia, ankle pain) and suggests in an accompanying online video that the list could be used in real time with the patient to ensure that common and "do not miss" diagnoses are considered. The authors conclude with a very forthright and thorough discussion of the limitations of their proposal, including the need for evidence to promote the adoption of checklists in clinical settings.

Shifting from Continuing Medical Education to a Learning Community

Increasingly, health care systems are making investments to improve the quality of health care delivery, but within that larger goal, relatively little investment is made in the continuous quality improvement of the diagnostic skills of their practitioners. The traditional reliance on continuing medical education requirements and the passage of time on the job are both inadequate.24 Institutional support for time, workforce, and information technology resources should be devoted to transforming the workplace from a production facility to a learning community that produces.25 Examples include groups that do practice-based inquiry,26 electronic medical record systems that automatically close feedback loops (eg, the clinician who suspects that a murmur is aortic stenosis can receive an e-alert of echocardiographic results when they are available), and allocation of time to use a diagnostic checklist.


Diagnosis is the clinician's most critical procedure, yet it has eluded the degree of scrutiny and systematic improvement achieved for central venous catheters or medication administration. We have every reason—including our professional identity, patient safety, and risk mitigation—to face that challenge. A comprehensive review of diagnostic errors is provided in a supplement to the May 2008 issue of the American Journal of Medicine, and the Diagnostic Errors in Medicine conference, now in its fourth year, continues to convene to address the study and remediation of this important problem. We fully support the call for more research on clinician-level and systems-level interventions to increase diagnostic accuracy.

Disclosure Statement

The author(s) have no conflicts of interest to disclose.


Acknowledgment

Katharine O'Moore-Klopf, ELS, of KOK Edit provided editorial assistance.

References

1.    Newman-Toker DE, Pronovost PJ. Diagnostic errors—the next frontier for patient safety. JAMA 2009 Mar 11;301(10):1060–2.
2.    Shojania KG, Burton EC, McDonald KM, Goldman L. Changes in rates of autopsy-detected diagnostic errors over time: a systematic review. JAMA 2003 Jun 4;289(21):2849–56.
3.    Weeks WB, Foster T, Wallace AE, Stalhandske E. Tort claims analysis in the Veterans Health Administration for quality improvement. J Law Med Ethics 2001 Fall–Winter;29(3–4):335–45.
4.    Norman G. Research in clinical reasoning: past history and current trends. Med Educ 2005 Apr;39(4):418–27.
5.    Dhaliwal G. Clinical decision making: understanding how physicians make a diagnosis. In: Saint S, Drazen J, Solomon C, editors. New England Journal of Medicine: clinical problem solving. New York: McGraw-Hill Professional; 2006. p. 19–29.
6.    Gladwell M. Blink: the power of thinking without thinking. New York: Little, Brown; 2005.
7.    Goel V, Dolan RJ. Explaining modulation of reasoning by belief. Cognition 2003 Feb;87(1):B11–22.
8.    Croskerry P. A universal model of diagnostic reasoning. Acad Med 2009 Aug;84(8):1022–8.
9.    Elstein AS, Schulman LS, Sprafka SA. Medical problem solving: an analysis of clinical reasoning. Cambridge, MA: Harvard University Press; 1978.
10.    Kassirer JP, Wong JB, Kopelman RI. Learning clinical reasoning. 2nd ed. Baltimore, MD: Lippincott Williams & Wilkins; 2009.
11.    Pauker SG, Kassirer JP. The threshold approach to clinical decision making. N Engl J Med 1980 May 15;302(20):1109–17.
12.    Norman G. Dual processing and diagnostic errors. Adv Health Sci Educ Theory Pract 2009 Sep;14 Suppl 1:37–49.
13.    Dhaliwal G. Going with your gut. J Gen Intern Med 2011 Feb;26(2):107–9.
14.    Moulton CA, Regehr G, Mylopoulos M, MacRae HM. Slowing down when you should: a new model of expert judgment. Acad Med 2007 Oct;82(10 Suppl):S109–16.
15.    Schiff GD. Minimizing diagnostic error: the importance of follow-up and feedback. Am J Med 2008 May;121(5 Suppl):S38–42.
16.    Berner ES, Graber ML. Overconfidence as a cause of diagnostic error in medicine. Am J Med 2008 May;121(5 Suppl):S2–23.
17.    Redelmeier DA. Improving patient care. The cognitive psychology of missed diagnoses. Ann Intern Med 2005 Jan 18;142(2):115–20.
18.    Bereiter C, Scardamalia M. Surpassing ourselves: an inquiry into the nature and implications of expertise. Peru, IL: Open Court Publishing; 1993.
19.    Ericsson KA, Charness N, Feltovich PJ, Hoffman RR, editors. The Cambridge handbook of expertise and expert performance. New York: Cambridge University Press; 2006.
20.    Ericsson KA, Krampe RT, Tesch-Römer C. The role of deliberate practice in the acquisition of expert performance. Psychol Rev 1993 Jul;100(3):363–406.
21.    Groopman J. How doctors think. New York: Houghton Mifflin; 2007.
22.    Croskerry P. Cognitive forcing strategies in clinical decisionmaking. Ann Emerg Med 2003 Jan;41(1):110–20.
23.    Ely JW, Graber ML, Croskerry P. Checklists to reduce diagnostic errors. Acad Med 2011 Mar;86(3):307–13.
24.    Davis DA, Thomson MA, Oxman AD, Haynes RB. Changing physician performance. A systematic review of the effect of continuing medical education strategies. JAMA 1995 Sep 6;274(9):700–5.
25.    Frankford DM, Patterson MA, Konrad TR. Transforming practice organizations to foster learning and commitment to medical professionalism. Acad Med 2000 Jul;75(7):708–17.
26.    Sommers LS, Morgan L, Johnson L, Yatabe K. Practice inquiry: clinical uncertainty as a focus for small-group learning and practice improvement. J Gen Intern Med 2007 Feb;22(2):246–52.

