
Medical Decision Making – Where Intuition and Evidence Meet

 

by Dr. Niklas Keller

Simply Rational GmbH, Berlin, Germany

We humans are not very good at making decisions. At least that is the classical view. We oversimplify complex matters, are constantly overwhelmed by too much information, and suffer from selective attention.1 Our intuitive decisions are subject to systematic and predictable biases causing deviations from the rationally optimal choice.2,3 These kinds of cognitive traps can influence how safety and medical risk are perceived and how medical decisions are made.

Everyone has biases. One much-studied example of a bias that has been shown to significantly affect medical decision-making is base rate neglect.4,5 For example, when asked about the likelihood of a disease given a positive test result (the positive predictive value), physicians often equate this with the sensitivity of the test and tend not to consider the prevalence of the disease as a factor. Consequently, the likelihood that a patient has the disease given a positive test result is often overestimated.6 This effect is especially pronounced in screening, where populations with very low disease prevalence are tested. How can evidence-based decision making be possible when the human mind finds it so difficult to convert this evidence into better decisions?

Base Rate Neglect

Forgetting the underlying frequency (base rate) of an event, e.g., the prevalence of a disease, when making decisions in the light of new individuating evidence, like the results of a medical test.4,5


The classical view of human decision making is largely based on the 'Heuristics & Biases' research program founded by Daniel Kahneman and Amos Tversky in the late 1960s.7 This program locates the sources of sub-optimal judgement and decision making largely in the minds of decision makers and, critically, considers biases to be hardwired in human brains.8 Consequently, Kahneman argued that humans would not be able to 'de-bias' themselves: like visual illusions, knowing about a bias does not protect one from falling into its trap.

Decision making is not one size fits all

In the last thirty years, a more nuanced and optimistic picture has emerged: decision making is best described as a pair of scissors, where one blade is the structure of the task environment (the decision ecology) and the other blade is the strategy of the decision maker.9 Only when the two are aligned can the scissors 'cut' the problem effectively. There is no 'universally rational' approach to decision-making. Instead, any strategy is only as good as its adaptation to the specific decisional context, i.e., its ecological rationality.10,11 Consequently, decision making can be improved either by increasing the repertoire of strategies and competencies available to decision makers or by changing the decision environment. For example, the reason the Romans could not multiply or divide in their heads was not that they had a 'multiplication bias', but that the Roman numeral system (i.e., the information environment) does not lend itself to such operations.

The same applies to base rate neglect: quite contrary to being hard-wired, as proponents of the Heuristics & Biases approach assume, base rate neglect practically disappears when the critical information (sensitivity, specificity, and prevalence) is presented in the form of natural frequency trees6 or when people are taught to convert conditional probabilities (such as sensitivity and specificity) into natural frequencies in their heads.12 Such skills are now taught at medical schools in Germany (e.g., Charité) and across the world (e.g., Oxford) and have improved probabilistic test interpretation as a consequence.6,13
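To make the arithmetic concrete, the short sketch below (an illustration added here, not drawn from the article; the test characteristics and the helper name ppv_from_natural_frequencies are freely chosen) computes the positive predictive value by counting cases in a hypothetical population of 10,000 people, just as a natural frequency tree does.

```python
# Illustrative sketch with hypothetical numbers: converting test characteristics
# into natural frequencies to estimate the positive predictive value (PPV).

def ppv_from_natural_frequencies(prevalence, sensitivity, specificity, population=10_000):
    """Estimate the PPV by counting people in a hypothetical population."""
    diseased = population * prevalence              # e.g. 100 of 10,000 people have the disease
    healthy = population - diseased                 # the remaining 9,900 do not
    true_positives = diseased * sensitivity         # diseased people who test positive
    false_positives = healthy * (1 - specificity)   # healthy people who nevertheless test positive
    return true_positives / (true_positives + false_positives)

# Hypothetical screening test: 1% prevalence, 90% sensitivity, 91% specificity.
print(round(ppv_from_natural_frequencies(0.01, 0.90, 0.91), 2))   # -> 0.09
# Only about 9 of every 100 positive results indicate disease, far below the
# roughly 90% that equating the PPV with the sensitivity would suggest.
```

Working with counts of people rather than conditional probabilities is precisely what makes the prevalence hard to overlook.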

This research has also shown that heuristics, the cognitive shortcuts or simple rules of thumb that people use to make decisions, do not only, or even mostly, lead to biases and sub-optimal decisions. Instead, many real-world studies outside psychologists' laboratories have demonstrated that intuitive heuristics can outperform far more complex and information-intensive strategies, even the latest AI technologies.14,15,16,17 The discovery of these so-called "less-is-more" effects (that one can make better decisions with less information and complexity) is seen as one of the most important findings in decision science of the last 30 years.18

How to make a good decision

Gerd Gigerenzer, former Director of the Max Planck Institute for Human Development in Berlin, is the main proponent of the 'Smart Heuristics' approach. The key finding is this: if the problem is well understood and the environment is stable, then the problem can be quantified well, and more data and more complexity will lead to better outcomes. In this situation, simplifying heuristics will always be second-best. However, if the understanding of the problem is incomplete, the environment is unstable and dynamic, or good information is simply not available, then simplicity can lead to better outcomes. Here, the simple heuristics that underlie human intuitions can outperform AI because they focus on only the most relevant pieces of information and ignore the rest. This makes them less susceptible to the fluctuations of an uncertain world.20
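The simulation sketch below gives the flavour of such a 'less-is-more' effect. It is purely illustrative; none of the parameters, cue structures, or function names (simulate_once and the like) are taken from the article or the cited studies. A tallying heuristic that gives every cue the same weight is pitted against an ordinary least squares model that estimates a separate weight for each cue from a small, noisy training sample.

```python
# Illustrative 'less-is-more' comparison: equal-weight tallying vs. regression
# when training data are scarce and noisy. All settings are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_cues, n_train, n_test, noise = 5, 15, 500, 2.0

def simulate_once():
    # Hypothetical environment: five cues, all pointing in the same direction.
    true_w = rng.uniform(0.5, 1.5, n_cues)
    X_train = rng.normal(size=(n_train, n_cues))
    X_test = rng.normal(size=(n_test, n_cues))
    y_train = X_train @ true_w + rng.normal(scale=noise, size=n_train)
    y_test = X_test @ true_w + rng.normal(scale=noise, size=n_test)

    # Complex strategy: ordinary least squares, one weight per cue.
    w_ols, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    mse_ols = np.mean((X_test @ w_ols - y_test) ** 2)

    # Simple strategy: tallying, i.e. add the cues up with equal weights
    # and fit only a single common scaling factor.
    s_train, s_test = X_train.sum(axis=1), X_test.sum(axis=1)
    scale = (y_train @ s_train) / (s_train @ s_train)
    mse_tally = np.mean((scale * s_test - y_test) ** 2)
    return mse_ols, mse_tally

results = np.array([simulate_once() for _ in range(200)])
print("mean out-of-sample MSE  OLS: %.2f   tallying: %.2f" % tuple(results.mean(axis=0)))
# With scarce, noisy data the equal-weight rule is typically at least as accurate:
# ignoring the individual cue weights protects it from overfitting.
```

The design choice mirrors the argument above: the heuristic ignores information (the individual cue weights) and is therefore more robust when the environment cannot be estimated reliably.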

The 'Smart Heuristics' approach

"If risks are unknown, good decisions also require intuition and smart rules of thumb"19


In an age of Evidence-Based Medicine (EBM), medical professionals are caught somewhere in the middle. On the one hand, new evidence is generated at an increasing rate, and for many situations high-quality data exist to inform medical decision-making. The data on complication rates for different material components in total hip arthroplasty (THA) presented in this issue are an example: taking the total number of revisions across all side-effects and patient types, the data suggest that ceramic components offer better outcomes than their metal counterparts. This is what Gigerenzer would refer to as "risks that are known".

The important thing with known risks is to communicate them in ways that are transparent and understandable to decision makers, both experts and laypersons.21 The fact-box format shown in the figure is such a way of transparently presenting the benefits and harms of medical treatments and represents current best practice in risk communication.22,23 On the other hand, despite the available evidence, medical decision-making continues to be beset by a myriad of "risks that are unknown". To what extent is the scientific evidence applicable to the individual patient? What about new technologies and pharmaceutical interventions for which no, or comparatively little, evidence yet exists? In such situations, clinical intuition will continue to be a necessary and important part of medical decision-making: necessary, because there simply are no data available for these questions, and important, because research has shown that under these conditions human expert intuition does an excellent job and will not easily be replaced.
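As a purely illustrative sketch of the format, a fact box can be as simple as a table of absolute frequencies per 1,000 patients for each option. The function name, the outcome labels, and all counts below are placeholders; they do not reproduce the figure or any registry data.

```python
# Minimal sketch of a text-only fact box. All labels and counts are hypothetical.

def print_fact_box(title, rows, option_a, option_b):
    """Render outcomes as absolute frequencies per 1,000 patients for two options."""
    header = "Outcome (per 1,000 patients)"
    width = max(len(header), *(len(label) for label, _, _ in rows))
    print(title)
    print(f"{header:<{width}}  {option_a:>12}  {option_b:>12}")
    for label, a, b in rows:
        print(f"{label:<{width}}  {a:>12}  {b:>12}")

print_fact_box(
    "Hypothetical fact box: component A vs. component B",
    [
        ("Revisions within 10 years", 25, 40),  # placeholder counts
        ("Complication X",            12, 12),  # placeholder counts
    ],
    "Component A", "Component B",
)
```

The essential point, in line with the risk-communication literature cited above, is to report absolute frequencies for a fixed reference population rather than relative risks or conditional probabilities.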

The (un)reliability of intuition?

It is also important to note, however, that human intuition is only as good as the information environment in which it has developed. If certain types of information and feedback about the outcomes of a decision maker's actions systematically do not reach them, their intuition will be skewed. The asymmetric feedback about false positives/over-diagnosis vs. false negatives/under-diagnosis is one of the factors that pushes many medical professionals' intuition towards "better treat too much than too little", termed intervention bias.24 Note that, contrary to the traditional view, this bias is not hardwired into human minds but is rather the result of an adaptive mind attuned to a biased information environment. A similar effect can be observed if the side-effects of treatments systematically differ in how long they take to develop and whether they surface within the same specialty as the one responsible for the original intervention.

The data used for this analysis were obtained from the National Joint Registry ("NJR"), part of the Healthcare Quality Improvement Partnership ("HQIP"). HQIP, the NJR and/or its contractor, NEC Software Solutions (UK) Limited ("NEC") take no responsibility (except as prohibited by law) for the accuracy, currency, reliability and correctness of any data used or referred to in this report, nor for the accuracy, currency, reliability and correctness of links or references to other information sources and disclaims all warranties in relation to such data, links and references to the maximum extent permitted by legislation including any duty of care to third party readers of the data analysis.

The summary implant reports are available upon request: a.porporati@ceramtec.de.

 

The 'hot-stove' effect: perceived unsafety

Recent research further indicates that human beings react very differently to information presented in written format ('Decisions from Description', such as a fact-box) than to information they have personally experienced ('Decisions from Experience', such as complications directly experienced or even anecdotal evidence from colleagues25). If human beings experience a rare event, they are likely to heavily overweight its likelihood, no matter the descriptive evidence presented. This is known as the 'hot-stove' effect, i.e., the avoidance of actions for which one has experienced negative outcomes even only once in the past. It is very difficult to overcome such effects with mere descriptive information. However, formats have been developed that allow professionals to experience the frequency of events in simulated environments. Such interventions can potentially overcome hot-stove and other experience-based effects and thus positively impact medical decision-making in the direction of the best available evidence.26 Smart communication of the available evidence therefore cannot always rely on description alone; in some cases, the evidence must be experienced, either through exchanges with colleagues or in simulated environments that reflect the true underlying statistical frequencies of events.
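To illustrate the mechanism, the toy simulation below (in the spirit of the description-experience literature; the agent, its payoffs, and the function run_agent are invented for this sketch) shows how an adaptive decision maker who stops sampling an option after a single bad experience ends up with a lastingly pessimistic estimate of it.

```python
# Toy 'hot-stove' simulation: once a rare loss is experienced, the option is
# avoided, so the pessimistic estimate is never corrected. Purely illustrative.
import random

random.seed(1)

def run_agent(trials=200, loss_prob=0.05):
    est_risky, est_safe = 0.0, 0.0      # running payoff estimates
    for _ in range(trials):
        # Greedily choose the option currently believed to be better.
        if est_risky >= est_safe:
            # Risky option: usually +2.0, rarely a large loss of -10.0.
            payoff = -10.0 if random.random() < loss_prob else 2.0
            est_risky += 0.2 * (payoff - est_risky)   # learn only from what is experienced
        else:
            payoff = 1.0                              # safe option: always +1.0
            est_safe += 0.2 * (payoff - est_safe)
    return est_risky

true_ev = 0.95 * 2.0 - 0.05 * 10.0                    # = 1.40, better than the safe option
avg_estimate = sum(run_agent() for _ in range(1000)) / 1000
print(f"average final estimate of the risky option: {avg_estimate:.2f} "
      f"(true expected payoff: {true_ev:.2f})")
# A single experienced loss pushes the estimate below the safe option's value;
# sampling then stops, and the risky option keeps looking worse than it actually is.
```

Simulated-experience formats work against exactly this dynamic: they let decision makers 'sample' rare events at their true frequency without bearing the real-world cost.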


Integrating medical evidence and clinical intuition

Even in today's world of Evidence-Based Medicine, good decision making will continue to require both medical evidence and the clinical intuition of experienced physicians. Generally, it is difficult for humans to switch off their intuition completely. This makes it all the more important that possible biases and asymmetries inherent in learning and information environments are critically reflected upon. Otherwise, those biases will be mirrored by the human mind, often unnoticed, and may negatively impact clinical intuitions and the decisions based on them.

References

  1. Kahneman D. Thinking, Fast and Slow. Farrar, Straus and Giroux; 2011.
  2. Wilke A, Mata R. Cognitive bias. In: Ramachandran VS, ed. The Encyclopedia of Human Behavior, Vol. 1. Academic Press; 2012:531-535.
  3. Gilovich T, Griffin D, Kahneman D. Heuristics and biases: The psychology of intuitive judgment. Cambridge University Press; 2002.
  4. Bar-Hillel M. The base-rate fallacy in probability judgments. Acta Psychol. 1980;44(3):211-233. doi:10.1016/0001-6918(80)90046-3.
  5. Welsh M, Navarro D. Seeing is believing: Priors, trust, and base rate neglect. Organ Behav Hum Decis Process. 2012;119(1):1-14. doi:10.1016/j.obhdp.2012.04.001
  6. Gigerenzer G, Gaissmaier W, Kurz-Milcke E, Schwartz LM, Woloshin S. Helping doctors and patients make sense of health statistics. Psychol Sci Public Interest. 2007;8(2):53-96. doi:10.1111/j.1539-6053.2008.00033.x.
  7. Kahneman D, Tversky A. On the psychology of prediction. Psychol Rev. 1973;80(4):237-251. doi:10.1037/h0034747.
  8. Ariely D. Predictably Irrational: The Hidden Forces That Shape Our Decisions. HarperCollins Publishers; 2008.
  9. Simon HA. Invariants of human behavior. Annu Rev Psychol. 1990;41:1-19. doi:10.1146/annurev.ps.41.020190.000245.
  10. Goldstein DG, Gigerenzer G. Models of ecological rationality: The recognition heuristic. Psychol Rev. 2002;109(1):75-90. doi:10.1037/0033-295x.109.1.75.
  11. Todd PM, Gigerenzer G. Ecological Rationality: Intelligence in the World. Oxford University Press; 2012. doi:10.1093/acprof:oso/9780195315448.001.0001.
  12. Gigerenzer G, Hoffrage U. How to improve Bayesian reasoning without instruction: Frequency formats. Psychol Rev. 1995;102(4):684-704. doi:10.1037/0033-295X.102.4.684.
  13. Jenny MA, Keller N, Gigerenzer G. Assessing minimal medical statistical literacy using the Quick Risk Test: a prospective observational study in Germany. BMJ Open. 2018;8(8):e020847. doi:10.1136/bmjopen-2017-020847.
  14. DeMiguel V, Garlappi L, Uppal R. Optimal versus naive diversification: How inefficient is the 1/N portfolio strategy? Rev Financ Stud. 2009;22:1915-1953. doi:10.1093/rfs/hhm075.
  15. Wübben M, von Wangenheim F. Instant customer base analysis: Managerial heuristics often get it right. J Mark. 2008;72:82-93.
  16. Gigerenzer G, Goldstein DG. Reasoning the fast and frugal way: Models of bounded rationality. Psychol Rev. 1996;103(4):650-669. doi:10.1037/0033-295x.103.4.650.
  17. Gigerenzer G, Gaissmaier W. Heuristic decision making. Annu Rev Psychol. 2011;62:451-482. doi:10.1146/annurev-psych-120709-145346.
  18. Brighton H, Gigerenzer G. Homo heuristicus: Less-is-more effects in adaptive cognition. Malays J Med Sci. 2012;19(4):6-16.
  19. Gigerenzer G. Risk savvy: How to make good decisions. Viking Publishers; 2014.
  20. Gigerenzer G, Brighton H. Homo heuristicus: Why biased minds make better inferences. Top Cogn Sci. 2009;1(1):107-143. doi:10.1111/j.1756-8765.2008.01006.x.
  21. Gigerenzer G, Muir-Gray JA. Better doctors, better patients, better decisions: Envisioning health care 2020. The MIT Press; 2011.
  22. Brick C, McDowell M, Freeman ALJ. Risk communication in tables versus text: A registered report randomized trial on 'fact boxes'. R Soc Open Sci. 2020;7(3):190876. doi:10.1098/rsos.190876.
  23. McDowell M, Gigerenzer G, Wegwarth O, Rebitschek FG. Effect of tabular and icon fact box formats on comprehension of benefits and harms of prostate cancer screening: A randomized trial. Med Decis Making. 2019;39(1):41-56. doi:10.1177/0272989X18818166.
  24. Foy AJ, Filippone EJ. The case for intervention bias in the practice of medicine. Yale J Biol Med. 2013;86(2):271-280.
  25. Hertwig R, Wulff DU. A description-experience framework of the psychology of risk. Perspect Psychol Sci. 2022;17(3):631-651. doi:10.1177/17456916211026896.
  26. Wegwarth O, Ludwig WD, Spies C, Schulte E, Hertwig R. The role of simulated-experience and descriptive formats on perceiving risks of strong opioids: A randomized controlled trial with chronic noncancer pain patients. Patient Educ Couns. 2022;105(6):1571-1580. doi:10.1016/j.pec.2021.10.002.