Ernst Strüngmann Forum

 

Better Doctors, Better Patients, Better Decisions

Envisioning Healthcare 2020

Gerd Gigerenzer and Muir Gray, Chairpersons

October 25–30, 2009

Frankfurt am Main, Germany

Program Advisory Committee

Gerd Gigerenzer, Center for Adaptive Behavior and Cognition, Max Planck Institute for Human Development, Berlin, Germany
Sir J. A. Muir Gray, Programme Director, UK National Screening Committee and National Electronic Library for Health, Oxford, United Kingdom
Günther Ollenschläger, Professor of Medicine and Director of the Ärztliches Zentrum für Qualität in der Medizin, Berlin, Germany
Lisa Schwartz, Professor of Medicine and Community and Family Medicine, Dartmouth Medical School; Co-Director of the White River Junction Outcomes Group, Hanover NH, United States
Steven Woloshin, Professor of Medicine and Community and Family Medicine, Dartmouth Medical School, Hanover NH, United States

Goal

To analyze systematically the scope of the problem, its causes, and the ways in which the cognitive, medical, and economic sciences might contribute to improving health literacy.

Justification

In World Brain (1938), H. G. Wells predicted that for an educated citizenship in a modern democracy, statistical thinking would be as indispensable as reading and writing. At the beginning of the 21st century, we have succeeded in teaching almost everyone to read and write, but not to think statistically – to understand risks and uncertainties in our technological world. This Forum addresses the problem of literacy in health care, from how research evidence is generated to how it is modified by the media and understood by doctors and patients who try to make sense of health statistics.

Scientific and public policy studies on health have paid relatively little attention to people's health literacy, while technological, bureaucratic, and economic questions continue to dominate health policies. The few existing studies indicate that many patients, physicians, and political decision makers lack the skills to evaluate the pros and cons of new technologies, and do not even seem to notice or care. This state of affairs leads to unnecessary physical and emotional suffering of patients, promotes inefficiency in health care, and wastes money through overtreatment and overmedication. The goal of the proposed Forum is a systematic analysis of the scope of the problem, its causes, and the possible contribution of the cognitive, medical, and economic sciences to improving health literacy. We illustrate the problem with three case studies from the UK, Germany, and the US.

Scared Patients. Since the introduction of the contraceptive pill in the 1960s, British women have gone through several "pill scares." In the mid-90s, the press reported a warning by the UK Committee on the Safety of Medicines that third-generation oral contraceptive pills containing desogestrel or gestodene increased the risk of venous thromboembolism twofold, that is, by 100%. Many women reacted with panic and stopped taking the pill, which led to unwanted pregnancies (Jain et al., 1998) and an estimated 13,000 additional abortions. The study cited in the news had actually shown that out of every 7,000 women who took the second-generation pill, 1 had a thromboembolism, and among women who took the third-generation pill, this number increased from 1 to 2. The difference between a relative risk increase (100%) and an absolute risk increase (1 in 7,000) was not made clear to these women. Both citizens and the pharmaceutical industry suffered, while the journalists profited from a front-page story.
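The arithmetic behind the scare is simple enough to spell out. Below is a minimal sketch in Python; the numbers are those cited above, and the variable names are ours, chosen only for illustration.

```python
# Risk of venous thromboembolism, as reported in the study cited above:
# 1 in 7,000 on the second-generation pill, 2 in 7,000 on the third-generation pill.
baseline_risk = 1 / 7000   # second-generation pill
new_risk = 2 / 7000        # third-generation pill

relative_increase = (new_risk - baseline_risk) / baseline_risk  # 1.0, i.e., "100%"
absolute_increase = new_risk - baseline_risk                    # 1 in 7,000

print(f"Relative risk increase: {relative_increase:.0%}")                # 100%
print(f"Absolute risk increase: 1 in {round(1 / absolute_increase):,}")  # 1 in 7,000
```

Both statements describe exactly the same data; only the first sounds alarming.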

Innumerate Physicians. In the US and Europe, women between 50 and 70 are encouraged to participate in screening for breast cancer. Mammography is not a highly reliable test, so it is important that physicians inform participants that out of every 10 women who test positive, only about 1 actually has cancer. A positive mammogram is like an activated car alarm – usually a false alarm. Yet most physicians do not seem to know the relevant medical studies. According to studies with hundreds of gynecologists, radiologists, and other specialists, the majority believe that out of every 10 women who test positive, 8 or 9 have cancer (Gigerenzer et al., 2008; Hoffrage et al., 2000). Consider what unnecessary fear these physicians cause in patients with a positive test. Even when physicians were given the relevant numbers, such as the prevalence of breast cancer (1%), the test's sensitivity (90%), and its false positive rate (9%), most were confused by the percentages. To this day, effective training in statistical thinking in medical schools barely exists. Yet it is indispensable, given that the division of labor, time pressure, and the absence of statistical records make learning from experience difficult in medicine.
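The figure of roughly 1 in 10 follows directly from the numbers above. A minimal sketch in Python using natural frequencies, assuming the prevalence, sensitivity, and false positive rate given in the text (the variable names are ours):

```python
# Positive predictive value of a positive mammogram, from the numbers in the text:
# prevalence 1%, sensitivity 90%, false positive rate 9%.
# Natural-frequency framing: imagine 1,000 women aged 50-70 being screened.
women = 1000
with_cancer = women * 0.01               # 10 women have breast cancer
true_positives = with_cancer * 0.90      # 9 of them test positive
without_cancer = women - with_cancer     # 990 women do not have cancer
false_positives = without_cancer * 0.09  # about 89 of them test positive anyway

ppv = true_positives / (true_positives + false_positives)
print(f"Of roughly {true_positives + false_positives:.0f} positive tests, "
      f"only {true_positives:.0f} indicate cancer (PPV = {ppv:.0%}).")
```

Expressed as natural frequencies rather than conditional probabilities, the counterintuitive answer of roughly 1 in 10 becomes almost self-evident.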

Trust versus Evidence. Lack of statistical thinking is not the only cause of health illiteracy: trust and paternalism are another. Berg et al. (2008) studied the decisions for or against PSA screening in a group that promotes rational thinking and cost-benefit analyses: neoclassical economists. Medical institutions such as the National Cancer Institute and the U.S. Preventive Services Task Force explicitly recommend that instead of automatically undergoing PSA screening, men should weigh the costs and benefits individually in order to decide whether or not to take the test. The reason is that the test's benefits are not proven, but its harms are (such as incontinence and impotence following surgery after a positive test). Yet 95% of the economists said that they had not consulted any medical source, and two thirds answered that they had not weighed the pros and cons but had simply followed the recommendation of their doctor (or wife). The emotional heuristic "trust your doctor" seems to dominate the decisions of even those trained to think statistically.

These three cases illustrate the phenomenon of health illiteracy and some of its causes. In health care, the inability to understand numbers (innumeracy) is generally presented as a problem of uneducated or poor patients. That educated patients also rarely ask questions about the benefits and harms of treatment is less well known, while the lack of risk literacy among physicians has gone mostly unnoticed and has yet to be understood in its full implications. The proposed Forum will discuss the psychological, legal, and institutional aspects of health literacy. Along with trust, the illusion of certainty (for instance, every second German citizen believes that a screening test result is certain) inhibits statistical thinking. German law states that patients have a right to information about their state of health, whether full or limited, written or oral, and assumes that physicians understand the risks, thereby neglecting the discrepancy between normativity (what the law assumes) and normality (many physicians' innumeracy). Health institutions often see their mission as fostering compliance rather than comprehension, publishing information brochures that omit medical results if they conflict with that goal. Under these circumstances, the ideals of informed consent and shared decision making cannot be achieved.

This Forum will focus on health literacy in preventive medicine; that is, in situations where patients have no symptoms or pain that might conflict with deliberate thinking. We will approach the questions from different theoretical perspectives, including health economics, game theoretical analysis, the psychology of decision making and risk, an institutional analysis of the conditions under which physicians work and feel liable to malpractice suits, and an analysis of the emotional relationship between physician and patient as well as the degree to which it interferes with patients' active search for information.

Group 1: How can useful research evidence for clinicians and patients be generated?

Facts are generated rather than simply found. The goal of Group 1 is to analyze how evidence is generated up to the point at which it is reported in medical journals, to what extent this process results in useful evidence for clinicians and patients, and how the process can be improved. The first background paper sets the research agenda, formulates the questions being asked, and outlines the role of funding in the choices involved. The second background paper addresses the issue of reporting results comprehensively. What standards exist for publishing results, and how can these be improved? The final background paper examines how results are reported and how the framing of results fosters (mis)understanding. For instance, journals often allow the benefits of treatments to be reported as relative risk reductions (big numbers), whereas harms are reported as absolute risk increases (small numbers). How can paying subscribers induce journal publishers to present evidence in a more transparent and less biased way?
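To make the mismatched framing concrete, the following sketch uses hypothetical trial numbers (not drawn from any particular study) to show how the same kind of data can be made to look large when framed as a benefit and small when framed as a harm:

```python
# Hypothetical numbers, for illustration only: a treatment's benefit framed as a
# relative risk reduction versus its harm framed as an absolute risk increase.
deaths_without = 5 / 1000   # mortality without treatment
deaths_with = 4 / 1000      # mortality with treatment
harms_without = 1 / 1000    # serious side effects without treatment
harms_with = 2 / 1000       # serious side effects with treatment

relative_benefit = (deaths_without - deaths_with) / deaths_without  # 0.20
absolute_harm = harms_with - harms_without                          # 0.001

print(f"Benefit, framed as a relative risk reduction: {relative_benefit:.0%}")            # 20%
print(f"Harm, framed as an absolute risk increase: {absolute_harm * 1000:.0f} in 1,000")  # 1 in 1,000
```

In absolute terms the benefit here is also 1 in 1,000; reporting benefits and harms in the same format would make the trade-off transparent.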


Group 2: What is the extent of illiteracy in evidence-based healthcare, and what can be done to reduce it?

Group 2 picks up where Group 1 left off, studying how published reports are actually understood by those who disseminate them and what message reaches the final addressees, the doctor and the patient. At issue is a striking phenomenon: collective statistical illiteracy in health care. Three background papers address the extent, causes, and remedies of illiteracy in health care among citizens, journalists, and health care providers, respectively. The fourth background paper concerns the ideal of shared decision making. In which countries do patients have sufficient medical knowledge to make informed decisions about prevention and screening? In the 1970s, psychologists argued that the limited ability to think statistically was due to the cognitive limitations of our brains, whereas beginning in the mid-90s, it was demonstrated that many apparently hard-wired errors are caused by misleading external representations of risk (such as relative risks and conditional probabilities) and that insight can be achieved with transparent representations. In addition, it is becoming clear that emotional factors – the illusion of certainty and patients' trust in their doctors – conflict with evidence-based thinking. Possible institutional causes include the general lack of statistical education in our societies, the specific lack of training in medical school and in physicians' further training, and the conflicts of interest some medical institutions face between their own goals and transparent patient information. The goal of Group 2 is to analyze the psychological and institutional causes of illiteracy in health care and to promote an understanding of their interplay.


Group 3: How can the least biased evidence be delivered?

Major health institutions are subject to forms of censorship in their dissemination of evidence, owing to conflicting political and industrial interests. For instance, an analysis of leaflets handed out by representatives of the pharmaceutical industry to German physicians showed that in 92% of cases the reported facts were incorrect or could not be verified. The goal of Group 3 is to explore the opportunities for improving the flow and quality of evidence, given that understanding the causes of illiteracy, as explored in Groups 1 and 2, does not guarantee a solution. For instance, if statistical illiteracy is caused by limitations in our cognitive capacity, there is little to be done. If, in contrast, it is caused by confusing representations of risk, one solution would be for brochures, journals, and physicians to use transparent representations instead. The first background paper concerns the role of patient group lobbying on the Internet and via other media. Do these efforts help communicate evidence in more comprehensive and transparent ways? Do patient groups in different countries work with systematically different approaches? The second paper addresses direct-to-consumer drug advertising. How could the bias in these messages be decreased in a competitive market? The third paper concerns the vision of best treatment and the worldwide dissemination of available evidence. Missed opportunities are the topic of the fourth background paper, which deals with how transparent evidence could be conveyed in the everyday business of writing letters to patients, lab reports, or prescriptions. The fifth and final background paper compares healthcare systems in different countries in terms of how they actually help patients and health care providers make intelligent decisions about preventive services. Representative surveys of citizens in European countries, for instance, showed that the public systematically overestimates the benefits of PSA and mammography screening. Who puts these false beliefs into the minds of the public, and can anything be done to correct them? If heuristics such as "trust your doctor" or "more treatment is always better" prevail over evidence-based decisions on health issues, the policy implications may involve revising basic principles in society, including a revision of our educational systems so that statistical thinking is taught beginning in elementary school. Can statistical literacy in health and trust be reconciled?


Group 4: How will health care professionals and patients work together in 2020?

At the 2007 International Shared Decision Making Conference, the question was raised whether physicians and patients outside the conference would ever want informed patients. Few physicians seem to prefer patients who ask questions, and few patients want the responsibility. Group 4 will envision what health care might look like some 10 years from now. The first background paper examines the potential for changing healthcare systems. A second paper investigates the potential for change in professional schools, asking: Can medical schools be changed? In which healthcare systems is the answer more likely to be in the affirmative, and why? The third background paper looks at the potential for change in the mentality of the patient. Will more than 5% of patients ever want to be responsible for making decisions? What happens to the placebo effect when patients become more informed? What legal and financial incentive structures can replace defensive decision making with shared decision making? And finally, what would a modern society – in which statistical thinking is as natural as reading and writing – look like?

References

Berg, N., Biele, G., & Gigerenzer, G. (2008). Economists surveyed about PSA: Consistency versus accuracy of belief. Unpublished manuscript, University of Texas, Dallas.

Gigerenzer, G., Gaissmaier, W., Kurz-Milcke, E., Schwartz, L. M., & Woloshin, S. W. (2008). Helping doctors and patients make sense of health statistics. Psychological Science in the Public Interest, 8, 53–96.

Hoffrage, U., Lindsey, S., Hertwig, R., & Gigerenzer, G. (2000). Communicating statistical information. Science, 290, 2261–2262.

Jain, B. P., McQuay, H., & Moore, A. (1998). Number needed to treat and relative risk reduction. Annals of Internal Medicine, 128, 72–73.
