“Because rolling dice, asking for divine intervention and taking wild stabs at the problem don’t work”: The use of evidence in humanitarian response
Authors: Dell Saulnier, Claire Allen, Anneli Eriksson, Ben Heaven Taylor
Generating and using scientific evidence in humanitarian contexts is not easy. High-quality research evidence requires foresight, time, money, human resources, collaboration and buy-in from numerous actors, including participants, operational agencies, funders, donors, and researchers. It is also well known that there are many gaps in evidence across humanitarian practice (Blanchet et al 2017; Clarke et al 2014) and, even if available evidence is applicable across different contexts or settings, there is no guarantee that it will be noticed or used. Too often, research findings are presented only to others in the research community (via academic journals), are complex to interpret, or are inaccessible to the people who might benefit from them. Given this uncertainty about the research process, the findings, and how to interpret the results, it is no surprise that research evidence is often neglected when decisions are made about humanitarian response (Ager et al 2014). There is a double gap: a gap in evidence and a gap in the use of evidence even when it does exist.
Yet, when robust evidence is available, humanitarian programs that build on it are likely to do better than those that don’t. Outcomes can improve, programs can be more effective, and funds can be spent more wisely on interventions that have been shown to work. Research should inform the decision makers who design new interventions and programs, which in turn should generate data for evaluation and analysis, further strengthening impact. In short, using evidence often means getting plenty of “bang for your buck”, which raises two questions: why do some decision makers choose to use evidence while others don’t? And how can this divide be bridged?
We at Karolinska Institutet and Evidence Aid conducted a short survey to find out more. We wanted to see how, when, and why decision makers in humanitarian response use scientific, peer-reviewed evidence to make decisions. We defined evidence as “information from research studies, done for the purpose of answering a question and in a systematic and transparent way, that often includes examination of its quality by experts who were not involved in the research” and, after sharing the survey across our respective networks, received 47 full replies, which are summarized here.
Unsurprisingly, decision makers told us that they wanted to use evidence that was contextually relevant and informed by field experience. They used evidence to maximize the impact of programs and aid, to reassure decision makers and stakeholders that good decisions were being made, and to ensure that beneficiaries gained as much as possible from program content while being protected from poor-quality decisions or programs. However, although nearly all respondents felt able to assess the quality of the evidence they used themselves, they often relied on their trust in peers, experts, colleagues, and organizations to help them decide whether information counted as evidence or not.
Our survey builds on earlier work to show that researchers still need to make greater efforts to summarize their findings, explain the purpose of their research, and show how the methods can be used and the findings applied. Research findings need to be disseminated to a wider audience of agencies and decision makers, via organizations like Evidence Aid. Research summaries need to be translated into the languages of users and provide access to the original research, ideally free at the point of use. There needs to be more and better interaction between research users and research generators, so that studies are designed that decision makers will judge relevant and useful. The various actors need to work together to guide research agendas and research funding. The right questions will then be asked, at the right time and with the right contextual relevance, and the right resources will be available.
Use of scientific evidence to inform programming and practice should be standard in the humanitarian sector. Our survey suggests that decision makers are aware of the benefits and challenges of using evidence in humanitarian response, but that more remains to be done to overcome the paucity of robust evidence and to improve access to the evidence that does exist. We believe that researchers, agencies and organizations need to join together to determine the best path forward, to pose the question “What can we do for you?” and to provide evidence to answer it.
References
Blanchet K, et al. Evidence on public health interventions in humanitarian crises. Lancet 2017;390:2287-96.
Clarke M, et al. What evidence is available and what is required, in humanitarian assistance? 3ie Scoping Paper 1. New Delhi: International Initiative for Impact Evaluation (3ie); 2014.
Ager A, et al. Strengthening the evidence base for health programming in humanitarian crises. Science 2014;345:1290-2.
About the authors:
Dell Saulnier is a PhD student in public health sciences. Her research at Karolinska Institutet is on health systems in disasters, focusing on health needs and resilience. She has a background in epidemiology, clinical research, and global health.
Anneli Eriksson is a registered nurse specialized in anesthesia care, with a Master’s degree in International Health. She currently works as a project coordinator at Karolinska Institutet with a research team focusing on health care in disasters. Anneli coordinates research support and develops and coordinates courses in international disaster medicine and health response. In 1995 Anneli started her work with Médecins Sans Frontières (MSF) and has since held several positions in field missions in various countries, as well as at office level. Most recently, in 2018, she worked in the Democratic Republic of the Congo during the ongoing Ebola outbreak in North Kivu.
Claire Allen has been the Operations Manager of Evidence Aid since 2011. She has worked in all aspects of managing the organization, with a particular focus on relationship and knowledge management.