Annie Vincent, Carolyn Huang, Tanvi Lal
According to current estimates, almost 765 million people are affected by food insecurity worldwide. Chronic vulnerability, hunger, and water insecurity have been compounded by climate change, humanitarian crises, and COVID-induced shocks and pressures. USAID, through the Bureau for Resilience and Food Security (RFS), works through dedicated centers to improve program impacts on these critical issues and accelerate progress on building resilient communities and countries. To strengthen this work, 3ie has partnered with the Bureau on the RFS Evidence Aggregation for Programmatic Approaches Project (REAPER), in collaboration with partners at the Massachusetts Institute of Technology (MIT) and the University of Notre Dame. Together, we are exploring ways to automate evidence screening and accelerate the production of evidence gap maps to support the Bureau’s strategic and programmatic approaches.
To understand this project better, 3ie’s Annie Vincent, Carolyn Huang, and Tanvi Lal asked key partners and collaborators across the partner organizations to share insights.
Note: Since these interviews were conducted, the conflict and humanitarian situation in Ukraine has created ripple effects on global agricultural supply chains and markets. These conversations have been lightly edited for clarity.
1. What was the motivation behind this project?
Chris Hillbruner (Division Chief, Analysis and Learning – RFS): We are interested in finding ways to further support the practice of evidence-based decision-making. One of the first steps to achieving this is to map the evidence that underpins the technical approaches that we are promoting. We have done some very focused evidence mapping efforts in the past, but we lacked a Bureau-wide mapping that uses a common approach and presentation. The REAPER activity will set a foundation for the Bureau’s evidence work, allowing us to look across the current state of the evidence and make a variety of decisions about how to better drive our technical support and investments with evidence.
2. What will the maps focus on?
Carolyn Huang (Project lead, 3ie): We are working on four maps: on agriculture; nutrition; resilience; and water, sanitation, and hygiene (WASH). While 3ie has previously worked on EGMs in these sectors, we are developing the first systematic evidence gap map of resilience effectiveness studies. This map will examine multi-level interventions that address the capacity to mitigate, adapt to, and recover from shocks and stressors. This evidence base sits at the intersection of humanitarian and development work and offers a critical, multidimensional understanding of wellbeing. The WASH systematic map is also the first of its kind; it essentially maps outcomes to outcomes.
3. What is different about the WASH map?
Sridevi Prasad (Senior Research Associate, 3ie): We discovered that the majority of WASH research focuses on the impact of WASH interventions on health-related outcomes. However, there is a limited but growing body of evidence that better WASH programming can affect long-term social and economic outcomes. We are trying to answer the question: “What is the relationship between improving WASH impact and long-term, systemic change?” We are now developing a novel methodology for mapping intermediate to long-term outcomes.
We are excited to see how this informs RFS’s Center for Water Security, Sanitation, and Hygiene programming and investment decisions and contributes to the global evidence base in the WASH sector and the field of evidence mapping.
4. Can you talk a bit about the collaboration behind this project and some of the project’s distinguishing features that position it for success?
Carolyn: We’ve been fortunate to have a strong evidence champion in RFS — they understood what kinds of policy questions they wanted evidence to support. Within each organization, success hinges on the capacity to come to a shared understanding about the challenges to incorporating evidence. That, I believe, makes any attempt to bring evidence to policy successful and has been one of the most significant and rewarding aspects of this partnership.
We are also working with researchers from MIT’s D-Lab and Department of Mechanical Engineering and the University of Notre Dame on new and innovative ways to use machine learning. While 3ie uses widely-accepted evidence synthesis methods, we are excited to try something new.
Mark Engelbert (Evaluation Specialist & Lead, Development Evidence Portal, 3ie): This project stands out for its use of machine learning and attempts to push the boundaries of how it is employed in the evidence synthesis and evidence mapping processes. If we can do it efficiently, we will be able to accomplish things that are truly new.
5. Where is machine learning being used in this project?
Jaron Porciello (Associate Director for Research Data Engagement, Department of Global Development, Cornell University): We use machine learning and artificial intelligence to improve and accelerate the screening process. Different machine learning models come into play at different stages of evidence synthesis. We train the models so that screeners can go through studies and analyze them more quickly.
Dan Frey (Professor of Mechanical Engineering and Research Director at MIT): There is still technology to be evaluated. We’re doing a meta-analysis of published impact evaluations and systematic reviews to see where there is evidence and where it is lacking. 3ie has been doing this for a long time, but we’re hoping to add value through machine learning. This is a tremendous chance to reduce the thousands of articles that may exist in a field to a manageable number that we can assess.
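To make the screening idea concrete, here is a minimal, hypothetical sketch of ML-assisted screening. This is an illustration only, not the project's actual pipeline or models: a tiny Naive Bayes classifier is trained on a handful of human-labeled abstracts and then used to rank unscreened abstracts, so reviewers see likely-relevant studies first. All function names and example abstracts below are invented.

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(labeled):
    """labeled: list of (abstract, is_relevant) pairs from human screeners."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    docs = {True: 0, False: 0}
    for text, label in labeled:
        docs[label] += 1
        for tok in tokenize(text):
            counts[label][tok] += 1
            totals[label] += 1
    vocab = set(counts[True]) | set(counts[False])
    return counts, totals, docs, vocab

def relevance_score(model, text):
    """Probability (under this toy model) that an abstract is relevant."""
    counts, totals, docs, vocab = model
    n_docs = docs[True] + docs[False]
    log_score = {}
    for label in (True, False):
        logp = math.log(docs[label] / n_docs)
        for tok in tokenize(text):
            # Laplace smoothing so unseen tokens do not zero out a class
            logp += math.log((counts[label][tok] + 1) /
                             (totals[label] + len(vocab)))
        log_score[label] = logp
    # Convert the two log-scores into a normalized probability
    m = max(log_score.values())
    p_true = math.exp(log_score[True] - m)
    p_false = math.exp(log_score[False] - m)
    return p_true / (p_true + p_false)

# Invented training labels: abstracts a human screener kept (True) or not
labeled = [
    ("irrigation intervention improved crop yields impact evaluation", True),
    ("cash transfer program effect on household nutrition outcomes", True),
    ("review of opera performances in nineteenth century europe", False),
    ("history of medieval trade routes and coinage", False),
]
model = train(labeled)

# Rank unscreened abstracts so likely-relevant studies surface first
unscreened = [
    "nineteenth century european coinage history",
    "nutrition intervention impact evaluation in rural households",
]
ranked = sorted(unscreened, key=lambda t: relevance_score(model, t),
                reverse=True)
print(ranked[0])  # the nutrition abstract ranks first
```

In practice, a screening tool built along these lines would keep humans in the loop: the model orders the queue and flags low-confidence items, while reviewers make the final include/exclude decisions.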
6. Apart from its potential to support screening, what are the other benefits of using AI and machine learning?
Chris: The main challenge we are trying to address with machine learning is the cost and time it takes to do evidence aggregation and synthesis. At the same time, the amount of available evidence is growing exponentially, so these activities are becoming increasingly important.
7. We know it’s early days, but can you tell us how the field of machine learning may evolve in the future?
Jaron: People have started to get excited that there are more practical uses for artificial intelligence and machine learning than have been presented before. The field has been very theoretical, or applied mainly in areas like medical diagnostics. What are we doing to make sure machine learning can be used for structuring unstructured data? Roughly 80% of the world’s information is unstructured photographs and text. I think it is very exciting, and there is a lot of practical application for it.
8. What can we expect to learn from this project? Do you already know how this work might be useful?
Chris: The maps can help us think about what impact evaluations or evidence syntheses the Bureau might want to support. Or, in cases where evidence and synthesis already exist, we can use these maps to adjust the technical approaches we are promoting. Having these maps for all of our technical areas allows us to have conversations about evidence gaps, synthesis needs, and approaches at a strategic, Bureau-wide level rather than sector by sector. This is especially crucial given the Bureau’s big-picture goal of jointly leveraging our four major technical areas to drive declines in poverty, hunger, malnutrition, and water insecurity. This work is a foundation, but I’m hopeful it will act as a springboard for a number of other evidence efforts that will help the Bureau.
Jaron: The opportunities depend on how 3ie conducts its processes and how others can obtain access to this information and skills through various platforms and tools developed by the machine learning team. Some of our MIT programmers had never heard of evidence synthesis prior to this endeavor. This is something they are learning from the ground up. As a result, not only are there opportunities to produce tools, but also opportunities for the researchers involved. It demonstrates the types of collaborations that are available, as well as what it looks like from a 3ie viewpoint to be able to break down processes at such a granular level and create new tools and techniques.
9. What is the status of the four EGMs? What are some key outputs we can look forward to in the coming months?
Carolyn: The EGMs are at various stages of development. We have consulted with RFS and external experts and finalized the scope of the intervention-outcome frameworks for the EGMs. We are simultaneously screening the literature and iterating on machine learning models with MIT and Notre Dame. We should have preliminary results soon, and we hope to make the final reports and EGMs public by September.