Do financial services improve the well-being of poor people? For over a decade, the financial inclusion community has sought to answer this question by developing methods to rigorously measure the average impacts of financial services on large groups of people. But what was intended to bring clarity to the impact debate has instead kept it alive and unresolved, as a large number of studies offer mixed — seemingly contradictory — results without a theory of change to make sense of them or extract actionable lessons for policy makers. We can settle the debate if we focus on one thing: context.
Most of today’s studies do not distinguish between the contextual conditions that can lead to the positive, neutral or negative impacts observed across studies. For example, there is evidence from Malawi showing that accounts can increase savings for farmers and that this translates into increased agricultural output and household expenditures. When other researchers replicated this field experiment in Uganda, Malawi and Chile, however, they found no such evidence. Neither study examines the contextual factors that could explain its results or point to larger lessons about why financial services had different impacts in different places. As this example illustrates, the unintended consequence of focusing on average impacts for over a decade is that we have neglected efforts to understand the context in which people use financial services. The result is a plethora of studies that are of limited use to policy makers.
If we want to understand how financial services can create opportunities for the poor, we need to stop generalizing impacts without considering context. When we focus on averages, we get few insights into the impact mechanisms at play. If we were to pay more attention to the context in which individuals use financial products and services, we could better understand not only what works, but also why, and thereby support financial inclusion programs that provide greater value for different client segments.
In an effort to understand how context may explain the contradictory impact estimates evident in the literature, CGAP conducted a review of over 100 recent financial inclusion impact studies. We found a notable lack of contextual background and description:
- Less than 26 percent of the studies provided a clear theory of change upfront to support the logic of the hypotheses being tested.
- Less than 30 percent of the studies included adequate contextual background on community conditions, individuals and product characteristics that might be associated with positive or negative outcomes in well-being.
- Just 20 percent of studies drew on contextual background, such as literacy rates, road infrastructure, intrahousehold relations, market access, labor market opportunities or elite capture of benefits, to explain the impact estimates they obtained.
If we do not document context, it becomes difficult to compare studies across settings. As Angus Deaton and Nancy Cartwright point out, randomization does not equalize everything in a specific context, and it certainly does not remove the need to think about observed and unobserved contextual variables that may drive results. Researchers generalize to help policy makers interpret information and make decisions, but we also have to make an effort to clarify the theory of change that underlies our expected impact, understand if we are asking the most pertinent questions and provide context to explain impact estimates.
Considering contextual variables in a systematic and comparable way across studies is not a trivial exercise. It requires academics and donors to invest in developing new and better methods. We are starting to see progress. For instance, in the 2017 study “Labor Markets and Poverty in Village Economies,” Oriana Bandiera et al. explain the contextual characteristics used for hypothesizing and testing for impact. They find that disaggregating the customer sample by income and documenting local labor markets is crucial for explaining the observed impact.
Rachael Meager’s 2019 study on average impact and heterogeneity across seven randomized microcredit evaluations shows how the scrutiny of context can result in highly relevant policy and research lessons. Meager finds that 60 percent of the observed heterogeneity is due to variation in the sampling techniques used by researchers, which highlights the importance of consistency in experimental setups. In addition, by contrasting individual characteristics among customers, she finds that the use of lower-cost microloans had large positive outcomes only for women who had previous business experience, indicating useful preconditions that can predict positive impact.
Yet another encouraging example is a recent study by Eva Vivalt. By comparing NGO-implemented and government-implemented programs in several developing countries, Vivalt provides evidence of how the characteristics of the organizations that implement development programs can help explain the magnitude of the impact observed.
Mixed-method approaches are valuable to understand the sometimes subtle differences between treatment and comparison groups that can drive impact. As Naila Kabeer argues, most quantitative data collection methods are unsuitable for collecting sensitive information on topics like intrahousehold relations or sociocultural factors that limit people’s access to services. Incorporating qualitative methods, such as observational data and key informant interviews, can provide additional insights for financial inclusion interventions.
Donors and researchers should focus on methodological innovations that allow for the systematic collection of comparable contextual variables. This would yield more actionable insights for policy makers and support financial inclusion policies that are more efficient and improve poor people’s well-being.