Chapter 6: In Search of Attribution – Our Impact Evaluations

With the approval of the Development Effectiveness Framework (DEF) in 2008, the IDB set high standards for measuring results. To ensure that we are choosing to do the right things, the IDB uses a Development Effectiveness Matrix from the outset of a project to assess the problems being addressed and to make sure the proposed solution is both evidence-based and the right fit (see Chapter 2). Once this strategic analysis has established the suitability of the intervention, the IDB’s Project Monitoring and Project Closure Reporting Systems track the project’s outputs and results to make sure its intended outputs and value are delivered (Chapters 3 and 4).

The IDB uses result indicators as part of project monitoring to determine before-and-after trends. For example, if a project was designed to upgrade a school’s curriculum in order to improve the teaching of pre-math skills, do we see an upward trend in test scores of students enrolled in the program? If so, we can get a sense that we are doing things right.

Of course, many factors beyond a change in how pre-math skills are taught could affect students’ scores. For example, the children’s nutritional intake may have improved, allowing for better academic performance. To determine whether an IDB-supported intervention is the reason for a positive trend, we need a rigorous impact evaluation. Such evaluations allow us to verify that we have indeed selected the right things to do and are doing them right. Impact evaluations show whether observed positive trends or development results are caused by, or attributable to, the IDB-supported project. There are various methodologies for conducting impact evaluations, including the randomized evaluation, in which potential beneficiaries are randomly assigned to a treatment or control group and only the former group participates in the program. Assigning participants randomly ensures that the two groups are the same on average. Any positive effects due to factors external to the program will therefore appear in both groups equally and can be disentangled from the true program effects.
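To make the logic concrete, the following is a minimal sketch in Python using simulated data; the variable names and magnitudes (a 5-point program effect, a 3-point nutrition gain) are purely hypothetical and not drawn from any IDB evaluation. Because random assignment means the external factor lifts both groups equally on average, the difference in group means isolates the program effect, while a simple before-and-after comparison does not:

```python
# Minimal illustrative sketch with simulated data (hypothetical numbers,
# not IDB code): why randomization isolates the program effect.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

baseline = rng.normal(60, 10, n)   # pre-math scores absent any program
nutrition_gain = 3.0               # external factor affecting all students
true_effect = 5.0                  # hypothetical effect of the curriculum upgrade

treated = rng.random(n) < 0.5      # random assignment to treatment or control
scores = baseline + nutrition_gain + true_effect * treated + rng.normal(0, 5, n)

# A before-and-after trend mixes the program effect with the external factor...
print("Average gain among the treated:",
      (scores[treated] - baseline[treated]).mean())   # ~8, not 5

# ...but the treatment-control comparison recovers the true effect, because
# the nutrition gain affects both groups equally on average.
print("Estimated program effect:",
      scores[treated].mean() - scores[~treated].mean())   # ~5
```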

While randomization is the most rigorous approach to establishing attribution, it is sometimes not appropriate due to design constraints. Choosing the right impact evaluation methodology therefore involves considering conditions on the ground, the project’s size and budget, and the nature of the intervention itself. Fortunately, quasi-experimental methodologies can be used to construct a control group in alternative ways that do not involve randomizing. Propensity score matching and regression discontinuity designs are two examples of quasi-experimental methodologies.

In propensity score matching, non-beneficiaries are matched to beneficiaries on observable characteristics, so that the resulting control group closely resembles the treatment group. The shortcoming of this methodology compared with randomization is that unobservable or unrecorded characteristics will not necessarily be the same on average across groups. For example, in an education program evaluation, student IQ might still differ between groups in the absence of randomization.
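As an illustration of the mechanics, here is a minimal sketch in Python with simulated data; the characteristics, effect size, and selection pattern are all hypothetical, and a real evaluation would use richer specifications and balance diagnostics. A logistic model estimates each individual’s propensity to participate, and each beneficiary is then matched to the non-beneficiary with the closest propensity score:

```python
# Minimal illustrative sketch of propensity score matching on simulated
# data (hypothetical variables and effect sizes, not IDB code).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 5_000

# Hypothetical observed characteristics (e.g., household income, parental education)
X = rng.normal(size=(n, 2))

# Participation is self-selected: it depends on those same characteristics
p_participate = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
treated = rng.random(n) < p_participate

# Outcome depends on characteristics plus a true program effect of 5
outcome = 50 + 4 * X[:, 0] + 2 * X[:, 1] + 5 * treated + rng.normal(0, 3, n)

# Step 1: estimate each unit's propensity score from observables
pscore = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each beneficiary to the non-beneficiary with the closest score
nn = NearestNeighbors(n_neighbors=1).fit(pscore[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(pscore[treated].reshape(-1, 1))
matched_controls = outcome[~treated][idx.ravel()]

# Step 3: compare outcomes across matched pairs
print("Naive difference (biased by self-selection):",
      outcome[treated].mean() - outcome[~treated].mean())
print("Matched estimate (close to the true effect of 5):",
      (outcome[treated] - matched_controls).mean())
```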

Similarly, in a regression discontinuity design, a control group is constructed in the absence of randomization. In this case, there is some discrete program requirement for receiving the treatment; for example, eligibility for a pension benefit might require being at least 70 years old and earning below a maximum income. A control group can then be selected from individuals who meet the income requirement but are marginally ineligible because they are not quite 70 years old.
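Continuing the pension example, the sketch below simulates a regression discontinuity estimate in Python; the cutoff, bandwidth, and effect size are hypothetical. Fitting a separate line on each side of the age-70 threshold within a narrow bandwidth, and measuring the jump between the two fitted values at the cutoff, yields the estimated program effect:

```python
# Minimal illustrative sketch of a regression discontinuity design on
# simulated data (hypothetical cutoff and effect size, not IDB code).
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

age = rng.uniform(60, 80, n)     # the running variable
eligible = age >= 70             # the discrete program rule
true_effect = 8.0                # hypothetical effect on monthly consumption

# Consumption rises smoothly with age, plus a jump at the cutoff for recipients
consumption = (100 + 1.5 * (age - 70)
               + true_effect * eligible + rng.normal(0, 10, n))

# Fit a line on each side of the cutoff within a narrow bandwidth
bandwidth = 3.0
left = (age >= 70 - bandwidth) & (age < 70)     # marginally ineligible controls
right = (age >= 70) & (age < 70 + bandwidth)    # marginally eligible recipients

b_left = np.polyfit(age[left] - 70, consumption[left], 1)    # [slope, intercept]
b_right = np.polyfit(age[right] - 70, consumption[right], 1)

# The intercepts are the predicted outcomes at age 70 from each side;
# their difference estimates the program effect (~8 here).
print("Estimated effect at the cutoff:", b_right[1] - b_left[1])
```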

The section that follows presents 12 stories about impact evaluations completed by the IDB in 2014 that show whether projects accomplished the desired development results. These evaluations increase the transparency and accountability of Bank-financed projects and serve to disseminate the knowledge that stems from them. In addition, substantiating whether a project’s results correspond to its design is critical for designing future interventions.

Since the DEF was put in place, the number of impact evaluations of components of IDB sovereign guaranteed (SG) projects has steadily increased (see Figure 6.1 below). Within the portfolio of IDB evaluations related to SG projects, 48 were approved in 2014, compared with only 9 such evaluations approved over the nine-year period preceding the implementation of the DEF. In other words, the DEF has effectively institutionalized project impact evaluations at the IDB. In all, 274 impact evaluations of IDB projects have been conducted since the DEF was implemented in 2008. Since 2010, an average of 49 projects with a programmed impact evaluation have been approved per year, and this trend is expected to continue.

[Figure 6.1]

Notably, not all projects approved by the IDB’s Board of Directors require impact evaluations. Moreover, since evaluations are costly, it is important to direct those resources where they are most productive. Large projects where accountability is critical merit evaluations, as do projects in which substantial knowledge gaps have been identified, since the effectiveness of those projects depends on determining the extent to which those gaps have been reduced. Pilot projects that could eventually be scaled up also require evaluations.

However, if the approved project is based on an intervention that has been amply analyzed and evaluated, and if no knowledge gaps have been identified, an impact evaluation is not necessary unless some new approach is being tested. That’s not to say that these projects go unevaluated. In these cases, other mechanisms are used to track project results and verify project achievements, and those findings are then included in the Project Completion Report.

In addition to evaluating the impact of its own projects, the IDB is often asked to lend its evaluation expertise to interventions of our strategic partners. Nearly a quarter of the IDB’s impact evaluation portfolio is devoted to such external evaluations. Including these outside evaluations, in which the IDB has served as a knowledge partner, the IDB’s impact evaluation portfolio comprises 378 evaluations to date. To give just one example, the Inter-American Partnership for Education Program asked the IDB to evaluate its training program to make its English-language educators more effective (read the story “Opening the English World for Native Spanish Speakers”).

In terms of the status of the IDB’s entire impact evaluation portfolio, Figure 6.2 shows that as of December 2014, 43 percent of all evaluations were in the design stage and 15 percent had been concluded. About 10 percent of the total (39 evaluations) have been cancelled. Difficulties and unforeseen circumstances during program implementation have been the main reasons for cancellation: some projects were delayed in implementation or significantly restructured, while others were cancelled prior to implementation. In some cases a decision was made not to perform a project impact evaluation but rather to use an alternative evaluation, such as a before-and-after comparison or an ex-post cost-benefit analysis, to report on a project’s results.

[Figure 6.2]

Traditionally, it is the social sector that lends itself best to impact evaluations, because with social interventions it is usually possible to put together a treatment group of beneficiaries and a control group of non-beneficiaries who are the same on average. Thus, as of December 2014, half of the impact evaluations of IDB projects were classified under the Bank’s sector priority of Social Policy for Equity and Productivity (Figure 6.3).

[Figure 6.3]

However, in its effort to push forward the knowledge of what works for development, the IDB has in recent years increasingly engaged in evaluations in more challenging sectors. Figure 6.4 shows that, as a proportion of the total number of impact evaluations, evaluations of the IDB’s social interventions are declining while those of other sectors are increasing. In large part this is due to more impact evaluations of projects approved under the sector priority of Institutions for Growth and Social Welfare. Two of the stories below are about impact evaluations in sectors where such evaluations are not traditional. Moreover, Box 6.1 highlights advances toward evaluating private sector work in the Opportunities for the Majority Sector.

[Figure 6.4]

Box 6.1

Advancing Impact Evaluations for the Private Sector

Despite the implementation of operationally successful base of the pyramid (BoP) business models around the world over the last decade, empirical evidence supporting a causal relationship between these private sector interventions and enhanced living standards remains scarce. There is, therefore, a need to undertake rigorous, evidence-based evaluations that capture the development effectiveness of private sector interventions and disseminate their lessons among the general public and private sector actors.

Projects in the Opportunities for the Majority (OMJ) sector reach people at the BoP by providing quality goods and services that help them overcome the market failures that have excluded them from the formal system and imposed a costly poverty penalty.

In 2013, OMJ obtained a Technical Cooperation grant to carry out a series of impact evaluation studies to determine the development effectiveness of a sample of its projects. These evaluations will contribute to learning about the impact of OMJ operations on the living standards of target beneficiaries.

The results of the impact evaluations will guide OMJ in the design of its future operations and will help the sector make better-informed strategic decisions. The results will also help position BoP business models as an effective, sustainable alternative for development among businesses and financial institutions already serving the BoP in Latin America and the Caribbean, as well as those beginning to take an interest in serving this segment.

The sample of projects is diverse in terms of countries of origin, types of services provided, and business models. Furthermore, the projects were selected based on the clients’ willingness to conduct an impact study, the availability of baseline data and a reliable control group, and the feasibility of a robust methodological approach.

OMJ has three ongoing impact evaluation studies in the areas of education (Colegios Peruanos: Quality Private Education for Emerging Social Classes in Peru—PE-L1120); access to housing and home improvement financing (Visión Banco: Habitat for Humanity’s Improved Housing Program for Low-Income Families in Paraguay—PR-L1057); and access to productive and consumption credit (Social Financing Program: Empresas Públicas de Medellín-Unidad de Negocios Estratégicos (EPM-UNE)—CO-L1080).


Evaluating projects across the full range of sectors, including those where impact evaluations are non-traditional, is important because it allows us to learn what works for development, including in areas where the most effective approaches are less understood. It is also important that the IDB, in partnership with our borrowing countries, continue to promote evaluation in areas where there is already some consensus and expertise but where more knowledge can still be gained. For example, one of the stories featured below is about the evaluation of a conditional cash transfer program in Honduras. Despite the large number of evaluations of different dimensions of conditional cash transfer programs since the inception of Mexico’s Oportunidades, the project team realized that one question had not been analyzed: whether the large costs of verifying conditionalities are necessary to trigger changes in human capital investment. Answering it could provide valuable lessons for better design of these types of programs. It is precisely that type of knowledge gained from impact evaluations, in sectors both old and new, that will continue to foster better development results in the years ahead. Such evaluations are the most rigorous way for the IDB to ensure that our interventions are having the desired development results and that these results are indeed attributable to the IDB project.

