Do Productive Development Policies Work for Micro, Small, and Medium-sized Enterprises?
Micro, small, and medium enterprises (MSMEs) in Argentina accounted for more than half of the gross domestic product (GDP) and three-quarters of all jobs in the second half of the 2000s. However, evidence suggests that market and coordination failures have threatened the productive potential of these firms.
Among the many challenges faced by Argentine MSMEs, the most critical have been the shortage of qualified and affordable professional technical services, weak management capacity, and a lack of skills to prepare investment projects. In addition to these problems, a lack of coordination among the MSMEs themselves and difficulties in accessing credit have made the challenges even more complex.
To address these issues, the Argentine government obtained a $50 million IDB sovereign-guaranteed loan in 2007 to support the MSMEs Credit Access and Competitiveness Program (PACC, as per its initials in Spanish). The main objective of the program was to improve the competitiveness of MSMEs by co-financing individual technical assistance in order to reduce or eliminate barriers to their growth and productivity. One element of the program was to provide grants to MSMEs to mitigate the effects of the various market and coordination failures they face.
The PACC contained the necessary ingredients to foster the competitiveness and productivity of Argentine MSMEs. However, measuring the effectiveness or the impact of this type of program, commonly known as a productive development policy, has long been a methodological challenge for evaluators.
Two such challenges stand out.
The first challenge is the demand-driven nature of this type of program. The beneficiary firms are typically better off than firms that do not apply to or benefit from the program, so it is often difficult to find nonbeneficiary firms against which to make valid comparisons and assess whether the program was effective. The second methodological challenge is that many of the effects of productive development policies materialize only years later (typically three to five years). Measuring those effects therefore requires collecting data several years after implementation of the policies, which in turn often implies extending the time frame of the evaluation beyond the mandate of those who designed the policies being evaluated.
Was the MSMEs Credit Access and Competitiveness Program (PACC) Effective?
In spite of the methodological challenges, the evaluation confirmed that beneficiary firms, before participating in the program, were on average better off than the rest of the firms. The results also show that the PACC had a positive and significant impact on beneficiary firms in comparison with a control group with similar ex-ante characteristics. This was demonstrated in particular by the firms’ growth measured by their number of employees (an increase of 5 percent), probability of exporting (6 percent), and volume of exports (6 percent). At the same time, PACC beneficiary firms had a greater survival rate (1.5 percent) than firms in the control group. The program also had a positive and significant impact on the productivity of firms, as measured by average wages (1 percent).
In addition, the evaluation found heterogeneous effects between firms in different sectors as well as between different types of projects that were co-financed. For example, the effects on export performance come mainly from beneficiary firms in the manufacturing sector. On the other hand, for firms in both the manufacturing and services sectors, the most effective mechanism to increase productivity was support for improving the quality of processes and services.
Finally, the study concluded that the greatest benefits of the program came from the first instance of support a firm's project received. Those benefits diminished with each additional round of support until they reached a point where there was no additional benefit.
Why Is It Difficult to Evaluate Such Programs?
To correctly estimate the impact on those firms that participate in the program versus those that do not, it is critical to have a balanced sample of firms with similar (observable and unobservable) characteristics. In other words, it is necessary to have a comparison (or control) group identical ex ante to the beneficiary group. The evaluation used administrative records from the PACC provided by the Small and Medium-sized Enterprise Secretariat (Secretaria de la Pequeña y Mediana Empresa, SEPYME) together with a panel database on the universe of formal firms in Argentina constructed by the Observatory of Employment and Entrepreneurial Dynamics (Observatorio del Empleo y la Dinámica Empresarial, OEDE).
Because the support provided by the PACC was not randomly assigned – that is, the firms that received support applied for it, presented a project and met certain eligibility criteria – the rest of the firms (nonparticipants) were not necessarily comparable with the beneficiaries. In other words, there was selection bias.
To estimate the effects of the PACC on the firms' performance, the evaluation used two econometric methods. The first method used propensity score matching to identify, among a sample of nonbeneficiaries, those firms with observable characteristics similar to those of beneficiary firms. In other words, based on the information available, a "clone" nonbeneficiary firm was found for each beneficiary firm. Once that sample was obtained – and to eliminate potential differences in characteristics not observable to evaluators, such as entrepreneurial spirit, management capacity, or growth potential – a second econometric method, a lagged dependent variable model, was used to control for firm performance before entering into the program.
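The two-step strategy can be sketched in a few lines of Python. The sketch below is purely illustrative and runs on synthetic firm data; it is not the evaluation's actual code, and the variable names, covariates, and data-generating process are all assumptions introduced for the example. Step 1 estimates propensity scores and matches each beneficiary to its nearest nonbeneficiary "clone"; step 2 regresses post-program performance on a treatment dummy while controlling for the lagged (pre-program) outcome.

```python
# Illustrative sketch of propensity score matching followed by a
# lagged-dependent-variable regression, on synthetic firm data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
size = rng.normal(0.0, 1.0, n)                       # observable: log firm size
age = rng.normal(0.0, 1.0, n)                        # observable: firm age
y_pre = 1.0 + 0.5 * size + rng.normal(0.0, 1.0, n)   # pre-program log employment

# Selection bias: larger, older firms with better pre-program
# performance are more likely to apply for support.
p_apply = 1.0 / (1.0 + np.exp(-(size + 0.5 * age - 0.5)))
treated = rng.random(n) < p_apply
y_post = y_pre + 0.2 * size + 0.05 * treated + rng.normal(0.0, 0.5, n)

# Step 1: estimate propensity scores, then match each beneficiary
# 1-to-1 (with replacement) to the nonbeneficiary with the closest score.
X = np.column_stack([size, age, y_pre])
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
t_idx = np.flatnonzero(treated)
c_idx = np.flatnonzero(~treated)
matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]

# Step 2: on the matched sample, regress the post-program outcome on a
# treatment dummy, controlling for the lagged outcome (y_pre).
sample = np.concatenate([t_idx, matches])
Z = np.column_stack([np.ones(sample.size), y_pre[sample],
                     treated[sample].astype(float)])
coef, *_ = np.linalg.lstsq(Z, y_post[sample], rcond=None)
tau_hat = coef[2]  # estimated average program effect on beneficiaries
```

In an actual evaluation like the PACC's, the propensity model would draw on a much richer set of firm characteristics from the administrative and panel data, and matching quality (covariate balance between beneficiaries and their matched clones) would be checked before estimating effects.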
Despite the methodological challenges faced by the evaluation team, this evaluation yielded valuable insights on the effectiveness of the PACC. Today, an increasing number of productive development policy programs are finding methodological ways to rigorously measure whether projects work.
Continue reading Chapter 3 - Evaluating Projects to Enhance Learning and Policy Making