The Great Opportunity and Challenge of MSD Ex-Posts: 4 Takeaways from a Rapid Harvest of Insights
This post is written by Ben Fowler and Benjamin Simmons-Telep. Ben Fowler is co-founder and principal at MarketShare Associates and the strategic technical advisor for Feed the Future Market Systems and Partnerships (MSP) Activity Monitoring and Evaluation (M&E) and Collaborating, Learning and Adapting (CLA) Learning Stream. Benjamin Simmons-Telep is the Monitoring, Evaluation and Learning (MEL) manager for MSP.
Market Systems Development (MSD) is an approach to economic development that is gaining traction within USAID. However, we must ask ourselves a key question: does its promise of creating more sustainable and scalable impact through deep systems change hold up in reality?
Evidence to date primarily comes from a program’s period of performance (the Building Effective and Accessible Markets (BEAM) Exchange, for example, produces an annual MSD evidence review). However, the best way to answer this question would be to go back years after an MSD program has ended (i.e., “ex-post”) to evaluate its enduring results. Despite this being the ideal approach, there have been few attempts to do so, leaving existing evidence too sparse to draw definitive conclusions.
USAID, through the Feed the Future MSP Activity, is setting out to change this by supporting a series of ex-post evaluations of MSD programming. These evaluations will build the evidence base across a range of contexts, while also contributing to improved methods for conducting future MSD ex-post evaluations.
Background: Some of USAID’s first ex-post evaluations of market facilitation programs were conducted in 2015 in Cambodia and Zambia (both five years ex-post) under the Leveraging Economic Opportunities contract. Since then, much has been learned about conducting ex-post evaluations: USAID has published a discussion note on the topic, while USAID’s Expanding the Reach of Impact Evaluations (ERIE) mechanism, the International Labour Organization (ILO) and the Foreign, Commonwealth and Development Office (FCDO) have all funded ex-post evaluations of market facilitation projects.
There is a considerable body of literature describing how best to conduct ex-post evaluations of development programs, but very little that speaks to the unique needs and context of MSD programs. This is problematic, especially given the increasing adoption of MSD approaches by USAID and implementers. MSD is already a particularly challenging approach to evaluate, for reasons including the continual adaptation of intervention strategies throughout the life of a program; the potential for imitation and system-level changes to contaminate nontreatment groups; and the reality that treatment can vary substantially over time and space.
To address this knowledge gap, we are innovating and adapting ex-post research methods to better fit MSD programs while simultaneously building the MSD evidence base.
To better inform our efforts, we conducted a rapid review of 23 documents and interviewed key informants, including evaluation specialists from USAID’s Center for Agriculture-Led Growth; Bureau for Policy, Planning and Learning; and the Private Sector Engagement (PSE) Hub. We also spoke with experts from the Springfield Centre and the Expanding the Reach of Impact Evaluations (ERIE) Consortium and engaged with authors of the private sector partnerships-focused Enduring Results 3.0 study. This yielded four key considerations that MSP thinks are important when funding and designing ex-post evaluations for MSD programs:
1) Put systemic change at the forefront
While it is important to look at whether scalable results for end beneficiaries continue to endure, an MSD ex-post evaluation must examine whether desired systemic changes — fundamental shifts in network structure or behavioral norms — have lasted. Evaluating system changes — not traditional development results (i.e., income or numbers “reached”) — was in fact the sole focus of a 10-year ex-post evaluation of the ILO’s Enter-Growth program in Sri Lanka, with another similar ex-post evaluation underway in Pakistan. Systemic change will also be an element assessed in USAID’s upcoming MSD ex-post evaluations through MSP.
It is less important that specific business models and relationships remain exactly as they were at the end of a program. The disappearance of specific business models or relationships should not be automatically considered evidence of limited sustainability, but perhaps the natural evolution of a market system if the desired system functions still endure. The continuation and, ideally, improvement in system functioning for the target group is the more important focus for MSD. For example, the ex-post evaluation of USAID’s Micro, Small and Medium Enterprises (MSME) Activity in Cambodia found that the business model originally supported for linking swine farmers to information on applying inputs had been adapted in multiple ways by different businesses. In many cases, these approaches looked nothing like the original model, yet continued to deliver quality information and products.
To effectively assess improved system functioning, evaluators must be clear on how sustainability is defined. Ideally, improved system functioning would be documented at the end of a program, such as through a systems-level endline study. In most cases, however, these studies do not exist, which makes it difficult to determine whether a system’s poor performance is evidence that certain changes were unsustainable, or whether those changes never happened at all. This recent blog post provides one take on how to think about sustainability in MSD programming.
2) Carefully set the evaluation boundaries
While defining the focus is important for any ex-post evaluation, the relative complexity of MSD program design — combined with the data gaps produced by a facilitative approach, in which implementers work through local actors rather than directly with target groups and therefore lack direct contact with them — requires that MSD ex-post evaluations set careful and thoughtful parameters around what exactly to evaluate. An additional layer of complexity comes from the fact that MSD programs typically consist of many interventions across multiple sectors and geographies, making it infeasible to examine every possible parameter. One solution is to focus on groupings of interventions or change areas designed to achieve a single objective, such as improving the performance of a market function like skill provision and acquisition. This approach was successfully used in evaluating the Enter-Growth program.
3) Adequately plan for and resource the effort
MSD ex-post evaluations can be especially resource- and time-intensive. To capture unanticipated and complex systems-level change, evaluators must take a flexible and adaptive approach to measurement and pursue new leads as they materialize, almost like investigative journalism. This can be unpredictable and time consuming, and planning must account for it. Furthermore, the adaptive approach used by MSD programs means that additional resources may be required to (re)construct a results chain or theory of change if one was never created for the area of focus or became out of date over time.
Another important consideration is how long to wait before conducting an ex-post study. While USAID’s Automated Directives System (ADS) 201 defines “ex-post” as anything at least one year after a program has ended, there are trade-offs in deciding how long to wait, with important lessons to be gained both early and late. As one ERIE researcher observed, an earlier ex-post allows an initial look at how well system functions are being performed; participants are likely to have clearer and more accurate memories, and early indications of the success of a program’s “phase out” can be assessed. However, waiting longer allows evaluators to better explore long-term outcomes and system evolution. The Enduring Results 3.0 study, for example, found significant differences in results between partnerships assessed more than three years ex-post and those assessed more recently. When possible, planning should begin prior to project closure to facilitate collection of contact information and participant permissions. This also helps ensure that the program strategy and the endline performance of the target systems and groups are well documented.
4) Embrace diverse approaches to construct the counterfactual to assess contribution
Evaluating systemic change requires approaches that establish contribution rather than attribution. To that end, new technology and better access to data are reshaping options for constructing the counterfactual to a development program (i.e., what would have happened otherwise). However, many new approaches are not well suited to MSD programs, given the diffuse geography over which a market system operates and changes are experienced. Quasi-experimental designs may be unable to establish (naturally or synthetically) a reliable “system-level” counterfactual — a limitation ERIE’s experience supports. That said, some research designs may be better adapted to MSD interventions and can help disentangle external influences. These include:
- Contribution analyses: exploring the extent to which a certain intervention or interventions contributed to a well-documented, system-level outcome. As demonstrated in Enter-Growth’s evaluation approach, this involves starting with an outcome and trying to assess the contributing factors.
- Sensitivity analyses: mapping the degree to which outcomes may change in the presence of unobserved characteristics. ERIE’s Guide for Planning Long-Term Impact Evaluations references sensitivity analysis as a possible avenue.
An important consideration in assessing contribution is to take steps to avoid confirmation bias, in which results are interpreted as confirming a researcher’s preexisting conclusions. The system-level focus of MSD projects complicates this: a system’s evolution cannot be reliably predicted, so determining whether subsequent changes are desirable requires analysis. The Cambodia MSME ex-post addressed this by identifying, ex-ante, what outcomes might be observed against each evaluation question and how those outcomes might be interpreted. Because many possible outcomes could be read as either supporting or challenging sustainability, the evaluators specified in advance how the research approach would draw conclusions.
MSP will be looking to apply these lessons in the series of ex-post evaluations of MSD programs under design later in 2021. These evaluations are part of a broader set of evidence-building and learning initiatives around MEL and CLA in MSD and PSE, as part of MSP’s multifaceted Learning Agenda.
If you are a USAID Mission or implementer interested in conducting an ex-post evaluation of an ongoing or completed MSD activity, contact Kristin O’Planick at email@example.com
If you are interested in staying in the loop on the MSD ex-post evaluations and other MSP technical learning and resources, sign up for the forthcoming MSP newsletter.
The Feed the Future MSP Activity is advancing learning and good practice in MSD and PSE within USAID, USAID partners and market actors. For more information, access to technical resources and opportunities to engage, visit www.agrilinks.org/msp.