Assessing viability and building relationships

Authors
Elisabeth O’Toole
Summary

This resource guides researchers through background research and early discussions with a program implementer who has expressed interest in a randomized evaluation, and with whom a partnership seems potentially viable. It provides guidelines for researchers to conduct early conversations with twin goals: to build strong working relationships and to assess whether a randomized evaluation is feasible and appropriate for the partner’s context. 

Introduction

Early discussions with partners serve dual purposes: (1) gathering enough information to assess the practical and statistical feasibility of a randomized evaluation, and (2) establishing strong working relationships with key stakeholders. By using initial conversations to discuss research questions, outcome measures, goals, and priorities, researchers and partners can jointly assess whether a randomized evaluation is in their best interests.1

Early conversations with a potential implementing partner

Before diving into the details of a potential evaluation, conduct a small amount of background research so you are prepared to address the partner’s potential concerns. For example, being prepared to use the same language as the partner (e.g., understanding whether a partner refers to the people they serve as patients or clients, and using that term rather than participants) can help foster trust and mutual respect.

Consider the following suggestions for these first conversations:2

  • Generate trust by starting conversations from the partner’s needs and constraints, rather than from the “ideal” research design. Ask key stakeholders, program managers, and frontline service providers about their program, what they hope to learn, and what challenges they face. Such conversations provide the information needed to refine the research question and to assess study feasibility and design. For example, discussions about the number of individuals served over a certain timeline can serve dual goals of informing power calculations (see the sketch following this list) and better understanding the program and partners.
  • Be sure the partner’s goals for an academic partnership align with what researchers can promise. For example, if a partner is hoping to “prove” that some aspect of their program works, clarify that you cannot guarantee positive results. If a partner places a high priority on an outcome for which the evaluation would be underpowered, be sure to explain the dangers of running an underpowered evaluation.3 From this conversation, gauge how the partner might use or react to the results from an evaluation—whether those results are positive, negative, neutral, or inconclusive—and be sure they understand your intention to publish results regardless of the outcome. Note that there may be fruitful cases where researchers’ and partners’ goals are not exactly the same but the marginal cost of meeting both is very small; identifying a design that is both feasible and addresses the partners’ and researchers’ goals may be an iterative process.
  • Explain the basic concept of a randomized evaluation, related studies, and potential opportunities for random assignment. This conversation will help to assess the potential partner’s willingness to incorporate random assignment into their operations and identify potential challenges.4
  • Note any time constraints the partner may be facing. Ensure that a potential partner understands the length of the research process and likely duration of their involvement. If the partner has never participated in academic research before, the timeline may be much longer than they expect.
  • Establish an equal partnership. Demonstrating a willingness to design research that is appropriate to the context and minimally disruptive to the partner’s operations, while also demonstrating respect for the partner’s insight and contextual knowledge, may alleviate concerns about the effect of the research on the community and partner.5 This view of partnership is important for the full study team to internalize and return to throughout the study.
  • Identify a champion within the organization. This may be the initial point of contact or someone with a particular interest or background in research. For example, a senior-level official who is intrinsically motivated to understand whether and how their program works can assist in developing a high-level vision for the partnership, implementing the partnership, facilitating support at all levels (including lateral buy-in), and helping ensure project sustainability (Carter et al. 2018).6
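
To make the power-calculation point above concrete, the sketch below shows a first-pass power check in Python using the statsmodels library. It is a minimal illustration rather than J-PAL’s methodology: the caseload of 500 people per year and the target effect size of d = 0.2 are assumptions chosen for the example.

```python
# A rough power check based on a partner's caseload (illustrative numbers only).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Suppose the partner serves ~500 people per year, split evenly between
# treatment and control arms (250 per arm).
n_per_arm = 250

# Smallest standardized effect detectable with 80% power at alpha = 0.05.
mde = analysis.solve_power(effect_size=None, nobs1=n_per_arm,
                           alpha=0.05, power=0.8, ratio=1.0)
print(f"Minimum detectable effect (Cohen's d): {mde:.2f}")

# Conversely, the sample size per arm needed to detect a small effect (d = 0.2).
n_needed = analysis.solve_power(effect_size=0.2, nobs1=None,
                                alpha=0.05, power=0.8, ratio=1.0)
print(f"Sample needed per arm to detect d = 0.2: {n_needed:.0f}")
```

A full power calculation would also account for design details such as clustering, covariates, take-up, and attrition; the value of a quick check like this is flagging early whether a partner’s caseload is anywhere near sufficient.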

With this groundwork, researchers and partners can begin to discuss what a randomized evaluation might look like in their context, keeping in mind that random assignment can take many forms7 depending on the research question and the needs of all stakeholders. Prospective partners may not have the research or technical background to understand the benefits and costs of an evaluation design or to judge which research design is most suitable for their goals.8 Researchers can provide information to support partners in making informed decisions about their participation in a randomized evaluation and the proposed study design.
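
As a minimal illustration of one such form, the sketch below implements stratified (blocked) random assignment in Python; the participant list, the “site” strata, and the 50/50 split are hypothetical choices for the example. Designs such as phase-in, rotation, or encouragement would instead randomize the timing, order, or outreach that participants receive.

```python
# Stratified (blocked) random assignment: within each site, assign half of
# participants to treatment and half to control so arms stay balanced by site.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=2024)  # fixed seed makes the assignment reproducible

# Hypothetical participant list with a "site" column used as the stratum.
participants = pd.DataFrame({
    "participant_id": range(1, 9),
    "site": ["A", "A", "A", "A", "B", "B", "B", "B"],
})

arm = pd.Series(index=participants.index, dtype=object)
for site, idx in participants.groupby("site").groups.items():
    n = len(idx)
    labels = ["treatment"] * (n // 2) + ["control"] * (n - n // 2)
    arm.loc[idx] = rng.permutation(labels)  # shuffle labels within the stratum

participants["arm"] = arm
print(participants)
```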

Detailed background research

If initial discussions with the potential partner seem promising, researchers can begin to think in more detail about potential research questions and design. In addition to a literature review, background research may include continued discussions with the potential partner. These discussions, together with desk research, site visits, or focus groups, can illuminate:

  • How the program operates in practice. This includes whether services are delivered with fidelity to the design, the extent of variation (formal and informal) in program implementation, the history of changes to the program, and potential future changes. This also illuminates where random assignment could plausibly and ethically be carried out,9 which people can be recruited and consented into the study, and what potential threats to the research design exist.
  • Program and partner details. This includes growth trajectory, sample size, time frame in which people are served, retention rates, and data capabilities. A partner’s ability to share these programmatic facts within the first few conversations is an encouraging sign of their interest in collaborating on an evaluation and of their capacity to provide relevant data. These details can inform whether a partner is likely to have the technical and logistical capacity to implement the program, reach enrollment targets, and provide data needed for an evaluation.
  • Key personnel and other stakeholders. These may be within the organization or external parties such as funders or government partners. Gather information on their roles, involvement in decision-making, familiarity with research, and relation to one another.
  • A timeline of major changes that may affect the program, such as:
    • A similar program that may be implemented in the same service area
    • Changes to data systems or processes
    • Changes in program funding
  • Potential implications of study implementation for the community or individuals, including:
    • Representativeness of the community context or study population. Communities or partners willing to adopt a program for a randomized evaluation may respond differently to the program than those who are not (Allcott 2015).
    • Potential sources of concern about research or a randomized evaluation. Understanding the community’s relationships with universities, research, and researchers—particularly any other ongoing research in the area, any perception of high fatigue from prior surveys, or any history of research that the community perceived as harmful—can help researchers plan for potential challenges or concerns from partners. Concerns may be especially acute among groups whose circumstances have made them vulnerable to manipulation, or may stem from pre-existing controversy over a program or political decision.10

Equipped with this information, researchers and partners can assess the fit of a randomized evaluation. If the partnership seems viable, researchers can use this information to tailor their approach to communicating with the partner, to offer trainings or resources as appropriate, and to design a rigorous and practical randomized evaluation.

Acknowledgments

We are grateful to Todd Hall, Emma Rackstraw, Sophie Shank, and Louise Geraghty for their insight and advice. Jacob Binder copyedited this document. This work was made possible by support from the Alfred P. Sloan Foundation and Arnold Ventures. Any errors are our own.   

1.
A randomized evaluation—or any type of impact evaluation—may not be the right fit for the needs of some potential partners. To help these partners understand different types of evaluation, pages 12-13 of J-PAL’s Introduction to Evaluations and J-PAL’s What Is Evaluation? lecture provide approachable introductions to different types of research questions and evaluation methods. Additionally, Mary Kay Gugerty and Dean Karlan discuss the appropriateness of randomized evaluations in different situations in Ten Reasons Not to Measure Impact—and What to Do Instead.
2.
For more resources on establishing ongoing communications and sharing results, see J-PAL’s resources “Formalize research partnerships and establish roles and expectations” and “Communicating with a partner about results.”
3.
J-PAL’s resource “The risk of an underpowered randomized evaluation” lays out why an underpowered evaluation may consume substantial time and monetary resources while providing little useful information.
4.
J-PAL has developed resources to help facilitate these conversations, including publications titled Why Randomize?, Real-World Challenges to Randomization and Their Solutions, and Common Questions and Concerns about Randomized Evaluations.
5.
J-PAL North America’s “Design and iterate implementation strategy” outlines steps and questions for consideration after research partnerships have been formalized. In particular, it highlights aspects of the partner’s program to consider in this design phase—many of which can be identified in these early conversations.
6.
This resource discusses qualities and benefits of identifying a strong senior-level champion in a partner organization. It details specific roles, conversations, and tasks to facilitate a successful partnership—specifically in government organizations.
7.
This resource is a partner-oriented guide illustrating different potential forms of randomization—including phase-in, rotational, and encouragement designs. It describes plans for evaluations in cases of entitlement programs and where resources exist to extend the program to everyone in the study area, as well as in cases where access to the program is guaranteed for a portion of the population.
8.
J-PAL offers many capacity-building resources to potential implementing partners.
9.
This resource provides a nuanced discussion of ethics in randomized evaluations.
10.
This resource provides suggestions for alternative strategies in cases where partners have concerns about randomization, especially with specific populations. Researchers can demonstrate their recognition of ethical or practical challenges by presenting alternative randomization strategies.
Additional Resources
  1. “Evaluating Financial Products and Services in the US: A Toolkit for Running Randomized Controlled Trials.” 2015. Innovations for Poverty Action. November 10, 2015. https://www.poverty-action.org/publication/evaluating-financial-products-and-services-us-toolkit-running-randomized-controlled.

    Page 18 of IPA’s Evaluating Financial Products and Services in the US: A Toolkit for Running Randomized Controlled Trials provides more questions to determine whether a randomized evaluation is right in a particular context.

  2. Glennerster, Rachel. 2017. “Chapter 5 - The Practicalities of Running Randomized Evaluations: Partnerships, Measurement, Ethics, and Transparency.” In Handbook of Economic Field Experiments, edited by Abhijit Vinayak Banerjee and Esther Duflo, 1:175–243. Handbook of Field Experiments. North-Holland.

    For more information about whether a randomized evaluation is right for a given program or partner, see Section 1.2 of Rachel Glennerster’s The Practicalities of Running Randomized Evaluations: Partnerships, Measurement, Ethics, and Transparency (available as part of the Handbook of Economic Field Experiments, published by Elsevier).

  3. Heard, Kenya, Elisabeth O’Toole, Rohit Naimpally, and Lindsey Bressler. 2017. “Real-World Challenges to Randomization and Their Solutions.”

    For more information about solutions to practical challenges to randomization, as well as some case studies, see J-PAL North America’s Real-World Challenges to Randomization and Their Solutions.

  4. “What is the Risk of an Underpowered Randomized Evaluation?” J-PAL North America.

    This resource outlines why an underpowered randomized evaluation may consume substantial time and monetary resources while providing little useful information.

  5. Gugerty, Mary Kay, and Dean Karlan. n.d. “Ten Reasons Not to Measure Impact—and What to Do Instead.” Stanford Social Innovation Review. Accessed October 2, 2018.

    In their book The Goldilocks Challenge, and summary article in the Stanford Social Innovation Review, Mary Kay Gugerty and Dean Karlan discuss the appropriateness of randomized evaluations in different situations and alternative strategies where appropriate.

  6. Glennerster, Rachel, and Shawn Powers. 2016. “Balancing Risk and Benefit: Ethical Tradeoffs in Running Randomized Evaluations.” The Oxford Handbook of Professional Economic Ethics, April. https://doi.org/10.1093/oxfordhb/9780199766635.013.017.

    See Rachel Glennerster and Shawn Powers’s Balancing Risk and Benefit: Ethical Tradeoffs in Running Randomized Evaluations for a detailed discussion about ethics in randomized evaluations.

  7. J-PAL video resource: Why Randomize?

Allcott, Hunt. 2015. “Site Selection Bias in Program Evaluation.” The Quarterly Journal of Economics 130 (3): 1117–65. https://doi.org/10.1093/qje/qjv015.

Carter, Samantha, Iqbal Dhaliwal, Julu Katticaran, Claudia Macías, and Claire Walsh. 2018. “Creating a Culture of Evidence Use: Lessons from J-PAL Government Partnerships in Latin America.” J-PAL LAC.