Knowing what works is necessary, but not sufficient, for achieving impact at scale. Put simply, implementation makes or breaks impact, and unfortunately, we know woefully little about what distinguishes effective from ineffective implementation. Always a challenge, implementation becomes even more so in a transformative-scale context, as strategies veer away from hands-on, direct work and toward influencing and enabling others to spread practices. In this blog, my colleague Annaliese Calhoun shares pragmatic guidance from Allison Metz, director of the National Implementation Research Network, which is pioneering thinking about the ingredients of effective implementation. — Jeff Bradach
A growing number of social programs have cleared the hurdle of proof that they work as intended. But just because a program works doesn’t mean others can effectively adopt it. In fact, program developers often find it’s surprisingly hard to help other organizations successfully implement a new approach to an old problem—a major barrier to scale.
This is a conundrum a number of social sector pioneers struggle with as they consider how to grow the impact of their successful programs without growing their organizations. In short, how do they close the gap between knowing what to do and moving that knowledge out into the world for others to replicate?
Understanding the nuts and bolts of how organizations and systems manage to implement something new is the focus of the National Implementation Research Network, based in Chapel Hill, NC. It works on the cutting edge of the relatively new field of implementation science: the study of factors that influence the effective adoption of social programs and practices.
I recently spoke with Director Allison Metz about what it takes for nonprofits to spread their evidence-based programs to achieve greater impact. She shared several insights.
The first thing she wants to know about a program is whether it meets the four-part definition of a usable innovation. “We emphasize that any program or practice needs to be teachable, learnable, doable, and assessable,” she explains. When a program meets this “usable” test, scalability then rests on rigorous application of three principles:
Be clear about what your program offers. Metz observes that organizations that have completed a randomized controlled trial demonstrating program effectiveness may still struggle to fully describe the model.
“Sometimes effective programs can reside in the developer’s head,” she says. “The program did so well at a small scale because the initial team was totally involved in an intensive way. They slept and dreamed the program. Then when we try and take it to scale, it may not be defined enough, which makes it challenging to support coaching at scale, or training at scale, or fidelity assessment at scale.”
Be clear about how to implement your program. Implementation is a process that unfolds in stages as an organization figures out how to support and grow a new program model. It often takes several years and includes planning, staff training, and infrastructure enhancements to support the new program. If the what is not defined well enough, figuring out how to implement a new program will be difficult.
As Metz notes, the how is multifaceted: “If someone has a really well-defined what, then we want to understand selection criteria for staff, plans for staff training, fidelity assessments for staff and the type of data that are critical to use, the administrative supports that have to be in place, the assistance needed from partners, policy changes that need adoption, and funding resources that it would really take to implement that program.”
Make sure your program is a good fit wherever it is adopted. “One of the reasons we haven't realized population-level outcomes as a result of a greater emphasis on the use of research evidence is that we haven't focused enough on what might be considered ‘good-fit evidence,’ ” says Metz. In practice this means that adopters must conduct a comprehensive needs assessment of their target beneficiaries and then select the evidence-based program that best meets those needs within their context. “This idea of good fit is something that is a critical piece to sustainability and scaling,” adds Metz.
Thorough needs assessments, however, are easier said than done. Most organizations that set out to select an evidence-based program need greater support in conducting needs assessments, Metz observes. “Many needs assessments are descriptive in nature and don't focus as much on root causes related to identified needs,” she says.
Once a new program is in place, it’s important to build in continuous quality improvement practices to support the program’s ongoing fit. Plan how to measure implementation at every level of the organization. And make sure that the measurement systems provide the kind of information needed to continually improve. “Continuous quality improvement is really a key ingredient to being able to have sustainability and scale,” says Metz.
Implementation is hard and messy, with many points along the way where the process can break down. By focusing on three factors—what, how, and fit—developers of high-performing programs make it easier for others to follow in their footsteps.
Annaliese Calhoun is a consultant in The Bridgespan Group’s Boston office.