Scholars, practice professionals, and policymakers should welcome the new era of evidence-based programming and policies, but these constituencies need to be realistic about the complexities, uncertainties, and limitations that lie beneath what could easily become a simplistic process. This paper discusses some of the requirements for replicating evidence-based programs, suggesting that many of the underlying assumptions are often not met. One of these requirements is the evidence itself, and alternative evidentiary criteria are discussed. A main theme is that even when a well-documented program exists, implementing it in communities on a broader scale requires different, less well-studied processes. Alternative approaches to summarizing actionable knowledge are offered, including characteristics of effective programs, consensus groups, and the Pathways Mapping Initiative. In addition, strategies are discussed that hold promise for bringing scholars and community stakeholders together in a collaborative process that builds community capacity and creates and implements effective programs and services on a broader scale. Finally, the research enterprise itself needs to be transformed to contribute more effectively to program and community system change. Recommendations for improving the process are offered.
The main article is followed by three commentaries:
- A Case for Replicating Proven Programs
by Jean B. Grossman
- The Silo Problem
by Mary Ann McCabe
- Commentary on "Evidence-Based Programming in the Context of Practice and Policy"
by Karen A. Blase, Melissa Van Dyke, and Dean Fixsen