David B. Muhlhausen, September 2017
It’s important to celebrate milestones, and CrimeSolutions has hit a big one — 500 rated programs. That’s 500 opportunities for the criminal and juvenile justice and victim service practitioners and policymakers we serve to learn about what works, what doesn’t, and what’s promising.
While I am relatively new to the National Institute of Justice, I have spent a good part of my career championing evidence-based policy and the need for rigorous, replicated program evaluations.
All our resources are limited, and we need to ensure the programs we fund are effective in addressing the many issues faced by criminal justice agencies. CrimeSolutions helps justice professionals, who may or may not be social scientists, improve their effectiveness. The systematic, independent review process and evidence ratings are intended to help practitioners and policymakers interpret social science evidence that can otherwise be difficult to understand or apply, and to serve as a basis for gauging the quality of that evidence. In short, CrimeSolutions strives to help practitioners answer the question: Does it work?
What’s next for CrimeSolutions certainly involves continuing to rate programs based on the best available evidence. Beyond climbing our way to the next 500 programs, we will continue to improve our methodology for rating programs and to make CrimeSolutions more accessible and useful to practitioners.
I cannot overstate the value of replication research. We need to beware of the “single-instance fallacy.” For instance, a program that works in Detroit may not work in a smaller city and vice versa. While numerous individual crime prevention programs have been found effective through randomized experiments, the success of these single programs does not necessarily mean that the same programs will achieve similar success in other jurisdictions or among different populations. For example, the Canadian program Enhanced Access, Acknowledge, Act Sexual Assault Resistance was rated as “Effective” based on a single randomized controlled trial; but would a similar program work in the United States? As another example, a single randomized experiment led to an “Effective” rating for hot spots policing in one jurisdiction, whereas a separate randomized experiment led to a “No Effects” rating in another.
To take our knowledge of what works in crime policy to the next level, we need to determine if initially successful results of these programs can be replicated.
A major challenge to replicating effective programs is that we often do not truly know why an apparently effective program worked in the first place. So how can we replicate it? We also need to think more about why a program works or doesn’t. Was it driven by a dedicated champion? Did all the pieces come together perfectly, or did one key factor fail? Understanding the reasons behind the ratings is an enormous step toward helping us replicate good results from jurisdiction to jurisdiction.
By working with criminal justice practitioners and researchers, I look forward to leading the National Institute of Justice in these challenging but highly rewarding research efforts.