What is public criminology? Different people use different definitions. The most common one refers to the practice of scholars reaching out beyond academic circles and disseminating their research and findings to broader audiences, like criminal justice practitioners, public officials, journalists, and the general public. I view this as a narrow definition, although an important goal. Public criminology should extend beyond the narrow definition of translation and engagement to encompass a wide array of strategies—strategies that result in findings not just being heard about or understood, but also ones that inspire and actually lead to change on the ground, or what I call evidence to action.
I’ve organized my remarks around five global questions: 1) how do we know what works; 2) what more do we need to know; 3) how do we go about learning it; 4) how do we use evidence to drive change; and 5) what does this all mean for the future of criminology?
1) How Do We Know What Works?
Figuring out “what works” is an age-old question in our field, one to which I’ve dedicated a substantial portion of my research. So, as an evaluation researcher, what I’m about to say is heresy, but here it goes: We are spending far too much time focusing on what programs work, as if knowing that will inform the field about what to invest in and implement.
I believe we over-emphasize which programs work because we treat programs as if they can be taken off the shelf and applied effectively in different places with a different mix of crime problems, different underlying causal mechanisms, and different people engaged in those crimes. If programs were truly that portable, why do the vast majority of replication evaluations fail to detect a statistically significant impact?
What the evidence suggests is that the most effective responses are those focused on a thorough analysis of the problem, engagement with all relevant stakeholders, and solutions directly related to the underlying causes of the problem under study. In other words, problem solving, a strategy whose success has been demonstrated by dozens of individual evaluations as well as a Campbell Collaboration review.
The key takeaway here is that we need to focus less on programs and more on strategies. We need to support evaluations that examine an array of responses, along with the quality of their implementation.
But how do we measure the impact of strategies? This is the classic evaluator’s dilemma, which I think may be responsible for our overreliance on brand-name programs to begin with. The dilemma is this: Most interventions consist of a variety of measures; how do we know which activities among them are making a difference? Or does the desired impact rely on a combination of a subset of those measures? If so, which ones? Criminology could look to the public health discipline for a potential answer—namely, employing core components analysis.
According to the Department of Health and Human Services, “Core components are the parts, features, attributes or characteristics of a program that a range of research techniques show influence its success when implemented effectively. These core components can serve as the unit of analysis that researchers use to determine ‘what works,’ and they become the areas practitioners and policymakers seek to replicate within and across a range of related programs and systems in order to improve outcomes.”[1]
Now, I’ve always had a measure of healthy skepticism when it comes to meta-analyses. That’s because many programs that appear similar by name—let’s take work release, for example—are lumped together to determine whether the program type works. That could mean combining work release programs that include job placement, programs that require a living wage, programs tied to marketable skills, and programs targeted to young adult women with minimal job histories. I suppose there’s value in knowing whether the overall program concept works, but given the variation in the programs’ components, scope, and intended populations, what are we really learning?
However, with core component analysis, meta-analyses can shed light on which components make programs successful across a range of contexts. This, in turn, can help researchers identify with greater precision what works, in which contexts, and for which populations. So instead of “what works,” core components answer the question of “why it works.” That’s the kind of information that will serve the practitioner community. (Caveat: I recognize that this isn’t feasible for all intervention types, and you still need program evaluations to feed into the core component analysis, but I believe this approach holds promise).
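To make the core components idea concrete, here is a minimal sketch of a meta-regression in which study-level effect sizes are modeled as a function of component indicators. Everything below is a hypothetical assumption for illustration: the component names echo the work release example above, and the effect sizes are synthetic. A real analysis would draw on many more evaluations and would typically use a random-effects model.

```python
# Minimal sketch: a fixed-effects "core components" meta-regression.
# All data are synthetic; component names are hypothetical examples
# drawn from the work release discussion above.
import numpy as np
import statsmodels.api as sm

# One row per program evaluation: effect size, sampling variance,
# and binary indicators for which components the program included.
effect_size = np.array([0.10, 0.35, 0.05, 0.40, 0.22, 0.30])
variance = np.array([0.02, 0.03, 0.01, 0.04, 0.02, 0.03])
job_placement = np.array([0, 1, 0, 1, 1, 1])
living_wage = np.array([0, 0, 0, 1, 0, 1])
skills_training = np.array([1, 1, 0, 1, 0, 0])

# Weight each study by inverse variance so that more precise
# evaluations count for more in the regression.
X = sm.add_constant(np.column_stack([job_placement, living_wage, skills_training]))
model = sm.WLS(effect_size, X, weights=1.0 / variance).fit()

# Each coefficient estimates a component's association with program
# success across studies -- "why it works," not just "what works."
print(model.params)   # [intercept, job_placement, living_wage, skills_training]
print(model.pvalues)
```

The payoff of this framing is that the unit of analysis shifts from the brand-name program to the component, which is exactly what a practitioner assembling a local strategy needs.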
2) What More Do We Need to Know?
My second point is about what types of knowledge we need to grow. Evaluation work is important, but evaluations don’t always tell us what more we need to know. In addition to knowing what works and why, it is crucial to answer the question of “how it works.” Answering that question demands that we spend the time and effort to unpack interventions and processes in the ways that Stephen Mastrofski has done. One example from George Mason University (the home of Translational Criminology Magazine) is the seminal work of Cynthia Lum, Chris Koper and team on the nature and types of calls to 911, examining call data across nine jurisdictions. This work is important because without understanding the composition of calls to 911, we cannot assess whether police or alternative responders are best positioned to meet the public’s demand for services.
Similarly, we need a better understanding of culture, capacity, and supervision in the context of training and other interventions. Many new measures are targeted toward leadership or line officers—sometimes both. We tend to focus on the top and the bottom but miss the middle—that all-important first-line supervisor.
In other words, we need to invest more in basic research to understand emerging problems and inform the data-driven development of solutions to those problems.
3) How Do We Go About Learning It?
Across both the “what works” and “how it works” areas is the all-important question of how we learn what we need to learn. To be candid, despite the excellent quality of my doctoral education at Rutgers School of Criminal Justice, I was not exposed to much about the craft of field research. I have learned on the job, through trial and error and the wisdom of more seasoned colleagues at the Urban Institute, the value of authentic engagement, inclusive research, mixed methods, and process evaluation.
First and foremost, we need to engage in authentic partnerships. The solution isn’t as easy as finding an agency or community entity to partner with. Researcher-practitioner partnerships are most productive when the researchers care as much about informing improvements in safety and justice as they do about getting published in top-tier journals.
Police and other practitioners fear “gotcha” research—when researchers propose studies, access data, and don’t even give them a heads up when they’ve published findings, some of which may be perceived as unflattering to the agency. Instead, researchers need to come to the table as equal partners. And it works both ways. I’ve conducted studies where I couldn’t get my police partners to pay any attention to the findings—despite the fact that they invited me in to do the research.
We also need to conduct what I term inclusive research, or what I would say is the “right” kind of research. What do I mean by that? There’s no one right methodology, to be sure—the method has to fit the research questions. But there are still right and wrong ways of going about the research. The wrong way is to collect data and assume you know what it’s measuring. The wrong way is to produce findings without ground-truthing them with the experts. The experts are the patrol officers, investigators, victims, 911 call takers, service providers, people with lived experience, and community members—the people closest to the problem you are trying to solve. This can range from full-on community-based participatory research to research that is largely empirical but still engages stakeholders in interpreting the findings.
That type of engagement demands that we employ mixed methods, or what I refer to as numbers plus narratives—numbers provide empirical grounding and evidence; narratives give context and meaning. Our field is too bifurcated, with some people opting for quantitative paths and others defining themselves as qualitative researchers. Perhaps that’s okay—we all need to pick a lane. But in doing so, we run the risk of conducting research that is not directly tied to implications for improvements in practice. That’s why it’s important that, even if we stay in our silos, we work across them through interdisciplinary research teams. Different training and perspectives can add tremendous value both to the research process and the resulting findings.
Combining researchers with different backgrounds and training can also help ensure that evaluation research is comprehensive and documents the quality of implementation. Unfortunately, very few evaluations attend to implementation fidelity. I’m troubled by funders who over-emphasize the importance of randomized controlled trials (RCTs) without attending to the quality of how the treatment was implemented. What that means is that if an RCT finds that a program doesn’t work, you might be confident in the rigor of the methodology, but you won’t know whether the program is flawed in theory or in implementation. We could be rejecting highly promising interventions on that basis. We must do more to ensure that implementation evaluation is a central component of all evaluation efforts.
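To illustrate the point, here is a minimal simulation using entirely hypothetical numbers: a program that meaningfully reduces recidivism when implemented with fidelity looks close to ineffective in a standard intent-to-treat comparison when most sites implement it poorly. (Because fidelity is not randomized, the stratified comparison is descriptive rather than causal; the point is simply what fidelity data can reveal that the RCT headline number cannot.)

```python
# Minimal simulation (hypothetical numbers throughout) of an RCT
# whose null-looking result masks a fidelity problem.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
treated = rng.integers(0, 2, n).astype(bool)

# Assume only 40% of treated cases receive a high-fidelity version.
high_fidelity = treated & (rng.random(n) < 0.4)

# Baseline recidivism of 50%; a faithful implementation cuts it by
# 15 points, while a low-fidelity one does nothing.
p = np.where(high_fidelity, 0.35, 0.50)
recidivated = rng.random(n) < p

# The intent-to-treat effect looks modest and could be read as
# "the program doesn't work."
itt = recidivated[treated].mean() - recidivated[~treated].mean()
print(f"ITT effect: {itt:.3f}")  # roughly -0.06

# Stratifying by implementation fidelity recovers the real story.
hf = recidivated[high_fidelity].mean() - recidivated[~treated].mean()
lf = recidivated[treated & ~high_fidelity].mean() - recidivated[~treated].mean()
print(f"High-fidelity effect: {hf:.3f}")  # roughly -0.15
print(f"Low-fidelity effect: {lf:.3f}")   # roughly  0.00
```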
Across all these recommended research processes is the concept of approaching our studies through a racial equity lens. We cannot credibly study issues pertaining to safety and justice in this country without acknowledging its history of chattel slavery, Jim Crow, and much that followed, which has fueled institutional biases, fed mass incarceration, perpetuated structural disadvantage, and, importantly, infused the data we use and even the research methodologies we employ with biases. We need to be more intentional in positioning our research in that context. That means interrogating the data we use. Are arrests the best source of recidivism data when we know they are driven in part by where police patrol and who they pull over? Is putting a dummy variable to represent race in a regression analysis really measuring what we think it is?
The research process is highly important, and I believe it’s foundational in ensuring the evidence we generate is not just shared but also leads to improvements in safety, equity, and justice. But moving on to the fourth segment of my remarks today, the question remains:
4) How Do We Use Evidence to Drive Change?
I referenced that my key goal at the National Institute of Justice (NIJ) is promoting evidence to action. But what is the research on how that happens? It’s called implementation science, and I’m so pleased that Dr. Tamara Herold has joined NIJ as a Senior Advisor to me on this topic. Tamara has culled the literature on implementation science, both within and largely outside of criminology, including the fields of public health and medicine. She came up with several key takeaways (and unearthed dozens of models that have gone untested).
Here are a few nuggets:
- We know that credible messengers make a difference. We are all more likely to open our ears and our minds to new ideas and ways of doing things if someone like us communicates those strategies, someone we respect—a thought leader in the field.
- It’s not just the messenger but how the message is conveyed. Storytelling is a particularly powerful means of communicating research evidence (although I would argue that it is even more powerful when quantified outcomes are included in the story).
- We know that checklists can be helpful in ensuring that evidence-based policies are followed. This comes largely from the fields of aviation and medicine but can certainly be employed in criminal justice.
- We also know that using the nudge technique is helpful—taking a practice that is already a habit and adding onto it incrementally. Building nudges into everyday practices can include embedding evidence-based practices (EBPs) into policies, software, and preexisting performance metrics.
- And we know that the successful implementation of EBPs requires internal capacity so that staff have the time to learn about—and the tools with which to implement—EBPs.
- It’s also essential that we celebrate those who are crusaders in supporting and implementing EBPs—as the CEBCP’s Hall of Fame does so effectively. George Mason University has led the way in that regard.
This all leads to the fifth and final question I’d like to pose and answer today:
5) What Does This All Mean for the Future of Criminology?
What it means is that we need to train our students better (not to mention ourselves) on research that makes a difference in people’s lives. We need to invest in relationship building the same way we’ve long invested in skill building. We need to change the incentive structure to do a better job rewarding applied research and the practice of public criminology. Our applied researchers need to be rewarded for partnerships that have real-world applications.
This will not only fuel applied and actionable research, but it’s also of critical importance for attracting and retaining students and creating a pipeline of the next generation of criminal justice scholars. More than ever, students today are attracted to criminology because of a passion for social justice. We need to complement that passion by teaching them the hard skills associated with research that engages people on the ground. The future depends on it—not just the future of criminal justice and sociology programs, but also the future trajectory of advances in safety, equity, and justice.
I recognize I’ve laid a lot on the table. It is not my intent to simply toss a bunch of random ingredients your way and say, “you figure out the recipe.” The federal government has an important role to play in guiding the field forward in the interests of public criminology and in pursuit of evidence to action. I’d like to share some of the ways we’re doing that at NIJ:
- To build a better and more actionable knowledge base of what (and why it) works, we are requiring that all our funded evaluations include a logic model and process/implementation evaluation component.
- To grow knowledge on emerging issues, we are inviting proposals for formative evaluations on topics that require more basic research and piloting before they can be evaluation-ready.
- To ensure research is conducted in a manner that is inclusive, employs a racial equity lens, and builds on the strengths of interdisciplinary teams, we have written these approaches into all solicitations as priority considerations. In addition, we recently announced the establishment of a Center for Enhancing Research Capacity at Minority Serving Institutions (MSIs), run by John Jay College of Criminal Justice, an MSI and Hispanic Serving Institution. And we continue to support the W.E.B. Du Bois Fellowship for Research on Reducing Racial and Ethnic Disparities in the Justice System solicitation, which includes an incentive to bid emerging scholars as co-PIs and rewards applicants who include a robust mentoring component.
- To ensure evidence leads to action, we released a solicitation for proposals to evaluate different strategies associated with implementation science. And we continue to support our Law Enforcement Advancing Data and Science (LEADS) Scholars Program, which empowers research-minded law enforcement to integrate research into policies and practices.
Looking ahead, we could also use a lot more help and engagement from the academy and its professional associations. I firmly believe that until the leadership of those associations embraces and includes a newer, younger, and more diverse generation of criminologists, getting sustained support for public criminology will continue to be challenging.
As Ruth Peterson called for in her presidential address to the American Society of Criminology (ASC) in 2016, the best strategy for informing policy debates and developments is for ASC “to continue to grow the diversity of its membership; to integrate the research and findings of scholars of color into the mainstream of criminology; and to take further steps to conduct research and share findings with diverse audiences to ensure that post-truth does not become normative regarding crime and justice issues.” The best way to do that is through public criminology.