Learning from Doing: Evaluating the Effectiveness of the Second Chance Act Grant Program
Reauthorized in 2018, the Second Chance Act (SCA) aims to reduce recidivism and improve outcomes for people returning from state and federal prisons, local jails, and juvenile facilities through the provision of federal grants. During this panel, National Institute of Justice-funded researchers will detail two ongoing evaluations of the SCA grant program:
- An evaluation of the effectiveness of the SCA grant program per Title V of the First Step Act.
- A longitudinal examination of the long-term impacts of the SCA program.
MARIE GARCIA: Good afternoon, everyone. My name is Marie Garcia, and I'm a Social Science Analyst in the Office of the Director at the National Institute of Justice. On behalf of NIJ, I'd like to welcome you to today's webinar, Learning from Doing: Evaluating the Effectiveness of the Second Chance Act Grant Program. Today's webinar supports Second Chance Act Month, and it is being hosted by the National Reentry Resource Center. Please note the webinar is being recorded and will be made available online at a later date. And just as a reminder, if you have any questions for the panelists throughout today's presentation, please be sure to put them in the chat or the Q&A box.
I'd like to begin today by thanking our colleagues at the Bureau of Justice Assistance for inviting NIJ to participate in Second Chance Month. As the research, development, and evaluation agency of the Department of Justice, NIJ is committed to engaging in innovative research to support our understanding of reentry and to provide useful tools and knowledge to our practitioner colleagues throughout the U.S. We appreciate the opportunity to join you today during Second Chance Month and discuss our ongoing evaluations of the Second Chance Grant Program. As a quick reminder, you can track all Second Chance Month activities using the hashtags here on the screen, and by going onto the National Reentry Resource Center website.
So today research teams from Georgia State University and NORC will provide insight into their evaluations of the Second Chance Act Grant Program. During today's first presentation, Drs. Bill Sabol and Elizabeth Beck from Georgia State University will discuss their evaluation, which is mandated in Title V of the First Step Act. Broadly, their work will examine the effectiveness of grants used by the department to support offender reentry at the state, local, tribal, and federal levels. During today's second presentation, Dr. Carrie Markovitz from NORC and Dr. Andy Wiegand from SPR will present on their longitudinal follow-up of an original cohort of participants in the Second Chance Act Grant Program. I'd like to thank Bill, Elizabeth, Carrie, and Andy for being here today, for sharing this important work with all of us, and for participating in Second Chance Month. And again, if you have any questions for either research team, please do pop them in the Q&A box, and we'll be sure to get to them at the end of our presentation. So now I'd like to turn things over to Bill.
WILLIAM J. SABOL: Thanks, Marie. So I want to talk a little bit about what we're doing at Georgia State on this evaluation of the Second Chance Act grants. And the punch line is that there's a lot in progress; of course, over the last year and a half, COVID has affected the grantees and their implementation, as it has all of us. So this is really a lot about design and where things stand, and not much about outcomes. But no real surprise there.
So Marie touched upon the purposes of the Second Chance Act. I'm not going to review those again. But I did want to comment that in the reauthorization of the Second Chance Act, there were some changes from the original act that, in my view, broadened its scope of interest. Retained from the first act is a focus on recidivism. But the act is also interested in other, more reintegration-oriented outcomes, like employment, housing, and child support, and it places a big emphasis on matching the needs of persons who are incarcerated and soon to be released with supervision strategies.
So we've been doing a lot of work over the past year, year and a half, on evaluation design. And we're at the point now where we've completed evaluability assessments and are working on site plans for three sites. One is in Alameda County, California; it's a site run by the Alameda County Probation Department. The second is the Criminal Justice Coordinating Council of Lucas County, Ohio. And the third is Hope of Buffalo in Erie County, New York. And just a little bit of background on the sites, so you get a sense of the flavor of what they're trying to do and how they're similar or different. In working with the sites, talking with them, and thinking about the gist of what they're trying to achieve, one thing they have in common is an effort to build self-efficacy among persons who are involved in the programs. And they're doing it by different means. Essentially, they're trying to improve decision-making and how people respond to challenges upon release. In Alameda County, there's a focus on increasing knowledge and awareness of services, and on how to respond to those challenges using different technologies. In Lucas County, the Criminal Justice Coordinating Council's focus is primarily on incarcerated parents and their families, again seeking to increase self-awareness and engagement. And Hope of Buffalo, the project in Erie County, has focused on identifying service gaps and using community-based supports and cognitive behavioral therapy to help clients learn to adjust to their environments. So everything is focused on trying to improve client decision-making, provide services, and improve well-being.
I'm going to flip through some of these sites quickly, because the main point is that everybody's trying to deal with persons at moderate to high risk levels, whether that's determined by supervision levels or by some risk instrument. Common risk instruments are the COMPAS, the LSI-R, and other well-known instruments. Each program has pre- and post-release components; again, this is consistent with the act. Alameda County was the first county in California to do virtual in-facility case planning before release. A big part of their work is pre-release: the case planning, workbooks, and service referrals using an app that the Department of Justice has created. And a post-release component is to develop virtual reality simulations of settings or situations that persons might encounter upon release.
The Criminal Justice Coordinating Council of Lucas County, Ohio, has a curriculum that focuses on family and parenting issues, family engagement, couple communication, and so on. They also engage graduates from their program to work in post-release programs. In Erie, again, in pre-release, they were trying to work in jail facilities, providing workbooks and some training and preparation for release, and then provide post-release services that focus on preparation for work using cognitive training and peer mentoring. One interesting thing about their peer mentoring is that their peer mentors are former law enforcement or corrections staff, as opposed to formerly incarcerated persons. Okay.
Briefly, all the sites have run into COVID problems and have had one or more delays. I'm not going to detail these; I presume the slides will be available, and if you're interested, you can read through those on your own. But we're at the point now where we've settled on a number of designs for outcome and impact evaluation. Luckily, even before we got involved in the Alameda County case, one component of their program was already being evaluated by the University of California, Berkeley, using a randomized controlled trial of pre-release case planning. And the finding from that was that virtual pre-release case planning improved first appearances on probation by 11 or 12 percentage points. The county took that information and decided to implement the program county-wide. For the other components, we're planning to use randomized controlled trials for two of the three. And for the third, we're going to do matched comparisons between a client group that gets the reentry workbooks and a group of persons who were on probation prior to the introduction of the workbooks. So we'll be able to do a pre-post design with matching to compare outcomes. The Lucas County Criminal Justice Coordinating Council is well along the way in enrolling people in their randomized controlled trial. Like the other sites, they're dealing with decreased enrollments because of lockdowns in facilities that occurred during COVID, but slowly things are starting to pick up. And with the Hope of Buffalo site in Erie County, we're planning to do a dosage-response model with a matched comparison group of other jail inmates with similar risks and needs.
We'll do process and implementation evaluations at the same time; it's a combination of quantitative and qualitative analysis to understand core questions about timing, delivery of services, dosages of programs, and so forth. And then we are focusing on cross-site issues. A core issue here is that we want to determine whether we can aggregate data across the sites, looking at comparable people who received different types of interventions, and make some broader inferences about the scope and impact of Second Chance grants. And now I'm going to turn it over to my colleague, Elizabeth Beck. She's going to talk about two particular cross-site issues that we're focusing on: self-efficacy and collaboration. Elizabeth?
ELIZABETH L. BECK: Thank you so much, Bill. Thank you, Marie. Thank you, everybody who's joining us. I am going to talk a little bit about two additional issues that we're looking at: self-efficacy and collaboration. As Bill mentioned, all of the interventions have a self-efficacy component, but it's not necessarily identified as such. And I personally got really interested and invested in this idea of self-efficacy when I read an article in the Wake Forest Law Review by Charles Tarwater, who wrote the article while he was in prison. What he says in that article is that learning skills and getting a GED in prison is a good thing, but the most important thing is to unlearn everything that you've been experiencing in terms of how to be a human, and the ways in which all decision-making is taken away from you. A portion of what he was talking about sounded to me like self-efficacy.
So we began to look at this. We know from smaller studies that self-efficacy is associated with less recidivism and good outcomes, but we don't have a larger way to look at it. So we began to explore the idea of using the sites to do retrospective interviews with people on self-efficacy. Our goal at each site is to do about 18 interviews, representing people who have been out of prison or jail three to four months, eight or nine months, or 14 to 18 months. We chose this interview design with the notion that self-efficacy dips and can change. When somebody first gets out, they feel empowered. And as they begin to interact with structural barriers to housing, to jobs, to a variety of things, our hypothesis is that that self-efficacy begins to wane. How do we then create a model of self-efficacy in an intervention that allows resilience during those non-efficacious experiences that all people have when they get out of prison or jail, whether they can't get a job, or they can't live where they used to live because of restrictions on where a person with a felony conviction can reside, or the variety of other things that affect people? So with the idea that self-efficacy is a really important protective factor for resilience, we wanted to create a design that would allow us to look at the relationship between self-efficacy and recidivism, and to understand not only how we can measure it, but how we can begin to build it in ways that make change.
Along with that, during the site visits, we noticed the ways in which formerly incarcerated people did or did not have leadership roles within the sites we were visiting. And we became very interested in this, because any kind of good reentry requires community involvement and collaboration. So we were really interested as well in what makes strong community collaboration. What are the factors that support it? And what is the difference in programs where formerly incarcerated people are in leadership roles for collaboration and for the decisions that are made? Those are the cross-site issues that we're interested in exploring. I'm not sure if you wanted to add anything else? Okay. So I'm happy to answer any questions when the time comes.
And we did do pretests. Within those pretests, the idea of the dips in self-efficacy definitely played out. The issue of stigma and barriers becomes very important over time, in many cases past the point when somebody's involved with the program. And the burdens that people carry when they leave prison, including medical bills and other debts they've incurred in prison, beyond the fines and fees they have to pay, also affect self-efficacy. So those are the things we're looking at. Thank you.
WILLIAM J. SABOL: Okay. So I'm going to back up. I've seen a couple of questions and a few minutes I have remaining to try to address them. One question was about LA County. So I don't know the geography of California that well, but LA is a little bit south of Alameda County. So in California probation is organized at the county level and this particular site that we're talking about is Alameda County's probation department. So people who get assigned to probation in that county will be possible subjects of the intervention.
And then another question was about whether the programs are far enough along for RCTs. So there have been questions for a number of reasons. One is: are the programs stable enough to be designed and tested, and where are they in their implementation? My response is that we already know from Alameda County that they can establish stable interventions, because the evaluation I alluded to, of the pre-release video conferences, was done even before we started work on this project. The Alameda grant goes back to, I forget when they originally started, 2018. And they worked with Berkeley to design that, test it, and maintain the random assignment throughout. Now, they were measuring short-term outcomes, whether people made their first appearance within 48 hours, and not longer-term outcomes like recidivism, but they've demonstrated they can maintain a stable intervention.
And similarly, the Criminal Justice Coordinating Council in Lucas County, Ohio, has had a lot of experience running programs. They've been able to maintain their intervention with stability through COVID. I mean, they've dealt with the problem of lockdowns, but they've been able to maintain enrollments, and they're about halfway to reaching their enrollment targets, with about another year left on their grant. Another point on recidivism and the other outcomes: our intention is to measure at different periods, shorter run, maybe a year or 18 months, and then continue to track folks longer term using administrative data to look at these outcomes, like recidivism, employment, and what have you.
So there's a lot to do; we've still got a long way to go. But we're cautiously optimistic about the capacity of the sites to continue to maintain their interventions. And we're starting to receive data from the sites that we'll analyze to confirm whether that assumption, that they can maintain stable interventions, is true, and then we'll move forward from there. At this point, I'm going to turn this over to my virtual colleagues from NORC. Carrie, I believe that will go to you. Thank you.
CARRIE MARKOVITZ: Good afternoon, everybody. My name is Carrie Markovitz, and I am from NORC at the University of Chicago. I also have my colleague Andrew Wiegand, who is from Social Policy Research Associates. We are presenting on a longitudinal study of the 2009 Second Chance Act Adult Demonstration Program participants. In our presentation, we'll talk a little bit about ourselves, and then we're going to give you a little background on the original evaluation that we did together; it was Social Policy Research Associates, NORC at the University of Chicago, and MDRC. That evaluation was done between 2010 and 2018. Currently, we're doing a follow-up looking at individuals eight or more years after they participated in the SCA programs, examining long-term outcomes longitudinally; we are in the middle of that study. So we're going to talk a little bit about the study and the pilot that we've just completed, and then answer questions.
So first, just a little bit of background. Again, I'm from NORC. We're an independent nonprofit research organization founded in 1941, affiliated with the University of Chicago, and nationally recognized for our expertise in research, evaluation, analysis, and data collection. We are partnering with Social Policy Research Associates, who we also partnered with on the previous study. They are an evaluation, research, and technical assistance firm with 30 years of experience in evaluation, and they have extensive experience conducting evaluations focused on justice-system-involved populations. So now I'm going to hand things over to Andy so he can talk about our previous study.
ANDREW WIEGAND: Thank you, Carrie. And I'll echo what our prior presenters have said: thank you all for joining us and for letting us talk to you today about some of the things that we've been working on and continue to work on. I'm going to talk briefly about the original evaluation that we did, the evaluation of the 2009 Second Chance Act Adult Demonstration Projects, to tell you how that study was set up and give you a little bit about the findings, and then I'll turn it back over to Carrie to talk about our follow-up to that evaluation. And I think, actually, Carrie, I have control, so I can just advance the slides on my own.
So the prior evaluation that we conducted was an implementation study and a random-assignment impact study conducted between 2009, when we were originally funded, and 2017. The focus of our study was to examine programs developed by seven of the first-round SCA Adult Demonstration Program grantees, funded with grants that were originally awarded in 2009. Our study participants were measured on a range of outcomes at both 18 months and 30 months following random assignment. Getting a little more into the evaluation methodology: we randomly assigned 966 individuals overall across the seven programs. These individuals were all randomly assigned between December 2011 and March 2013, a span of about 16 months for our intake, to either a program group or a control group. The program group members were able to participate in whatever individualized Second Chance Act services were available to them from the seven grantees. Control group members did not have access to Second Chance Act services, but they were able to receive whatever other reentry services were available in their communities. And we'll talk a little more about how that could have had implications for the findings and results that we saw.
We conducted a follow-up survey at 18 months after random assignment and got a pretty good response rate, 82%. We also collected administrative data on employment and recidivism at 18 months and 30 months to assess study participant outcomes. So we had the survey at 18 months and administrative data at both 18 and 30 months. Turning to the findings: individuals in the program group had better long-term employment and earnings. In the second year after random assignment, the program group reported consistently higher employment rates than the control group. And toward the end of the observation period, at 30 months, the program group earned approximately 83% more than the control group in quarterly earnings. In terms of recidivism, we didn't find similarly positive results in the 30 months following random assignment. Those in the program group were no less likely to be re-arrested, re-convicted, or re-incarcerated. So there really weren't any meaningful differences between the two groups on multiple measures of recidivism. Similarly, the time to first re-arrest or to re-incarceration was no longer for the program group, and there were no fewer total days incarcerated, counting time in both prisons and jails. So overall, we didn't find positive impacts from program participation on recidivism. In fact, there was a slightly greater total number of re-arrests and re-convictions among the program group, though this quite possibly could be because the enhanced case management and oversight of those in the program group increased the likelihood of catching new offenses or identifying ways in which individuals recidivated.
Some possible reasons were highlighted in our original report summarizing these findings. The service differential between the program and control groups was largest for employment-related assistance services and more modest for other services: there was about a 20-percentage-point gap between program and control group members in the receipt of services related to finding and retaining employment, whereas the gaps between program and control groups in receipt of other services weren't as stark. There were still significant differences between them, as you might expect, since the program group had access to all the Second Chance services, but those gaps were more in the range of 10 to 12 percentage points. And so perhaps the fact that the SCA programs provided a substantial amount of employment-related assistance helps explain why we saw positive impacts on employment and earnings but not similar positive impacts on recidivism.
Another important note is that control group members were often able to access similar services elsewhere. So while there were gaps between the program and control groups in receipt of services across the board, there was still a fairly high percentage of folks within the control group who were able to access services and therefore potentially improve their overall outcomes in the months and years following random assignment. And then, perhaps this goes without saying, but despite the best intentions, the SCA funds may not have been adequate to meet all of the many complex needs of those returning from incarceration. The programs did the best they could with their funding, but funding isn't infinite, and many of the folks returning to communities have a long list of needs. Participants identified in the survey a number of additional services they would have liked, including more of some of the services they actually received. So it's quite possible that the funding simply wasn't enough to cover all of the needs these participants had. With that brief summary of our original evaluation, I'm going to turn it back over to Carrie to talk about the follow-up that we're conducting currently.
CARRIE MARKOVITZ: Thanks, Andy. Thanks so much. So as you can see, we had some findings from the previous study: some were encouraging around employment; others were possibly more disappointing, especially in terms of recidivism. So we are currently doing a follow-up to the original SCA evaluation, to see if the outcomes continue longer term, or if we see possibly better outcomes in the longer term.
To examine the longer-term impacts of the SCA program on this first generation of adult demonstration grant participants, NORC and SPR together are doing an additional data collection, analyzing both new survey data and administrative data on outcomes similar to the ones in the original study. We are looking at recidivism, of course, for obvious reasons, and employment because of the encouraging findings. But we're also going back and looking at some of the other outcomes, such as housing and childcare, to see if there were possibly significant outcomes there. We're looking eight to 10 years out from when participants were originally enrolled in the study. So we had some obvious concerns about being able to find people and convince them to participate in the follow-up.
Our research questions are very similar to the ones we had previously: among eligible applicants across the seven SCA grant programs, did the group that received the individualized SCA services have better long-term outcomes than those who did not participate, in recidivism, employment and earnings, and, as I said, other outcomes (housing, family formation, benefits, substance abuse, etcetera), just as we examined in the previous study?
So again, we're utilizing a combination of primary survey data and existing administrative data from various state and federal agencies. We decided to go back and attempt to locate and survey as many of the original study's 966 randomly assigned program and control group members as possible. Because so much time has passed in a longitudinal study like this, we decided it would be best to design the study in two phases. The first phase would be a pilot, in which we would attempt to reach a small group of individuals to complete the survey instrument; obviously, we would first design our survey instrument and gain the necessary approvals from NIJ and our institutional review board. Then, based on our findings in that pilot, we would apply what we learned to the full study, which involves the follow-up survey and the collection of administrative data. We also included in the pilot phase some initial outreach to several entities, to learn what information, forms, and processes we would need to go through to obtain the necessary administrative data. So that was another component of the first phase.
In phase one, as I said, we mapped out our initial design and gained IRB and NIJ human subjects approvals. We developed and piloted evaluation tools, a survey and administrative data collection protocols and tools, in this first phase. The survey we developed is shorter than the original; it's a 30-minute questionnaire that we piloted with 50 non-incarcerated cases. As Andy said in his presentation on the previous study, we found that several individuals who participated in the study were incarcerated at the time of surveying, and we anticipate that will likely be the case for our follow-up study as well. So we wanted to look at the locating difficulty for 50 non-incarcerated cases. Also, for the administrative data, letters and request documents were drafted, and in some cases we were already able to establish MOUs and data sharing agreements. So we made quite a bit of progress there. We then documented and discussed our findings with NIJ.
For the survey data collection, we selected a random group of 50 evaluation participants and conducted pre-field locating; the pilot data collection took about eight weeks, from November to January. We sent early-bird invitations, and we were quite encouraged: several respondents called in to complete the survey, and we only allowed a two-week period for that, where normally we'd have a longer period. We then began outbound calls to respondents and their contacts, sent text messages and emails, and conducted locating activities, which produced a lot of new contacts, to whom we were in some cases able to send additional letters. But given the eight-week period, we weren't able to do a lot of follow-up during the pilot, at least. Then, for the administrative data collection, we developed tools to collect and monitor criminal justice system and employment-related administrative data from federal, state, and local public agencies. We investigated and communicated with public agencies to confirm or update information, and we began the request process for obtaining these data. We also obtained approvals for conducting interviews with incarcerated study sample members, which, as I said, will be critical given that so many are incarcerated at this point.
So what did we learn from the pilot study? We learned a few key things that have helped inform our current efforts; right now, we're just at the beginning stages of our full data collection. First of all, over the eight weeks, we only had a 20% response rate, which is quite low. But we anticipate that with more time, we'll be able to locate and complete surveys with far more people. Eight of the 50 respondents did the early-bird call-in, which was very encouraging; that's quite a high percentage. The pilot, as I alluded to, did not allow enough time to make the repeated outreach attempts necessary to find and survey individuals. We weren't really able to fully follow up on newly found contact information, or to develop the necessary relationships. A lot of times we'll talk with relatives and friends, and it can take some time to gain their trust and understanding so they can help us locate people.
The pilot survey did not include interviews with incarcerated respondents or in-person visits; we had not started that process during the pilot, so it was only phone calls. We anticipate that both including incarcerated respondents and utilizing in-person visits will make a huge difference in the response rate. Also, the administrative data pilot was quite successful, and we expect to be able to obtain administrative data for most individuals in the evaluation. If we are unable to obtain high response rates on the survey, we may rely more heavily on the administrative data for assessing some of the participant outcomes, depending on how it goes with the survey. So we're very excited to have this opportunity to follow up with the first-generation group from the SCA, and we're looking forward to presenting our findings from that study in the future. So thank you.
MARIE GARCIA: Okay. Thank you, Carrie and Andy, Bill and Elizabeth, for your presentations. I wanted to move into the question and answer session. And I know our panelists have been answering some of your questions as they come through.
ANDREW WIEGAND: Seems as though Marie's phone died, which is never a good thing. So I'll jump in, and any of the other panelists, feel free to join. I think what Marie was saying was that we've seen a few questions in the chat, and I know we've been trying to answer those as we've gone along. I haven't had a chance to look and see if all of the questions have been answered.
MARIE GARCIA: Thanks. Andy, can you hear me now?
ANDREW WIEGAND: We can hear you, Marie.
MARIE GARCIA: Great. Thank you. I apologize, everyone; my phone died. So one of the questions that came through, which I think would be great to talk about because I know both projects [INDISTINCT] in the development and getting started. So Bill and Carrie, could you [INDISTINCT] you're doing and when you expect to have any interim or final findings for these evaluations?
WILLIAM J. SABOL: Hi, this is Bill. Sure. So one of the elements of the mandate in the Second Chance Act is to provide a report within a year after enrollments end in any of these projects, and the one of our sites that's farthest along is the Hope of Buffalo site in Erie, which is planning to finish up late 2022. And we've started to get data from that site. So with enrollment finishing by 2022, we'll be able to have some short-term outcomes, say by September or October 2023, to meet that 365-day report. The other sites probably won't finish all of their enrollments until sometime in 2023. So our intention is to follow the law, produce the one-year report, and then sequence interim reports as longer time periods go by. The interim reports would provide updates on outcomes over time, and then we'll start pulling things together as they go on.
The other thing is the self-efficacy work that Elizabeth talked about. We're planning to go into the field this summer and hope to have something to report from that first round of interviews and visits. I know from some of the comments that folks are aware there's a literature on self-efficacy, and we're hoping to contribute to that with the first round. And then with subsequent rounds, see if there's more to add to our understanding of how people respond and, to the extent that we're able to articulate how involvement in the programs, or family and communities, affects that, we'll be able to contribute to the literature in that way. So tentatively, we'd like to have a draft on the self-efficacy work by the end of this year or early next year. But that's just the plan right now. Carrie, I don't know if you can add to that.
CARRIE MARKOVITZ: Okay. So we are doing data collection through 2022, and so we anticipate having a report in 2023, probably mid-year 2023, I believe is our schedule.
MARIE GARCIA: Okay. Great. So there was a question about the article that Elizabeth mentioned from the law review, and Bill provided that reference in the text. So if anyone is interested in that article, that particular reference, and perhaps the other references that come along with it, that answer is in the text of the Q&A box.
We have a question about the Juvenile Second Chance Act participants, and I can take that particular question. So NIJ's focus in these two evaluations is primarily adults; we're focusing only on adults. We have in the past conducted one evaluation of the Juvenile Second Chance Act Program. We haven't done anything since. We would absolutely support doing that work. NIJ is now focusing on juveniles in the work that we're funding, so hopefully in the future we might be able to support additional work on the Juvenile Second Chance Act participants.
We have another question about--oh, my. Where did it go? We have a specific question about how the department and BJA solicitations, or their site selections, will change. I believe Andy is adding something there. That's a great question. NIJ is running the evaluation and the science and the research behind the operation of the Second Chance Act Program, so that question is probably best for our colleagues in OJP. Not to skirt that question, but NIJ really isn't able to answer it. But again, we're happy to support our colleagues in doing this evaluation research.
We have another question from very early on in the conversation about any efforts either project is making to educate the community and public about attitudes toward formerly incarcerated persons as they enter their community. You know, are they being accepted, and how is that impacting their lives as they return? So either Bill or Carrie, do you want to speak to that?
WILLIAM J. SABOL: So one of the sites, the Criminal Justice Coordinating Council in Toledo, Ohio, holds community events. Right now, the name of them is slipping my mind.
ELIZABETH L. BECK: Going Home to Stay.
WILLIAM J. SABOL: Going Home to Stay. Thanks, Elizabeth. And that directly addresses the issue. I think all the projects are trying to work on these issues in different ways. For example, in Erie, they were trying to work on housing issues and deal with community responses to what we might call a halfway house, and to provide support in that way. But I would say the Lucas County Criminal Justice Coordinating Council is an explicit example where they're really attempting to address those issues.
MARIE GARCIA: Okay. Great. Bill, on the program that you just mentioned, someone asked if you could please repeat the name of that program.
WILLIAM J. SABOL: So it's the Criminal Justice Coordinating Council of Lucas County, Ohio. And the actual--oh, the name of the program, they call it Building System Capacity and Testing Strategies to Improve Outcomes. So I don't know--I mean, what was the name of the...
ELIZABETH L. BECK: Going Home to Stay.
WILLIAM J. SABOL: Yeah. Going Home to Stay is the community group meeting series within that program; that's one of the elements of the program. They call it Going Home to Stay, and it works to address broader community responses to returning incarcerees.
MARIE GARCIA: And I will put in a plug for next week: NIJ is hosting a webinar on the impact of criminal records and expungement. And we will also be highlighting the NICCC inventory, which is the database that has a lot of information on collateral consequences. So that might also be informative about the impacts and challenges folks face when they return. So that might be something you're interested in.
There was a question, I think for me, about the juvenile evaluation that I mentioned. I will certainly provide a chat response here in just a second. Carrie, there is a question for you specifically on slide 31, which I'll read: When looking at the re-arrest numbers for those randomly assigned to the program, were the offenses that led to this re-arrest the same as when they were first arrested, or did they differ? Is that something that you can speak to?
CARRIE MARKOVITZ: Oh, you know, that's a great question. I don't know if I can answer that right at this moment. Andy, do you happen to know?
ANDREW WIEGAND: I'm not sure which slide was slide 31. So I mean, if you...
CARRIE MARKOVITZ: Yeah, I'm trying to figure that out too. The Findings on Recidivism.
ANDREW WIEGAND: From the original evaluation?
CARRIE MARKOVITZ: Yes.
ANDREW WIEGAND: Okay. And Marie, I don't see the question. Could you repeat what the question was and I might be able...
MARIE GARCIA: Sure. The question was--let me get there. Just one second. When looking at the re-arrest numbers for those randomly assigned to the program, were the offenses that led to this re-arrest the same as when they were first arrested, or did they differ? And the comment that follows: I know those who were in the program were no less likely to be arrested, but just wondering if the offense severity changed.
ANDREW WIEGAND: We looked at arrests by different offense types, categorized as violent crime, property crime, drug crime, and public order crime, and within the recidivism data we didn't see much of any effect between program and control group at all. So there were no differences in those different categories of crimes. I would say we can't definitively answer that question, but the evidence we do have available suggests that there didn't seem to be a difference in the severity of the crimes people were arrested for.
MARIE GARCIA: Okay. Great. Thank you. And I just wanted to say, Lisa, I'm going to send you my email address and you can follow up with me, because I'm having a bit of difficulty finding it on the website given my laptop issue. Let's see. Do we have any other questions?
We did have one comment very early on in the conversation about the use of social biology, using communities as ecosystems, and looking at sensitivity training and emotional rapport. And Bill, if you wouldn't mind, can you speak to the more community-based efforts that you and your team will be assessing? Because I know the statute requires assessing the impact of the programs on communities and doing a more community-based assessment, so can you talk just a bit about those requirements of the statute?
WILLIAM J. SABOL: Sure. So part of the community-based approach is tied to Elizabeth's discussion of collaboration. What we want to know first is who's involved and what roles they have, both in program design but more in program implementation: are the programs bringing in and reaching out to different kinds of community groups? That would get at the front-end part of it. The statute does ask what the impact of this is on communities and community crime, and that's a pretty difficult question to address, in part because these programs are comparatively small. For example, in Buffalo, there are about 20,000 people a year going to jail, and they're working with about 300. So can this small program affect an entire county? No.
But one idea that we're trying to explore has to do with geographic location. We know where people return, or at least have some address data. Could we identify certain areas where there might be concentrations of participants and then, if there happen to be some, find comparable areas that don't have as many participants, where we can at least try to get some comparisons? But the bottom line is that I think we'll have to focus on that more qualitatively and, in talking with the entities involved in the collaboration, try to get some sense of what this means for the community of people returning. Things like: are they aware of the program? Does involvement in the program affect how they respond to people returning, or are there any distinctions that are identifiable and measurable? So it's definitely a challenging part of this, and it's not something that can easily be put into a regression model, but I think through the qualitative work and the collaboration itself, we'll get some clues about how that might be working.
MARIE GARCIA: Thank you. So we have another question that reads: as drug decriminalization and other sentencing reforms occur during intervals of longitudinal research, how are the relative impacts of those reforms, the primary interventions, and other contextual factors differentiated?
WILLIAM J. SABOL: So as long as those reforms affect both the treatment and control groups, and theoretically they would, you're really looking at the differences between the groups. So you need well-identified treatment and control groups that go through society and time in the same way. Reforms might affect a treatment group's trajectory, but what we're really interested in is whether there is a difference between treatment and control. So in that sense, we're not addressing the question of what the impact of these reforms is; we're asking, if these things occur, are there differential effects on the groups that we're measuring?
MARIE GARCIA: Okay.
CARRIE MARKOVITZ: Yeah, I agree with Bill's answer that the randomization theoretically takes care of any changes, and that we're really looking at the difference between the treatment and the control group. So reforms might improve chances for the control group too, but the big difference is the treatment group's participation in the program, and that's what we're trying to measure.
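[The logic the panelists describe can be sketched numerically. The figures below are hypothetical, chosen for illustration only, and are not drawn from either evaluation: under random assignment, a reform that shifts outcomes for everyone by the same amount cancels out of the treatment-versus-control comparison, leaving the estimated program effect unchanged.]

```python
# Hypothetical numbers for illustration only; not project data.
base_rearrest = 0.40      # assumed control-group re-arrest rate
program_effect = 0.03     # assumed 3-point reduction from the program
reform_shift = 0.05       # assumed reform lowering everyone's rate by 5 points

def observed_rate(base, in_program, reform_active):
    """Re-arrest rate after applying program and reform effects."""
    rate = base
    if in_program:
        rate -= program_effect
    if reform_active:
        rate -= reform_shift
    return rate

# Treatment-control difference before and during the reform:
diff_before = (observed_rate(base_rearrest, False, False)
               - observed_rate(base_rearrest, True, False))
diff_during = (observed_rate(base_rearrest, False, True)
               - observed_rate(base_rearrest, True, True))

print(round(diff_before, 2), round(diff_during, 2))  # 0.03 0.03
```

[Both differences equal the assumed program effect, which is the sense in which a reform affecting both groups equally does not bias the comparison, though a reform that reached the groups unevenly would.]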
MARIE GARCIA: Great. Thank you both. So I believe most of the questions were answered in chat, except for those that I've read out loud here. We don't have any further questions in the chat or in the Q&A. So I would like to thank again our colleagues at BJA and the NRRC for inviting us to be here with you today, and especially Carrie, Andy, Elizabeth, and Bill for presenting. I am the scientist assigned to these two projects, and I'm incredibly excited about the future of the evaluations and the longitudinal impacts of Second Chance and what is happening to folks over time from a desistance lens. So we're very interested and very excited about these two projects. Yes, this recording will be posted; we will make this presentation and the recording available at a later date, I believe on the NRRC website. And again, you can follow all of the Second Chance Month activities using the hashtags that were presented earlier today and by going onto the NRRC website. So again, I want to thank everyone for joining us today and for listening to our presentation. We hope you have a great day and enjoy the rest of Second Chance Month.
Opinions or points of view expressed in these recordings represent those of the speakers and do not necessarily represent the official position or policies of the U.S. Department of Justice. Any commercial products and manufacturers discussed in these recordings are presented for informational purposes only and do not constitute product approval or endorsement by the U.S. Department of Justice.