Improving School Climate - Breakout Session, NIJ Virtual Conference on School Safety
On February 16-18, 2021, the National Institute of Justice hosted the Virtual Conference on School Safety: Bridging Research to Practice to Safeguard Our Schools. This video includes the following presentations:
School Discipline, Safety, and Climate: A Comprehensive Study in New York City, Elise Jensen
As the first-ever comprehensive study of school climate, discipline, and safety in New York City, this study employed a rigorous mixed-method design with two research strategies: (1) a comprehensive quantitative analysis of the effects of variations in school population characteristics, climate, and discipline, as well as personal student characteristics; and (2) in-depth case studies in five schools using alternatives to suspension to explore on-the-ground implementation lessons. The quantitative analysis allows us to examine the intersecting effects of school- and individual-level factors on school disciplinary outcomes/use of suspension, formal justice involvement, and academic outcomes. The qualitative findings highlight school safety, climate, and culture, especially experiences with existing security measures; experiences and feedback about alternative approaches, including whole-school approaches, prevention programs, guidance interventions, and restorative approaches; and how other schools might succeed in implementing similar practices. This study helps fill a gap in the scholarly literature on "what works" and has important implications for educators and justice policymakers nationwide.
Randomized Impact Evaluation of Capturing Kids’ Hearts Program, Thomas Hanson
The Capturing Kids' Hearts (CKH) program is a school-wide, skill-intensive program designed to strengthen students' connectedness to school by enhancing protective factors (strong bonds with teachers, clear rules of conduct that are consistently enforced) and targeting modifiable risk factors (inappropriate behavior, poor social coping skills). CKH trains all school staff to model and teach relational and problem-solving skills, communicative competencies, and consequential thinking. This study uses a cluster-randomized experimental design to examine the extent to which CKH reduces violence perpetration, victimization, and problem behaviors; enhances relationship bonds between and among students and teachers; and increases students' social competencies and academic performance. Key outcomes include measures of (a) violence perpetration and victimization, (b) relationship bonds between and among students and teachers, and (c) personal and social competencies. Staff and student self-report survey data were collected in the spring prior to implementation of CKH and in the spring of the first and second implementation years. Archival record data were also collected to assess student attendance and discipline outcomes. Estimates of program impacts based on the staff surveys suggested that CKH had small but consistent positive impacts on various aspects of school relationships, student voice/disciplinary climate, and student behaviors, but no discernible impacts were detected on the outcomes assessed by student surveys. Archival data results were mixed, indicating that schools that implemented CKH exhibited greater increases in excused and unexcused absences and suspensions, but more pronounced declines in disciplinary referrals.
Comprehensive Assessment of School Climate to Improve Safety in Maryland Middle Schools, Catherine Bradshaw and Elise Pas
This study examined the efficacy of an adapted version of the Classroom Check-Up (CCU) teacher coaching model to address the detection of, prevention of, and response to bullying. This teacher-randomized trial included 78 teachers who were randomized to the intervention (coaching with mixed-reality simulation) or comparison condition. We collected teacher surveys and classroom observational data at pre- and post-test. Intervention teachers were significantly more likely to report responding to bullying with referrals, to intervene with victims and perpetrators, and to report that adults at their school did enough to address bullying. This shows promise for this novel, teacher-focused intervention.
Improving School Safety in the District of Columbia: Lessons Learned From an Evaluation of Safe School Certification, Renee Ryberg
We present lessons learned from an evaluation of a model using technical assistance (TA) to guide schools through a framework to improve their organizational capacity and improve school climate. Despite implementation challenges with staff turnover and competing priorities leading to significant attrition, we found that students in schools receiving technical assistance for implementing the model had more positive changes in perceptions of school climate. These differences were quite small, and offer limited evidence that providing schools with TA to improve organizational capacity is associated with more positive school climate. The efficacy of capacity-building interventions may be limited by the very conditions that inspire them.
>> Well, hello, everyone.
Welcome to breakout session number seven, Improving School Climate.
There are going to be four talks scheduled for this session.
Might give a couple more minutes for people to log in and for our participants' numbers to rise a little bit, and then I'll go ahead and introduce everybody, and we can get started.
All right.
The participants are starting to level off now, so again, welcome, everyone.
This is breakout session number seven on Improving School Climate.
There are four talks scheduled for this session, and during this breakout session, you can use the chat option if you run into any technical issues, and then if you have any questions...
Sorry for the sirens in the background.
If you have any questions for our speakers, please submit those in the Q and A section.
Questions will be answered after all speakers have presented, and if you have a question for a specific speaker, please try and indicate which speaker you would like to answer your question when you submit it.
So for more information on all of our various speakers, please see the speaker bios found on the conference website.
So first today we will hear from Dr. Elise Jensen on School Discipline, Safety and Climate: A Comprehensive Study in New York City.
>> All right.
Let me get this set up.
Okay.
Thank you, everyone, for attending.
My name is Elise Jensen as was just said, and I work for the Center for Court Innovation in New York City.
Oh, hold on.
I am not sharing my findings, I think.
I've already...Hold on.
Now I'm going to share my screen.
Okay.
Is this...Now we're good? Okay.
I think something was wrong, but I didn't see my face.
All right.
Yes, I work for the Center for Court Innovation in New York City, and I'll be presenting on the Comprehensive Study of School Climate, Discipline and Safety in New York.
Other researchers on this study were Lama Hassoun Ayoub, who presented yesterday; Talia Sandwick; and Josephine Wonsun Hahn.
They were also coauthors in this NIJ report.
First, I want to thank NIJ for the opportunity to conduct this study, because we were able to do a quantitative analysis of disparities in discipline, as well as its negative impacts, in New York City schools, and also, on the other side, to do rich case studies of positive practices being implemented in five schools.
So, for this presentation, I'm going to go over our research questions and then a brief discussion of our quantitative findings, but because this session is called Improving School Climate, and we discussed school climate more in our case studies, I'll focus the remainder of the discussion on those.
All right.
So our three research questions were as follows.
The first two were based on the quantitative analysis. Question one is, how do personal characteristics of students, such as age, race, or discipline history, predict future suspensions and future arrests? Question two is, how do school composition, discipline, and school climate predict future suspensions and future arrests among students? Question three, for the qualitative component, is, how do effective restorative justice practices work on the ground, and what lessons can be shared with other schools? So to go over our quantitative methods, we used student data from the New York City Department of Education, which was linked to arrest data that we got from the New York State Office of Court Administration, including information from the adult courts as well as the juvenile and family courts.
Our student sample was 87,471 students from middle schools and high schools.
All students had at least one incident in the 2011/2012 or 2012/2013 school years, and we tracked those students 1 year prior to that incident and 2 to 3 years after the incident.
The number of schools in our sample was 804, and because of the nature of our data (students nested within schools), we used multilevel modeling for the analysis.
All right.
On to the quantitative results already.
I should say that we did conduct a lot more analyses, but again, for the purpose of this presentation I'm really going to highlight just a few things, and in this table, I should say that pretty much everything was significant, but we do acknowledge that the size of our sample was quite large and probably a factor in that.
So our outcomes were future suspension and future arrest, both binary variables indicating whether a student was suspended or arrested after what we consider to be the initial disciplinary incident, and because these are binary outcomes, we used hierarchical logistic regression, and here we're presenting the odds ratios.
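To make the modeling concrete: a hierarchical logistic regression of this kind, with students nested within schools and odds ratios obtained by exponentiating the coefficients, can be sketched in a few lines. This is a minimal illustration using simulated data and made-up variable names, not the study's actual code or variables.

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Simulated example data: one row per student, students nested in schools.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "future_suspension": rng.binomial(1, 0.3, n),   # binary outcome
    "male": rng.binomial(1, 0.5, n),                # illustrative predictors
    "prior_arrests": rng.poisson(0.2, n),
    "initial_suspended": rng.binomial(1, 0.4, n),
    "school_id": rng.integers(0, 20, n),
})

# Two-level logistic model: student-level predictors plus a random
# intercept per school, reflecting the students-within-schools nesting.
model = BinomialBayesMixedGLM.from_formula(
    "future_suspension ~ male + prior_arrests + initial_suspended",
    {"school": "0 + C(school_id)"},
    df,
)
fit = model.fit_vb()            # variational Bayes fit
print(fit.summary())            # coefficient estimates
print(np.exp(fit.fe_mean))      # exponentiated fixed effects, i.e., odds ratios
```

An odds ratio above one means the predictor increases the odds of the outcome; for example, an odds ratio of 2 for an initial suspension would mean those students have twice the odds of a future suspension, holding the other predictors constant.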
So, for the results, what you see here for future suspension is that the relationships with the largest odds ratios show that students who are Black or Hispanic have the strongest likelihood of receiving a suspension in the future.
Also, as expected, you'll see that the students who were suspended on their initial incident were also more likely to be suspended again in the future.
When you look at future arrest, one of the strongest predictors was being male, which is not surprising, and the number of prior arrests also strongly increased the likelihood of future arrest.
Also, Black students had a stronger likelihood, and having a suspension on the initial incident did increase the likelihood of future arrest, showing potentially some evidence of the school-to-prison pipeline, which is also one of the things we were looking at.
Here I'll make a note that prior grade advance was not significant with future suspension, and the way we measured grade advance was simply just moving from one grade up to the next.
It was kind of our measure of academic achievement.
It didn't have much of an impact on future suspension, but those who did advance to the next grade were slightly less likely to have a future arrest.
All right.
I'm kind of flying through this, so my apologies but definitely here to answer questions in the end.
So then we looked at the school-level predictors of students' future suspension and arrest.
We actually didn't focus too much on these, other than really looking at school climate, for which we used the New York City School Survey as our measure.
One of the measures is, I believe, the school safety index, which asks students, teachers, and parents questions about whether they feel safe in the school, or whether they think their student feels safe in the school, in the hallways.
Do they feel respected? Things of that nature.
We didn't really see any impact there on student-level outcomes.
There was some significance, but the odds ratios are nearly equal to one, so it doesn't really have much of an impact there.
Okay.
So now that I've quickly flown by that, I just want to discuss the qualitative methods. While we had all this DOE administrative data to explore the exclusionary impact of discipline, the DOE actually doesn't collect any information on positive practices, or at least none that was accessible to us, and so when we started our case studies, few schools were known to be engaged in restorative justice (RJ) approaches or related practices, so we conducted the studies in five schools.
We used publicly available school demographic data to make sure that the schools were demographically similar to, or representative of, other schools in New York City.
We also had recommendations from our advisory board, and we had a principal survey that captured some of the positive practices they used in their schools.
The five schools included two high schools, a middle school, a joint middle and high school, and a transfer school.
We conducted the fieldwork in 2016/2017 and interviewed school staff, School Safety Agents, students, parents, and then we wanted to make sure that those that we studied were actively implementing promising practices.
So, in each school, the staff interviewees underscored the importance of taking a multifaceted, flexible approach to RJ.
The staff reported that various practices were often used in tandem and tailored to address the specifics of a given incident.
These might include the one-on-one restorative conversations, staff-led mediation, restorative circles, counseling, targeted mentoring programs and student-led RJ.
Within this individualized approach, multiple people could lead distinct components of a larger RJ response as deemed appropriate based on their training, availability and preexisting relationships.
Most of the RJ processes were led by staff, typically RJ coordinators, social workers, counselors, or administrators. There was increasing reliance, however, on student leadership in related processes in all but one school, and I'll talk a little bit about that later.
Some of the activities were part of a formal schedule such as community building circles during specific class periods or a designated time.
Much of the work, however, was informal and impromptu.
Conversations took place in hallways and between classes.
Also, all schools partnered with a community-based organization to support RJ either directly or indirectly, such as through counseling and special initiatives.
Staff interviewees at each site also reported that their schools took a holistic approach to RJ, with community building and wraparound social supports for students, and they perceived those as integral to any efforts to prevent and resolve conflict.
So, on perceived impacts: the vast majority of staff, student, and parent interviewees broadly endorsed RJ as a preferred approach to school discipline and conflict.
They thought that it was more effective than punishment models and reported that it impacted their school communities.
They cited that it was better at addressing root causes of conflict, minimizing further harm, creating new ways of responding to conflict, cultivating empathy, and clarifying misunderstandings. Many interviewees knew that RJ wouldn't solve all conflicts or strengthen every relationship, but they said they found comfort in knowing they could handle a conflict.
At the school-wide level, there was a stronger sense of safety and better school climate.
There were fewer fights and a greater emphasis on prevention.
There were self-reported perceived decreases in suspension and attendance.
It fostered empathy, relationships, and accountability throughout the school community.
RJ and student leadership was mutually reinforcing, which we'll talk about later, and there was a shift in mindset from using punishment to accountability.
Okay.
So while most interviewees supported RJ, they also highlighted a range of concerns and obstacles to building restorative schools.
We grouped those concerns into six themes, for which we could also provide approaches to address [ Distorted audio ] and how to build a robust [ Distorted audio ] So the first lesson is centering community building, and why this is important.
Many interviewees believed that community building is the foundation of RJ and the key to conflict prevention and resolution.
It requires trust, vulnerability, and even discomfort, which can test relationships.
They said that when you create these strong relationships and ties, your school becomes like a family where you don't dispose of anyone, and everyone becomes committed to working together.
And they said that it's important to have staff and student relationships both in and beyond the classroom, whether that's scheduling time for mentoring or being involved in clubs or other events, and also fostering a nurturing environment and sense of community among staff. Engaging families in the school community is critical, but also challenging because of systemic issues related to busy schedules, anxiety about immigration status, or language barriers.
Okay, I have to fly through this.
I'm sorry.
There's a lot of information here.
So for the next one, RJ could be seen as a threat to the traditional authority of educators.
As one person explained, teachers sometimes come to their profession like, "No, it's my way. I'm the adult.
Students need to respect me," but you have to have an open dialogue of reflection, trust and mutual responsibility, which are necessary for RJ.
So, how to do that: one promising strategy for shifting traditional school hierarchies was calling on adults and students to adhere to the same core values of RJ, which are perspective taking, active listening, learning from mistakes, and taking responsibility for one's actions.
One staff member acknowledged that, you know, they would be hypocrites if they didn't do that themselves.
Other strategies are to create a culture of adult growth and self-reflection on issues related to race and class.
Also, if staff feel heard by leadership, it's easier for them to give up some control over the students.
Lesson three: schools must build comfort with new models of accountability.
The challenge with this is that punishment is deeply ingrained, and RJ is a bit of a culture shift.
So interviewees reported that in their school communities, some considered RJ to be soft...
[ Distorted audio ] ...no consequences, whereas advocates say it's hard and time-consuming but stronger and more effective.
There are also conflicting mandates about how to respond, you know, according to DOE guidelines versus NYPD School Safety Agents.
Even when staff do buy in, it's sometimes just easier to fall back on suspension, so you have to be able to build buy-in and trust in RJ.
One way to build that is firsthand experience, because seeing is believing; another is enhancing communication about RJ, letting people know what's happening and following up about the results of an incident or what happened with student referrals to other staff.
Anyway, right, enhancing communication is important.
Lesson four: to institutionalize RJ, it's important to devote time and resources that show RJ is a priority, which leads into our next point.
It takes a lot of work, and it's easier to suspend students, but if you have appropriate resource allocation, building in staff, space, and other resources, as well as training, then that does help with the institutionalization of RJ.
And it's also important to build RJ into school life, which includes the community building circles in advisory classes and events.
Lesson five is confronting adversity and engaging diversity. Students are dealing with a lot of adversity and stress on multiple levels, including individual, family, community, and structural, and that can amplify conflict and other issues.
So there needs to be school-based support to help students directly and to decrease school-based issues. To do that, schools need to provide a holistic approach, including extensive mental health supports.
Those can come through the community-based organization partners. Schools can also celebrate diversity through student-led efforts, culturally relevant education, political education, professional development, and culturally representative staff, while also allowing students time to vent and process oppression; these go hand in hand. And lesson six is basically saying, "We've got this": incorporate more RJ student leadership.
It increases trust, buy-in, and accountability to each other. It's kind of a positive feedback loop, where RJ promotes student leadership, and student leadership enhances the effectiveness of RJ.
I think I am running out of time, but how you do this is to create official student roles, whether as peer mediators, RJ ambassadors, or participants in Youth Court, and also to support organic student leadership, in some cases allowing students to initiate their own circles, whether formal or informal, asking staff for help, and also drawing on groups like Black Lives Matter to push culture shifts.
And I do have a wrap-up slide, but I'm going to end here.
>> Excellent.
Thank you so much, Dr. Jensen.
Next, we have Dr. Thomas Hanson, and he will be talking on a randomized impact evaluation of Capturing Kids' Hearts Program.
>> Thank you so much.
Yeah, I'm Tom Hanson from WestEd, and as Caleb mentioned, I'm going to describe the evaluation of the Capturing Kids' Hearts Program, but first, let me just express appreciation to the many WestEd staff, the South Carolina Department of Education, the Flippen Group, which is the developer of Capturing Kids' Hearts, and the district and school staff that made this study possible.
This is joint work conducted with many WestEd colleagues, including Gary Zhang, Sarah Gutenberg, Alexa Stern, Anthony Petrosino and Cindy Zheng.
So, first up, what is Capturing Kids' Hearts? Capturing Kids' Hearts is a school climate improvement program developed by the Flippen Group.
It is designed to provide staff with protocols, tools, and skills to strengthen students' connectedness to school.
The pillars of the program are to build strong bonds with teachers, to establish clear rules of conduct through codevelopment of social contracts, to work towards facilitating student-managed classroom behavior, and to target modifiable risk factors.
The theory of action, basically, is that the program aims to affect student behavior by changing teacher practice and behavior.
Teachers and other staff are trained to facilitate strong interpersonal connections with and among staff and students and to provide social skills instruction, pretty much the same factors I've just described, but there are meant to be explicit, clear, cocreated expectations, and probably most importantly, the key component is consistency across administrators, teachers, professional support staff, and all parties at the school.
And through those activities, students' relationships with teachers are expected to improve and be more positive, along with more positive relationships with peers.
Student voice is facilitated through self-management and developing expectations.
That increases school bonding, empathy, and altruism, and, far down the line in terms of the theory of action, lowers aggression and reduces violence victimization.
The training, just really quickly: the training consists of a 3-day foundational training provided to all staff in each school, a 2-day training for all school administrators, and then a more intensive 2-day training for these folks called process champions (nominated or self-selected staff in each school, maybe about five or six per school, who serve as mentors to other school staff on site), and then Flippen also provides 1 to 2 days of on-site coaching and consultation with staff, and that pretty much encompasses the training and the first year of implementation.
In the second year, there's continued consultation, plus booster sessions provided to staff at the schools.
New teachers and administrators in each school also go through the same set of foundational trainings that took place in the first year.
I'm not going to spend much time on this, but this basically shows the sequence, so there's a good bit of training in the summer before implementation and then fall, and then that on-site consultation takes place in the spring and then the boosters again in the following year, second implementation year.
So this is a 3-year, cluster-randomized study, conducted prepandemic.
We wish for the times when we could have the kind of teacher-student interaction that we had during the period of this study.
It involved 27 middle schools within four districts.
The data comes from self-report survey data collected from students and teachers in 2016, which was prior to implementation and then 2017 and 2018, so we have the first year of implementation and the second year of implementation.
It was a wait-listed design, so the schools assigned to the control group implemented Capturing Kids' Hearts after the study, after all the data from the study was collected.
We also relied on district archival data, absences, tardies, suspensions, disciplinary referrals and test scores.
The staff measures were aligned with the theory of action.
Remember, Capturing Kids' Hearts is designed to impact students via staff behavior.
And I'm not going to go through much detail here.
We have measures of school relationships, staff engagement, teacher perceptions of student voice and disciplinary climate and student behavior.
We had similar measures collected from students on school relationships, student voice, school engagement, student competencies and aggression and violence.
I don't have a slide for it, but we also collected data from district archival records.
That would be data on absences, tardies, school suspensions, disciplinary referrals, and student test scores.
I had to reach back and figure that out, as well.
Survey response rates were pretty good: 72 percent at baseline, and on average around 75 to 76 percent for the student survey and 80 percent or above for the staff survey.
This was a randomized controlled trial, and we tested for baseline equivalence.
We found no treatment/control differences in response rates, no treatment/control differences on staff-reported measures, and no real evidence of differences on the student-reported measures or the archival data, except that baseline outcomes tended to be more favorable in the treatment group.
We conducted a whole bunch of analyses, so I guess you'd say, like, out of 56 tests, we found two instances of a statistically significant baseline difference.
Those were on the measures of student empowerment and school bonding.
That's a lot of tests, so these results could be due to chance, as well.
But keep that in mind.
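That multiple-testing point can be made concrete with a quick back-of-the-envelope calculation, treating the 56 tests as independent for simplicity (an assumption, since the measures are likely correlated):

```python
from scipy.stats import binom

n_tests, alpha = 56, 0.05

# Expected number of false positives if there are no true baseline differences.
print(n_tests * alpha)                   # 2.8 "significant" tests expected by chance

# Probability of seeing at least 2 significant tests purely by chance.
print(1 - binom.cdf(1, n_tests, alpha))  # roughly 0.77
```

So finding two significant baseline differences out of 56 tests is entirely consistent with chance.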
Okay, so in terms of implementation.
Capturing Kids' Hearts uses a set of protocols, or tools, that teachers use to practice and facilitate bonding and the relational aspects of the school setting, and they're very simple.
One is that teachers greet students at the door each and every day with a fist bump or a handshake, and that process facilitates a connection, but it also allows teachers to check in on each student and see how they're doing.
It gives them a moment to notice differences in students' behavior, or maybe recognize that students are just not having a good day, or that they are having a good day.
The other component is the social contract.
So at the beginning of the year, in every classroom, students and teachers cocreate a social contract of behavioral expectations.
This study took place in middle schools, which is a critical component.
They basically codify how they would like to be treated by their peers and teachers in the school, and in middle school there are something like six different classes, so there's going to be a cocreated social contract with listed behavioral expectations in every single classroom.
Teachers also have these protocols to ask students to share information about personal experiences in class.
They call it good things.
I've been on calls with The Flippen Group, and it's kind of funny because you have these calls, and the first thing you do is, you go around and you talk about sort of good things that are going on around you, so it took a while to get used to that in the phone calls with the developer.
I wasn't very good at that, actually.
I had to make up some things.
Anyway, the final goal, the Capturing Kids' Hearts classroom, is that the classrooms become self-managed.
That is, the students, using the behavioral expectations, eventually come to correct other students' misbehavior.
So we asked a series of questions in both treatment and control schools about these four processes.
We asked students, and we asked teachers about how often they executed these practices, and this is the result for teachers.
Basically, this looks at intervention and control group differences in students' reports of teachers greeting students at the door.
So in year 2, 37 percent of students in treatment schools reported that this happened, versus 25 percent in control schools.
The frequencies are not that high, but there definitely is a contrast here in treatment and control groups.
We asked teachers the same question.
Actually, they all say they greet people at the door, both in treatment and control.
I have no explanation for this, for why there's a difference in the teacher reports.
Maybe it's a more informal greeting.
We also asked whether they cocreated a social contract.
You see there's a pretty big treatment contrast for the student reports, and there's a pretty big contrast for the teacher reports as well, but still, about half the teachers in control schools report that they intend to use a social contract too, at least based on self-reports.
Students are asked to share personal information.
This is the good things.
There's a contrast, but note that this is a fairly frequent practice in control schools as well, and there's actually a much larger treatment contrast in the teacher reports on this item.
Finally, this is the teachers sharing positive experiences with their classes.
This is all about establishing personalization and bonds.
There's a difference between treatment and control, but the difference is not that large.
And this is that self-management item; we see really small differences between treatment and control.
This is based on students' responses.
So in general, we are seeing a treatment contrast here, but we're also seeing that these practices are still pretty common in the control group, so now I'm going to move on to estimating program impacts.
We estimated impacts in two different ways.
Fortunately, the results were the same for both ways, so I don't have to add caveats.
We estimated the impacts longitudinally.
That is, by following students from the baseline year in sixth grade to seventh and eighth grade, or from seventh to eighth, and controlling for covariates assessed in 2016.
That was the longitudinal analysis; alternatively, we estimated impacts through a repeated cross-sectional analysis that included school-level covariates for the baseline measures.
That second, school-wide impact model allowed us to include more observations and achieve greater statistical power, and it also allowed us to focus on the effects for all students in the school.
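As a rough sketch of what that second, school-wide model might look like (simulated data, made-up column names, and a simplified specification rather than the study's actual models): the treatment-by-time interaction carries the impact estimate, and a school random intercept respects the cluster randomization.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1200
df = pd.DataFrame({
    "school_id": rng.integers(0, 27, n),  # 27 schools, as in the study
    "post": rng.integers(0, 2, n),        # 0 = baseline year, 1 = follow-up year
})
df["treatment"] = (df["school_id"] < 14).astype(int)   # school-level assignment
school_means = rng.normal(size=27)
df["school_baseline_mean"] = school_means[df["school_id"].to_numpy()]

# Simulated outcome with a built-in 0.25 SD impact in treated schools post-baseline.
df["outcome"] = (
    df["school_baseline_mean"]
    + 0.25 * df["treatment"] * df["post"]
    + rng.normal(size=n)
)

# Repeated cross-sections with a school-level baseline covariate and a
# random intercept for school, the unit of randomization.
model = smf.mixedlm(
    "outcome ~ treatment * post + school_baseline_mean",
    data=df,
    groups=df["school_id"],
)
print(model.fit().summary())  # the treatment:post coefficient is the impact estimate
```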
I'm just going to run through some of the differences in the staff-reported outcomes.
If you look at this graph, this is, sort of, adult-student personalization.
That blue line is for the intervention group.
The orange line is for the control group.
We see that there's a pretty stable trend in the control group, whereas adult-student personalization increases in the staff reports for the intervention group.
The effect size here is about 0.25.
All the effect sizes were around 0.25 or 0.3 standard deviations.
We see similar trends in the adult-student caring relationships.
Staff reported trust in students and peer supports.
Same thing.
You know, at baseline, there seems to be equivalence, and as time goes by, the growth tends to be more strong in the intervention schools.
Staff collegiality, staff efficacy, student voice/empowerment, rule clarity, the disciplinary climate, staff fairness, student responsibility: pretty much the same pattern, with the control group pretty stable.
Classroom disruption actually goes down more in intervention schools and violence/disruptive behavior goes down more in intervention schools as well.
Okay, so that's the staff measures.
The story is, "Oh, it looks good." The staff reports actually suggest that the program is associated with improvement in school climate along a lot of different dimensions.
So we looked at similar measures for the student self-report measures.
These are the school-relationship measures, student voice/empowerment, rule clarity.
What did we find? We found no impacts.
There was no evidence of any differences across time, in both the longitudinal models and the repeated cross-sectional models, on any of these outcomes.
Also for school bonding, school engagement, student competencies, aggression, and violence: no impacts.
What about the archival measures? The results for the archival measures are mixed.
We've found that, actually, absences increased more in intervention schools, both, you know, unexcused, excused and total absences increased.
Tardies decreased, and out-of-school suspensions actually increased more in the intervention schools than the control schools, and disciplinary referrals decreased, so we're puzzled, actually.
Why was there an increase in out-of-school suspensions but a decrease in disciplinary referrals? So the results for the archival measures are definitely mixed.
So just to summarize here: our implementation analysis, which I did not present, suggests this was a well-implemented intervention.
It was sort of the Cadillac version of Capturing Kids' Hearts in these schools.
The staff in the intervention schools received training as intensive as in any other implementation.
We found consistently small impacts on pretty much all of the staff-reported measures.
We found no impacts on the student-reported measures, and we found mixed, perplexing results on the archival outcomes: increased absences and out-of-school suspensions, but fewer disciplinary referrals.
That's it.
I could repeat the appreciations I started off with.
Again, we appreciate the funding from NIJ to complete the study, and a lot of folks were involved in the execution.
With that, I think Catherine is back, so we should pass it to Catherine for the next presentation.
>> Yes.
Thank you so much, Dr. Hanson.
Now we'll hear from Dr. Catherine Bradshaw on a comprehensive assessment of school climate to improve safety in Maryland middle schools.
>> So I just use the usual share-screen feature? >> Yes.
>> Okay.
Hopefully this works.
And then is it showing up in presenter mode now? >> Not yet, but you should be able to hit that button up in the top, and that should work just in the same way.
>> Okay.
Is it showing up yet? Sorry.
This is a different feature for me today.
>> No.
Yep.
I hear it.
I hear it now.
Now we're seeing something.
>> Okay, hopefully now...
>> Now if you can go to presenter view from there...
>> You would think I'd be able to do this every single day and...
There we go.
Is that working? >> That looks good.
>> Okay, great.
Thank you.
So I'm Catherine Bradshaw.
I'm the PI on this project, and I have a number of co-PIs: Elise Pas, Sarah Lindstrom Johnson, and Katrina Debnam, who are joining me virtually in spirit today, and then I wanted to also acknowledge my costar, Chelsea Duran, who has been a co-investigator and an analyst for this particular set of findings.
So this work is part of a broader set of projects that we have been conducting in Maryland, which we refer to collectively as the Maryland Safe and Supportive Schools Initiative; the middle school initiative is what we're going to focus on here today in terms of the National Institute of Justice funding.
But this really built on a very long-term partnership that we've had with the Maryland State Department of Education and two university partners: the University of Virginia, where I'm located now, and Johns Hopkins, where I also have a faculty appointment. Work in the School of Public Health at Johns Hopkins had started on this before I transitioned, and I have colleagues who continue at Johns Hopkins as well.
And then we have a mental health partner called the Sheppard Pratt Health System that's part of our implementing partnership, so this has been a long-term initiative that we've been engaged in with the state for several years, and in fact, it dates back to a prior round of funding.
I see a couple of my friends and colleagues who are on as participants in this session who were also part of the S3, or Safe and Supportive Schools, funding mechanism out of the US Department of Education. That preceded this project and in that case focused on high schools; we had 58 high schools that were part of that broader project, which used the three-tiered framework for evidence-based practices following the positive behavioral interventions and supports model.
And so in that high school trial, we had 58 high schools with a roughly 50/50 split, and we were able to document significant impacts of the training in the Tier 1 and Tier 2 supports using the PBIS, or multitiered systems of support for behavior, framework.
I listed down here the citation for that paper, and I'm in my office for the first time since August, and I just got a copy of the issue that just came out, so I will update that, but it appears in the journal Remedial and Special Education, so it's very exciting that they sent us a couple of copies.
But in general, we found in the high school trial that we were seeing significant impacts on classroom climate, particularly as observed by outside assessors, as well as some impacts on student reports of school climate and impacts on implementation support.
And so when the NIJ funding opportunity came along, we thought, in the spirit of this movement towards replication, that we would take the general model we had tested at the high school level, make a few developmental tweaks and also some tweaks based on lessons learned, and test that model at the middle school level.
So now I'll transition to talking about what the middle school project funded by NIJ is focused on. There are three main aims, which basically leverage what we had learned from the high school trial and its outcomes, as relevant to middle schools, to be tested here with the NIJ funding.
So the first was the developmental adaptation of the high school framework around multitiered supports for middle school settings, and then we went on to test the efficacy of this particular model. I'll go into a little bit more detail about what the actual intervention is in just a second for those of you who may be less familiar with positive behavior support or multitiered systems of support, but we were also interested in impacts on safety and climate, prosocial behaviors, engagement, achievement, and such in this sample of middle schools, 40 to be specific.
And then we also have a cost-benefit analysis.
So today I'm going to focus largely on some of the observational data, because that's fresh off the press and lets us talk about some of the impacts based on outside observers.
As was noted in the prior talk, sometimes you get a difference of impact based on the respondent, so it sounds like in some studies you get teacher or staff reports that are significant.
In other studies, you might get student impacts.
Well, we also have a set of measures that we've been developing and honing around observing school climate, and so this provided an opportunity for us to try to triangulate the impacts, so I wanted to talk about the observational data today because it's a little bit novel.
So just to give you a snapshot of the sample, these are all Maryland middle schools in more traditional six-to-eight middle school settings.
None were in a K-to-eight setting.
They were just all stand-alone traditional six-to-eight.
They were also implementing, prior to our enrollment in the project, the Tier 1 supports through positive-behavior support, PBIS, and that was part of the state scale-up.
That was not part of our trial in terms of our intervention supports.
What we were really interested in was to try to see how to build on those Tier 1 supports and load up the triangle, so to speak, to implement other evidence-based practices.
We had 3 years of participation, so there was a fall data collection in one year and then spring of the subsequent years, and then there was true random assignment, 20 intervention and 20 control or comparison.
And again, remember that the control/comparison schools were implementing PBIS Tier 1 supports, and we certainly can't restrict them from doing other activities that they may be doing on their own.
So we assessed fidelity of implementation of other supports in the control arm, because they were not true controls but rather a comparison.
So as I mentioned, there was the three-tiered positive behavior support logic that we had used, and the schools, in both conditions, at baseline, had been trained in the Tier 1, but the focus here was on providing additional training in this menu of evidence-based practices and three-tiered framework and using school-climate data to select which intervention should be implemented at those Tier 2 and Tier 3 supports.
So we brought in other evidence-based practices, including Wendy Reinke's Classroom Check-Up, Gil Botvin's Life Skills Training program, the Check & Connect program, some elements of restorative practices, John Lochman's middle school Coping Power program, and elements of Threat Assessment by Dewey Cornell. Schools were not expected to implement all of these, but rather to pick one or two other interventions at the Tier 1 and/or Tier 2 supports that would be more integrated in, so schools could have that choice.
That was very intentional, and we were certainly not expecting or encouraging them to implement all of these strategies.
So in terms of the coaching supports that were provided across these three different tiers, we had coaches from the projects that we referred to as the MDS-3 coaches, and so these were systems coaches that were working collaboratively with the school teams to use data, particularly around school climate, to develop a school-implementation plan and that they were also working in conjunction with the various school teams.
Schools might already have a PBIS team or a school-improvement team or a safety team.
We often try to encourage them to be doing multiple activities rather than having siloed teams, and so there was often an effort to integrate those teams together since schools also had diversity teams, which is very relevant because we provided some additional training around cultural responsiveness that may come out based on discipline data or school-climate data.
Our coaches also worked one-on-one with specific teachers, using that Classroom Check-Up classroom-management approach to optimize student engagement, and then we provided training and technical assistance.
So for example, if a school said they wanted to use Gil Botvin's Life Skills Program, we would buy that set of materials and training materials, much of which is available online, and get them access to that so that way they could implement it in the school and really own and implement the true model as intended, and then overall trying to build capacity to sustain the activities.
So we used the US Department of Education's Framework for School Climate, which includes three broad features of safety, engagement and environment, and so these mapped onto the different evidence-based interventions that I just talked about.
And schools would get data based on the school-climate information that was anonymously completed by the students and would use that in conjunction with support from their coach to make decisions about which of the interventions they might want to implement.
We have, through our school-climate efforts, validated this model at both the high school and now the middle school level. That's kind of the cool thing about doing a project at a high school and then going to a middle school: you can contrast the models, and we have found that the same model fit occurs for middle as well as high school, and we have a number of papers focused on the construct of school climate.
Should you be interested in those, I'd be happy to share out some of the school-climate work more generally, and this is just a visual to further clarify the climate model that we were using around safety, engagement, environment, and this is one that's certainly endorsed by the US Department of Education, the Safe Schools Initiative, and one that our team has been involved with for several years.
And so we didn't want to just come in and collect the data and scurry off.
We really wanted the data-collection activity to be a youth-development opportunity, a youth voice, a youth-engagement activity, and so schools in both conditions contributed the school-climate data using an anonymous online survey.
There was a passive parental-consent process, and it was technically administered by the district or the state in the partnership, and we were able to gain access to it for analysis, but it did not have individual identifiers, and so we aren't able to link Johnny's data over the multiple waves but rather do those linkages at the school level or grade level.
So we had posters like this all over.
In fact, we had a youth group that came up with this tagline, "What kind of school do you want your school to be?" And there were a lot of youth-development activities that were embedded in this, including a video of kids talking to other kids about why they should take the survey seriously and how the data could be used.
And there were data that were shared out not only to the school staff and parents, but also to students as a result, so we tried to talk the talk and walk the walk when it came to youth engagement in the programming.
So that online survey does generate several reports that are available in real time, so we didn't have to take the data away and crunch it on our own.
It was available to the school administrators through a password-protected site.
There was an executive summary, a quick report, and a more detailed advanced report so they could really dig into the data, and in fact, some of our work on an earlier iteration of the survey was funded by NIJ back in 2005.
So we've been working with this survey system for many years and have several years of data and had expanded it when we got funding from the Maryland Safe and Supportive Schools Initiative to focus more generally on issues around school climate.
So these data are just available to the school to inform decision-making, and here are some of the other outcome data that we used; the climate data, which is listed there on the bottom, is kind of a process tool to inform decisions but also serves as an outcome.
And then the first line here is actually the fidelity data.
That is assessed at the school level, so we have two tools that were administered by outside assessors unaware of the schools' intervention status: the SET and the ISSET, which have been documented previously as standard tools for assessing Tier 1 through Tier 3 fidelity of the different evidence-based practices.
And today I'm going to talk largely about the classroom observations and some nonclassroom observations as well using those two tools that are listed in the middle, the ASSIST as well as the SAFETY.
These measures have been well validated by our team and others; they are outside tools with which trained observers can come into the school setting and rate different elements of student interactions, and they also map onto those safety, engagement, and environment features.
This is just to give you kind of a sense of the scale and the scope of the project.
So annually, we saw it vary just a little bit, but roughly between 24,000 to 28,000 students were completing the survey across the different years.
That baseline was the fall of that first year, and then the others were all spring, and then we also had pretty good turnout from the staff.
On parent-response rate, we're not alone on this, but that was pretty low, and so it was more kind of an additional piece of information and not a main outcome for us.
And as I noted, I'm going to focus today a little bit more specifically on the observational data reported here and the fidelity data at the school level.
There was also coaching support, and we have other studies, including a recent study led by Elise Pas from our team that has a very detailed analysis of the coaching data, and it's focused also on some cost analysis, about how much does the coaching cost, not only for the coach but also the people that you're coaching? So if you coach an administrator, it technically costs more than it would a teacher in a classroom or a paraprofessional in a school.
So we have another paper that came out over the summer in 2020 that digs into the coaching data, so I'd refer you to that.
I think she did a very nice job leading that effort.
Although, I'm probably biased, so...
So I'm going to talk here a little bit more just about the observational data, focused on not only the SET/ISSET data but also the data that we have from the outside observers, just a little bit of information here on our analytic approach and some of the covariates that we adjusted for in the analyses.
And so in terms of the implementation, just kind of a high-level takeaway: interestingly, we didn't find any significant differences on the SET, and that's likely because, as you'll recall, all the schools in both conditions already had prior training in Tier 1 supports, and the SET is largely focused on Tier 1 supports.
And so on some level, it showed that we didn't compromise integrity in the control condition, but didn't really have too much room to move in the intervention condition.
When it came to the ISSET, similarly, there were no significant differences on those three scales, but there was a trend toward significance for targeted interventions, and this again shows that there was already a pretty good foundation of activity happening in schools in both conditions and not really that much room for movement.
With regard to the ASSIST, this is Assessing School Settings: Interactions of Students and Teachers, a measure that has been used in several other projects by our team and others and has been well documented, and I'm happy to provide you information about it, but it includes both tallies, literal counts done within a 15-minute window in a classroom, as well as global ratings.
The global ratings are more of a Likert-scale rating: "While you were in the classroom, did you see..."
They are more akin to the CLASS data Bob Pianta and others have developed but are more behavioral in nature.
So in terms of our significant impacts, we saw some trending improvements as it related to proactive behavior management, so that's something that teachers do, proactively trying to correct things, prepare students for transitions.
We also saw a significant decrease in students' disruptions based on the outside assessors.
However, there are no significant differences on the other scales that are listed here, including teacher approvals, disapprovals, reactive-behavior management or opportunities to respond.
These are just some of the data.
Our lines aren't quite as clean and straight as some of the others that were presented.
Our data seemed to bop around a little bit more, and it may well be because we were counting up tallies, and so you kind of get a little bit of a seasonal change here in some effects that are kind of going on in the project around changes over time.
This also shows you some of the disruptive behaviors that are changing for the control and the intervention conditions and where the significant impacts are.
And then these are some additional data, reporting the findings for the ASSIST and the subscales, so you can see some significant impacts on anticipation and responsiveness, so the teacher's ability to really be able to anticipate when there's going to be a problem, like precorrecting for a transition as well as monitoring and impacts on students' socially disruptive behavior.
Just a little bit more visualization of the data just so you can see kind of where some of these impacts that are happening.
This is teacher monitoring and teacher anticipation and moving finally onto the nonclassroom global ratings.
So we not only worked in the classroom, but we also did observations using another tool, the SAFETY, outside of the classroom setting, to see whether, because it's a schoolwide effort, you see impacts penetrating outside of the classroom as well as inside. There were some impacts on relationships in the first year, on adult monitoring, as well as a bit on rules and consequences, but generally not a consistent pattern of findings in the nonclassroom setting; this is just to give you a snapshot of how those data are kind of moving around.
So just to wrap up here, in terms of some conclusions, I want to acknowledge a few limitations and design considerations, not necessarily limitations of the study. The research question was framed around, "What's happening over and above the Tier 1 supports for schools that get the additional training?" And since all the schools already had training and high fidelity in Tier 1, we didn't really expect to see too much movement there, which ended up being the case.
So with regard to the other evidence-based practices that were layered on, schools varied in that, so obviously some schools might be using Botvin, and others were using the Classroom Check-Up, so there may be differences in the use of the evidence-based practices both across schools and within schools.
For example, some classrooms, like seventh grade, would be using Gil Botvin's program, but not necessarily sixth grade.
Similarly for Coping Power, so it's harder to take these kind of classroom-wide or schoolwide models and then provide an opportunity for schools to really roll them out on their own, and then they're kind of picking and choosing where they want to place them, and that might be accounting for some of the reasons why the effects were a little bit attenuated in some of those areas.
In general, though, when we compared what was going on in this trial relative to the high school trial, there was a lot more room for improvement at the high schools, I think, because we didn't have as much support in place at the high school level when that project started, but we also thought we had kind of a better independent variable here.
We had learned quite a bit from the high-school trial.
We thought that we had greater penetration, so we were actually a little bit surprised that the effects were not as strong at the middle school as they were at the high school.
Some implications and takeaways here before I wrap up with next steps: changing and improving climate is difficult, and even those changes that do occur are a little bit hard to sustain over time.
Some classroom and nonclassroom effects were observed in this study, but the prior paper I just mentioned actually found a more consistent pattern favoring the intervention condition in the high-school trial. Again, maybe that's more room for improvement, or just fewer supports in place at baseline, that allowed us to have a bigger impact there.
And then we've been thinking quite a bit about where you want to make your investments, and I think we are still seeing evidence that your biggest bang for your buck, so to speak, is a focus on the universal, Tier 1 supports.
Many schools get distracted by, and excited about, the more intensive evidence-based practices, but I actually find that you get the biggest impact by having a really solid Tier 1 support in place.
In terms of our next steps, we have a lot more analyses to go.
Although the trial has technically ended and we've had a number of papers come out, we always feel like there's more we can do. Subgroup effects are something we're digging into now, and we're also looking at fidelity and dosage of the specific evidence-based practices, variation in coaching, and some baseline adjustments for the SET/ISSET as possible predictors of change over time.
We actually found that was a significant factor in the high-school trial, so we'll be doing that. And since we do have a pretty large N when you add the 58 schools in the high-school trial and the 40 schools in the middle-school trial, pulling them together will also allow us to do some analyses through data harmonization. The analyses I presented here are specific to the middle schools, but it'll be interesting to do a more advanced analysis with that larger N.
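A minimal sketch of what that pooling step can look like, assuming hypothetical file and column names rather than the team's actual data:

```python
import pandas as pd

# Hypothetical extracts, one row per school per year; the real trial
# datasets are not public and these column names are placeholders.
hs = pd.read_csv("high_school_trial.csv")    # 58 high schools
ms = pd.read_csv("middle_school_trial.csv")  # 40 middle schools

# Harmonize to a shared set of school-level variables, then tag each
# record with its source trial before pooling into one analysis file.
common = ["school_id", "condition", "year", "set_score", "suspensions"]
pooled = pd.concat(
    [hs[common].assign(trial="high_school"),
     ms[common].assign(trial="middle_school")],
    ignore_index=True,
)  # combined N = 98 schools for the pooled analyses
```

Keeping the trial indicator in the pooled file lets any later model adjust for systematic differences between the two trials rather than treating them as one undifferentiated sample.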
And then we also have a series of cross-analysis papers.
I mentioned one by Elise Pas on the coaching piece, but Sarah Lindstrom Johnson from our team has also done quite a bit of cost analysis, looking at the costs associated with evidence-based practices in relation to Tier 1. She actually finds that this model is considerably less costly than other evidence-based practices, even when it's all bundled together in the tiered framework, so that may also help us evaluate how much impact we should truly expect from a relatively low-cost intervention. There are also a number of programmatic expansions and next steps, including some additional areas of support for administrators.
We found that administrator support is an area we would like to focus on more extensively, and I have a colleague, Keith Herman, who's doing some work funded by NIJ focused on administrators; we think that's a really viable next step. We're also thinking more about booster Tier 1 supports and a focus on sustainability. There may have been improvements that were not picked up by our measures, so we also need to think more about our measurement to ensure that it's sensitive at that level. So that's it for me.
>> Excellent.
Thank you so much, Dr. Bradshaw.
Last, we will hear from Dr. Renee Ryberg on "Improving School Safety in the District of Columbia: Lessons Learned From an Evaluation of Safe School Certification." >> Great.
Thank you for unmuting me.
All right.
So I will be presenting lessons learned from our evaluation of the Safe School Certification program in D.C.
There we go.
Before I dive in, I want to recognize the team of my many collaborators on this work, so this project was a joint collaboration between Safe School Certification in D.C., OSSE and the Office of Human Rights as well as a large team at Child Trends.
I want to point out our PI, who's up here in the top-left corner, Deb Temkin, and I also want to recognize the generous support from NIJ that allowed this work to happen.
So in this presentation, I'll start out with our motivation for the study and the context in D.C. that inspired it.
I will then describe the Safe School Certification intervention that we evaluated from both an implementation and outcomes perspective.
To foreshadow a little bit, we had some substantial difficulties implementing Safe School Certification in D.C., which led to significant attrition, complicating our outcomes evaluation.
And then finally I will close with some proposed implications for both policy and practice.
Okay.
Starting with motivation, Child Trends had an established relationship with D.C. going into this project.
We've helped D.C. support their implementation of the Youth Bullying Prevention Act since 2014. In terms of D.C.'s context, the city has higher than average rates of students experiencing violence in schools. You can see this here in the elevated blue bars for D.C., above the orange national average, for students reporting being threatened or injured on school property, being in a physical fight, and not going to school because they felt unsafe. So D.C. had an interest in trying to reduce violence and improve school safety, and they partnered with us to do so.
Everyone in this virtual room knows about the benefits of a positive school climate, but schools have gaps when it comes to identifying, implementing, and scaling effective approaches, and research has shown that this is especially true in urban areas.
So organizational capacity building is an approach to help communities gain those skills to be able to make decisions and affect change.
Enter Safe School Certification.
Safe School Certification is an organizational capacity-building model designed specifically to help schools build capacity to improve their school climate.
It's based on a technical-assistance framework.
So all activities for Safe School Certification are guided by this eight-part framework.
It consists of eight interconnected elements that define a school's capacity for improving school climate.
The first three elements in light blue, data, leadership and buy-in, are considered foundational to the next five elements, so I'll focus on these three.
It's important to note that the framework does not specify how schools should accomplish these elements; it gives schools the flexibility to determine the best way for their individual school to accomplish each one, with the help of their technical assistance. So this is not a manualized or prescribed curriculum; it allows a lot of room for flexibility.
Like I said, technical-assistance specialists guide schools through the framework and help them figure out how to best accomplish each of these components.
So for example, leadership refers to having a team representing the whole school community leading the school through the Safe School Certification process.
This can take the form of having a core leadership team representing not only principals but also teachers, support staff, et cetera, and then also having a student leadership group to bring the student voice.
Data refers to collecting data and using it to drive decision-making, so throughout the course of this implementation, similar to what Catherine just presented on, we provided schools an annual school climate report.
This was based on student survey responses, where we reported out three domains of school climate (safety, engagement, and environment) and how students' perceptions of those domains varied by their characteristics, things like gender, sexual orientation, grade level, and race.
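As a minimal sketch of how such a subgroup report can be produced, assuming an item-level survey extract with hypothetical column names (this is not the actual report pipeline):

```python
import pandas as pd

# Hypothetical extract: one row per student survey response.
survey = pd.read_csv("climate_survey.csv")

# Mean score on each climate domain, broken out by student subgroup,
# mirroring the structure of the annual school climate report.
domains = ["safety", "engagement", "environment"]
for group in ["gender", "sexual_orientation", "grade_level", "race"]:
    report = survey.groupby(group)[domains].mean().round(2)
    print(f"\nDomain means by {group}:\n{report}")
```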
So as schools work their way through this framework, they must document the steps that they've taken for each element.
This documentation is then submitted to an external certification advisory board composed of community representatives and local school-climate experts.
The community advisory board reviews the submission and provides feedback to the school, and then at the end of the process if schools have met the qualifications for all eight of these parts of the framework, they can then develop a sustainability plan for their work and become officially certified as a safe school.
So that is the intervention itself.
Going to move on now to our study of it. We recruited 26 middle and high schools in D.C. to participate in the study.
These represented both public schools and public charter schools, which are very common in D.C.
We then assigned these schools into 13 matched pairs.
They were matched based on student demographics and school characteristics.
And then, within each pair, we assigned one school to a treatment condition that received the full Safe School Certification intervention, including technical-assistance support, and the other to a control condition in which schools were free to complete the intervention on their own but did not receive TA. Then we started our implementation.
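As a sketch of that pair-matched design (the talk says schools were "assigned" within pairs without specifying the mechanism, so the coin flip below, along with the matching variables and file name, is an assumption for illustration):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)

# Hypothetical school-level file with the characteristics used for matching.
schools = pd.read_csv("dc_schools.csv")  # 26 middle and high schools

# Sort on the matching variables so similar schools end up adjacent,
# then pair them off; a real matching would use a proper distance metric.
schools = schools.sort_values(["enrollment", "pct_low_income"]).reset_index(drop=True)
schools["pair_id"] = schools.index // 2  # 13 matched pairs

# Within each pair, one school gets the full intervention plus TA and
# the other serves as a control (random assignment assumed here).
def assign(pair: pd.DataFrame) -> pd.DataFrame:
    return pair.assign(treated=rng.permutation([True, False]))

schools = schools.groupby("pair_id", group_keys=False).apply(assign)
```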
Like I said, we had significant attrition over the course of the project, so in year one when we did the assignment, we had 13 schools in each group, and this dropped over time, so by the final year of the study, we had eight schools.
That's five in the control group and three in the treatment group.
So that's the number of schools that made it through our study, but I also want to give you a sense of how many schools made it through that eight-part framework, through the intervention itself.
All right.
So we see in blue that five treatment schools and three control schools made it through those first three foundational elements.
Of those, three treatment schools, in orange, made it all the way through the framework, but just one of the control schools did, and just one school, which happened to be in the treatment group, made it all the way through certification.
Okay.
Moving on to our implementation evaluation, it aimed to answer three research questions.
One: What factors contributed to schools' decisions to participate in the project? Two: What factors were associated with sustained engagement? And three: To what extent did schools actually build organizational capacity through their participation? For the sake of this presentation and brevity, I'm just going to focus on the last two questions. To answer them, we conducted interviews throughout the implementation period with school points of contact, the technical-assistance specialists, the program developer, and the advisory board.
Okay.
First up, what factors were associated with sustained engagement? We did see that engagement with the program was pretty slow and uneven.
Just four schools were consistently engaged in the process and ultimately completed the framework.
So the first factor we found was baseline capacity.
Schools had to have some level of baseline capacity to be ready to participate in the intervention.
Some schools were acting reactively rather than proactively to address problems, and so they were constantly putting out fires and not able to maintain engagement in the process.
Relatedly, it was really helpful when schools were able to think big picture about the process from the beginning so not just going through the motions but really considering why they were participating in this intervention.
What did school climate mean to them, and what did they want to improve? It was also important for schools to have a champion from the beginning leading the charge.
This was especially important in the first 2 years of the program, someone to kind of get people organized, lead meetings, assemble a leadership team, et cetera.
It was important that this person had strong leadership skills, strong people skills and enough authority to motivate staff, but then, as participation in the program continued, it was important for leadership to be shared amongst a group of people.
This took time to develop, but it helped schools overcome common challenges like turnover among staff.
Along the way, we found that data kept schools engaged.
Schools were able to meaningfully engage with those school climate reports that I mentioned and use them to guide decisions, and we also found that the technical-assistance component was key to facilitating progress through the intervention.
So it's great that schools remained engaged, but did they actually build capacity? Going to give you a famous research answer here, which is, it depends.
It depended a lot on the level of engagement; one TA specialist commented that this is very much a "what you put in is what you get out" experience.
What we did see was that schools were able to grow in terms of their data literacy.
We also saw that schools were able to grow in terms of shared leadership throughout the process, so this wasn't a model that they were used to operating under, but they grew into it over time.
All right.
Moving on to the outcomes evaluation, the outcome of interest was student perceptions of school climate.
This model of school climate will look familiar based on Catherine's presentation but is not identical.
This is based on the Department of Education's school climate model.
So we measured school climate with the Department of Education's student survey, which assesses three domains of school climate: engagement, safety, and environment.
We have validated this measure in some of our other work, and we used those findings to inform how we ultimately measured these three domains using structural equation modeling for our outcomes evaluation.
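To make the measurement idea concrete, here is a minimal sketch using the third-party semopy library, with each domain as a latent factor; the item names are placeholders, not the Department of Education's actual survey items, and this is not the team's published model.

```python
import pandas as pd
from semopy import Model  # third-party SEM library with lavaan-style syntax

survey = pd.read_csv("student_survey.csv")  # hypothetical item-level file

# Each climate domain is a latent factor measured by several survey items.
desc = """
engagement  =~ eng1 + eng2 + eng3
safety      =~ saf1 + saf2 + saf3
environment =~ env1 + env2 + env3
"""

model = Model(desc)
model.fit(survey)
print(model.inspect())  # factor loadings and covariances
```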
In terms of research design, I'm going to give a super high-level overview here, but feel free to ask me more questions.
Students were surveyed in each of the four years of the intervention, and we used a repeated cross-sectional design to examine how student perceptions of school climate at the school level changed over time.
Ideally, we would use a multilevel model here, but due to the high level of attrition, we didn't have enough schools remaining in wave four to allow for this, so instead we clustered standard errors within schools.
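One plausible way to set up that school-clustered analysis (the exact specification isn't given in the talk, so the model form, file, and column names below are assumptions):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical school-by-wave panel of mean climate scores, built from
# the repeated cross-sections of student surveys.
panel = pd.read_csv("climate_by_school_wave.csv")

# Linear time trend, treatment indicator, and their interaction; the
# year:treated coefficient is the per-year TA-versus-control contrast.
fit = smf.ols("safety ~ year * treated", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["school_id"]}
)
print(fit.summary())
```

Clustering on school acknowledges that survey waves from the same school are not independent, which is the role the multilevel model would otherwise have played.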
The only other detail I'll mention here is that we used an intent-to-treat design, so schools are included in our analyses whether or not they participated throughout the full course of the study, and here's what we found.
In each of these graphs, the control schools are in blue, and the schools receiving the technical assistance are in orange.
In general, the pattern is that, in control schools, blue, school climate slightly decreased in each domain over the study period, so you can see those blue lines slightly decreasing.
It's the most dramatic in safety.
In the schools receiving technical assistance, though, shown in orange, school climate remained pretty steady or even slightly increased, which you can see most dramatically in the case of safety.
So these graphs are on a standardized scale; the full range shown is one standard deviation.
So while these differences between the treatment and control groups are statistically significant, they're quite small: in terms of effect sizes, perceptions of school climate for students in schools receiving technical assistance improved by about a tenth of a standard deviation per year compared to students in control schools.
Notably, though, these findings were robust across a series of sensitivity analyses, including when we looked at just those eight schools that participated in the fourth year of data collection and when we used an alternative conceptualization of the treatment group. Thanks to the thoughtful suggestion of a reviewer, instead of defining the treatment group as the schools that received technical assistance, we defined it as progressing through the framework, looking at schools that made it through those three foundational elements. We found that schools that made that progress didn't necessarily have improved school-climate scores over time, indicating that it was really something about the technical assistance itself that helped improve school climate.
Okay, so to recap the findings: we had significant challenges with sustained engagement and attrition throughout the course of the study, noting that, at baseline, it was helpful for schools to have an initial level of capacity, a champion, and big-picture thinking in order to sustain engagement with the program. Nevertheless, we found significant but very small associations between participation and improvements in school climate, and schools did improve their organizational capacity in terms of data literacy and shared leadership, largely thanks to the technical assistance.
The key takeaway here is that organizational capacity-building interventions may be limited by the very conditions that inspire them, including high levels of turnover.
And turning to implications, first I want to present an implication from the policy perspective. We saw that school-climate data helped sustain engagement in the project, and schools actively used it in their decision-making, so school-climate data is important.
As part of our work with the National Association of State Boards of Education, or NASBE, we have tracked how many states have policies requiring schools to collect school-climate data, and so I'll walk you through this map.
The dark green states with the stars require the administration of school-climate surveys.
There are 12 states that require that.
An additional five states plus D.C. encourage the collection of school-climate data; these are the medium green with squares. In the next lighter color, you can see that 16 states mention school-climate data or school-climate surveys in noncodified policies. But there are 18 states, those in the lightest green, that do not address school-climate data at all, so we have a long way to go.
And then thinking from a research and a grant-making perspective, the field is moving more towards whole school interventions, but this work is complex, and we need to take into account the complexities of this type of work as we do it.
So 91 percent of NCSER- and NCER-funded studies have weak or no results.
A recent article provides a framework for better understanding why we have so many null findings, including many of those shown today; the reasons include frequent leadership turnover and complex governance structures.
We ran into both of these issues on our project.
So we need to take these realities into account when planning and evaluating whole school interventions.
These are complex systems.
We don't have as much control as we would like, so here are some considerations for grant makers and researchers in this space.
First is time.
It takes a long time to work through complex systems, and it took a long time for our intervention to take hold within schools; this needs to be accounted for.
Next is study size.
It's important to anticipate from the beginning high levels of dropout throughout the process so you don't end up with small Ns and lots of attrition like we did.
And finally, research design: RCTs are very difficult to implement in these complex systems. I commend my peers here who managed to make it happen. When we're thinking about evaluating interventions at this systems level, we need to check our assumptions and make sure what we're proposing is really reasonable.
Thank you.
>> Excellent.
Thank you so much, Dr. Ryberg, and thank you to all the speakers for presenting their research and taking the time to talk to us today.
We do still have a good 5 minutes left in our session here, so if anyone has any questions, please feel free to submit them to the question-and-answer panel, and I can try to field those questions.
So nothing is coming in yet, but I had a question myself, really for everybody: What kinds of partnerships did you see as crucial drivers for your work, particularly partnerships around data and information sharing? Whoever wants to start...
>> I can start off, just because I've done a lot of thinking and actually a little bit of writing about using a community-based participatory research approach to develop authentic and lasting partnerships that have a shared vision. Our partnership in Maryland started in the late 1990s and has persisted, with some bumps along the way from changes of leadership and shifting priorities, but I think that's been critical for getting big projects like this launched and for getting true, sustained investment, not only in the programmatic activities but also in the integrity of the research design.
Sometimes partners get really excited about one element of the project but don't have as much of a true investment in the data and the evaluation that can inform decisions. We've been very fortunate to have a long and lasting partnership with the Maryland State Department of Education, and that has ebbed and flowed a little under different administrations, so I don't want to give the impression that it's perfect or always as robust as it once was. A true partnership does need to grow in different ways and bring in different perspectives. For example, when we were launching our project, the state was very interested in restorative practices, and at that time there wasn't an evidence base for restorative practices.
That evidence has since been accumulating, and other studies have launched, but when we thought about what was going to be on our menu of, quote, evidence-based practices, the state really wanted restorative practices on the menu because that was going to be critical for getting other folks to the table, and it was a priority for the state.
So that was an area where we had to bend a little, because it allowed us to engage more partners in dialogue on the project; if we had stuck to just the What Works Clearinghouse list available at that time, we couldn't have done that. That's just an illustration of an area of compromise that helped meet the overall goals of the project while also enabling the partnership to be successful.
>> Let me just add on: this study we discussed today could not have succeeded without the involvement of the South Carolina State Department of Education. They were critical to helping facilitate district involvement, so there's a kind of grassroots involvement and then an organizational partnership that I think is really helpful to making these things work.
>> For us, our key data partner was OSSE, the Office of the State Superintendent of Education, but we were not partners with D.C. Public Schools, which made things really complicated, so if we were to do this again, I think we would try to partner with DCPS from the get-go.
>> And for us, we hadn't worked with NYCDOE before, except on a small project. Our PI had some connections, and when we got this grant, that really helped us. I shouldn't say we became partners with them, but we were able to implement this study and have access to their data, and we connected with several people there who really supported the case studies. Then the next year we applied for another NIJ grant to implement our restorative-justice initiative, which is an RCT that is wrapping up now, and because of the connection we had built through our first grant, they were more supportive of allowing us to come in and implement that.
>> Well, excellent.
Well, that definitely seems to speak to the importance of these partnerships.
I'm sorry.
I don't want to interrupt anybody.
Well, we are getting towards the end of our session here, and so I did want to, again, thank all of the speakers for taking the time to talk to us about their very important work on trying to understand this really complicated issue of school climate.
We'll have about a 15-minute break, but please join us for our next session.
That's a plenary discussion on the causes and consequences of school violence, and that'll be at 2:15.
Thank you, everybody.
>> Thank you all.
Nice seeing you.
Disclaimer:
Opinions or points of view expressed in these recordings represent those of the speakers and do not necessarily represent the official position or policies of the U.S. Department of Justice. Any commercial products and manufacturers discussed in these recordings are presented for informational purposes only and do not constitute product approval or endorsement by the U.S. Department of Justice.