Bullying Prevention and Response - Breakout Session, NIJ Virtual Conference on School Safety
On February 16-18, 2021, the National Institute of Justice hosted the Virtual Conference on School Safety: Bridging Research to Practice to Safeguard Our Schools. This video includes the following presentations:
Reducing Youth Violence by Leveraging the Influence of Network Brokers: Preliminary Results of Comprehensive School-Wide Intervention, Richard Gilman
The involvement of peers holds much promise for school-based anti-violence efforts to reduce the "bystander effect" (individuals who notice but avoid disclosing information to help a real or potential victim). This presentation will show how network "brokers" (i.e., those having direct relationships with peers who themselves do not have a direct relationship with each other) can be used to reduce the bystander effect. Data obtained over the first three years of a four-year study reveal significant reductions in school-reported violence episodes, self-reported aggression, and collective interpersonal distress, and significant increases in peer-to-broker disclosure.
Evaluating the Olweus Bullying Prevention Program in U.S. Urban Middle Schools, Terri Sullivan
We evaluated the Olweus Bullying Prevention Program (OBPP) using a multiple-baseline experimental design. For teacher ratings, we found significant main effects across all subtypes of aggression and victimization, with some variability in the timing of effects. The pattern of findings showed delayed intervention effects for boys and a weaker impact of the OBPP on 6th graders. We found main effects for student-reported cyber aggression and victimization, relational aggression, and a composite of physical, verbal, and relational victimization. Decreases in victimization emerged in the 1st or 2nd year of intervention, and reductions in aggression emerged during the 3rd year. Qualitative data that help to better understand these patterns of findings will also be presented.
Randomized Impact Evaluation of the No Bully System, Thomas Hanson
The No Bully System (NBS) is a set of interventions that are designed to activate adult and peer support for targets of bullying in a school. The goal of the study was to determine whether NBS reduced the recurrence of bullying perpetration and victimization among students, whether NBS specifically reduced bullying perpetration and victimization among those students at risk of bullying involvement (victims and perpetrators), and whether NBS improved perceptions of school safety, peer support, and other indicators of school climate among all students in participating schools. The impact evaluation used a cluster randomized experimental design that involved 24 elementary schools in the Oakland Unified School District (California). Results indicated that bullying victimization declined and safety perceptions increased among bully victims. Students in intervention schools who were at very high risk of being bully victims at baseline exhibited substantial reductions in victimization compared to their counterparts in control schools. No impacts were detected on school-wide measures of school safety, peer support, and other indicators of school climate for all students in participating schools.
A Systematic Review and Meta-Analysis of Interventions to Decrease Cyberbullying Perpetration and Victimization, Josh Polanin
Numerous school-based programs have been implemented to decrease cyberbullying perpetration and victimization. Although several previous meta-analyses have been conducted on the topic, the current review covers both the published and unpublished literature and uses modern meta-analytic techniques. A total of 50 studies and 320 extracted effect sizes spanning 45,371 participants met the review protocol criteria. Results indicated that programs reduced cyberbullying perpetration (g = -0.18) and victimization (g = -0.13). Translated to the newly developed probability of positive impact, we estimate that future implementations have a 76% and 73% probability of decreasing cyberbullying perpetration and victimization, respectively.
>> ...very much.
Good day, everyone.
It's 3:30 p.m.
Thank you so much for investing your time with us today. My name is Basia Lopez, I'm a Social Science Research Analyst at the National Institute of Justice, and I will be moderating this day-one breakout session number four.
Like the other breakout sessions at this conference, this one highlights a series of presentations on school safety projects funded by NIJ, and we have an amazing panel on "Bullying Prevention and Response." We will be talking here about findings from studies NIJ funded under its school safety initiative.
First, the presenters will share information and talk about their projects.
After the presentations have concluded, you, as the audience, will have an opportunity to ask questions of the presenters.
For that, you can use a Q and A option that is available on the bottom of your screen.
There is also a chat option; please use that only for technical issues.
For questions, please use the Q and A option. So this is, again, breakout session number four, "Bullying Prevention and Response," and I am sincerely honored to have such a great panel presenting their projects today.
First one will be Rich Gilman.
He will be talking about his project, titled "Reducing Youth Violence by Leveraging the Influence of Network Brokers: Preliminary Results of Comprehensive School-Wide Intervention." Second will be Terri Sullivan.
She will be talking about her project titled "Evaluating the Olweus Bullying Prevention Program in US Urban Middle Schools." Third, Thomas Hanson will talk about his randomized impact evaluation of the No Bully System, and Josh Polanin will be talking about "A Systematic Review and Meta-Analysis of Interventions to Decrease Cyberbullying Perpetration and Victimization," so with no further ado, I will give the floor to Rich Gilman.
>> Thank you very much.
I appreciate this.
It's an honor to be here.
We took a different tack from what many of the school violence interventions are doing.
We are leveraging the power of social networks not only to inform but also to help guide a very comprehensive intervention that has now shown promise, and I will show you some of the data that we have collected over the past 3 years. I should also say that we were collecting it right when the pandemic hit, so the data we're going to show you covers about 3 years prior to and a little bit into the pandemic, so you can see how we have leveraged the power of social networks to help not only inform but also address school violence, and, of course, school violence, as we know, is alive and well in many of our schools.
This is the most recent data.
It suggests that, in total, approximately 80 percent of all public schools are recording incidents.
Now, this is up to 2018.
Of course, the pandemic is going to change some of these numbers, but we don't think the problem is going to go away by any means, and I want to keep the percentages of reported incidents in mind as we go forward here. We know that these incidents, even though they have slowly decreased over a period of time, are still high enough to warrant great attention, which is why we are so honored to be working with NIJ to combat this very tough problem.
Of course, we also know from many of our findings that four percent of all high school students are sufficiently fearful of what is happening in their schools that they feel like they want to avoid school, and, of course, that's the last thing that we want.
Here's the issue we are facing with school violence episodes and what's going on in our schools.
It is not isolated.
It is not done in secrecy. In fact, school violence is a social phenomenon in many respects, and yet only 30 percent of incidents are known to school officials and police.
In virtually every case that we know, peers are aware in some way of an impending school violence activity.
Whether it has occurred or will occur, someone knows this.
Now, whether they are motivated to bring that to somebody's attention is a different matter.
Now, our pilot data, published several years ago, suggest that if that information is shared, it's typically shared with a best friend, and, in fact, our data also showed, when we started to do this, that it's very rarely presented to somebody who can do something with it. This is called the bystander effect, and the bystander effect is exactly the mechanism we want to reduce in schools. How we do this is try to figure out how we can create a culture, a system, in which we reduce the bystander effect and prevent this hesitancy, this reticence about disclosing information that could possibly help. By the way, this is not unique to schools.
The bystander effect is becoming a very well-known phenomenon across settings, not only in schools.
Of course, the research started in the 1960s, but the idea of reducing the bystander effect has really taken great strides not only in schools but also in the workplace; we're kind of ahead of the wave, but it is coming, and what we are doing is one way to reduce it.
Now, it's not to say that school interventions have not used the power of peers to help each other out, but oftentimes the limitation up until this point has been that those peers have been working within friendships, almost like a friendship dyad. The issue with that, of course, is that with those kinds of interventions, if you're targeting the best friend to report, oftentimes the best friends themselves are victimized in some way, whether it's bullying or harassment or whatnot, and they themselves are reticent to report it. The way peer reporting systems are used in schools is also quite fragmented.
For example, oftentimes, if we want to use peers, we use teacher nominations, and teachers oftentimes miss the mark when we think about who they're identifying as individuals who could help.
They might miss important factors that can hinder interventions, and I'll give you an example from a district that we work with, something we observed before we even submitted this proposal to NIJ.
It was a teacher-nominated format for a peer-to-peer relationship, for an intervention, and the teachers were nominating students they thought were leaders, but what those students really were was high-profile students, and oftentimes those are not the best in terms of leadership. In fact, in doing so, the rates of violence actually increased because the very mechanism of change was not the right type of individual, so using teachers to nominate can miss the mark. And then if schools use a peer intervention, which is often the case, it's not really embedded within what schools are aware of, which is this multitiered system of support, so what we wanted to do was something very innovative, and that is using the power of social networks.
Now, work on social networks, up to even now, has mostly looked at the correlations between networks and outcomes, and that's great, but we wanted to leverage that information to select the right type of individual to be involved in this peer intervention, and what we're really focusing on is what are called network brokers.
Now, if you think of any set of relationships in schools, they're not fragmented, but they do form groupings, or clusters.
The brokers are the very important and key element because they bridge these cliques or groupings, and so Fiona, as an example here, is seen as a broker, and the importance of a broker is that they navigate and share information across groups.
If you remove Fiona, the broker, then one group oftentimes is not aware of what the other group is doing, and it is the power of that broker that we think can help, or that is what we surmised.
We surmised that recruiting those brokers and embedding them within the larger intervention we were designing might actually help reduce the bystander effect, because it would give individuals in these other groups an option if they see something or overhear something, rather than, you know, going through that reticence of trying to report to an adult themselves.
They would go to a person like Fiona, and what would change as a result of that? And that is the key, so in addition, we wanted to embed those brokers within our larger MTSS model, which is often used in schools, and I'm going to go by it real quickly.
We certainly don't have time to discuss it in detail, but you'll see here how we embedded our brokers in preventative models within this district, in smaller, targeted group settings, and even in what we call tier three, those individualized supports. Beyond that, we also wanted to change the culture of the school, and we did that through various interventions, which I will walk through.
Importantly, we had two specific hypotheses.
Our first hypothesis involves a comparison to matched school districts.
The matching is based on proximity and on the size of the district, and I will walk you through that.
We believe that embedding these peers within this larger intervention is going to significantly lower rates of objective school violence indicators, and here are the two comparison districts that we have.
We call it the intervention district and the comparative districts.
We matched based on grade distributions and, to a degree, socioeconomic data.
We really wanted to get as close as possible, and also to consider the proximity of the districts themselves. All of the districts, not only the intervention district but the other two districts, are located within an hour of a major metropolitan area within the State of Kentucky, but we didn't want another district right next door to the intervention district, so district one is located about 200 miles to the west, and the other district is located about 100 miles to the south. Again, we matched everything as much as we could based on 2016 data. One of the things we wanted to do was determine whether embedding a peer network within this MTSS model would reduce objective rates of school violence, but we also wanted to see what happens within the school, and our goal here is to reduce school violence incidents.
We believe that a reduction in the intervention district will be influenced by social agency.
That is the awareness that things are happening, and not only the awareness but also the motivation to report these incidents.
By reporting the incidents, by bringing them to light, you reduce the incidents, and how you do this, we believe, is mediated through social resiliency variables. One can be aware and motivated, that's where social agency comes in, but taking the actual step of reporting requires some sort of mediator. One is social support: are the people around me supportive of what I'm doing? Another is social connections, and it's more of an absolute.
Do I have connections? And then social influence: do I feel like I can influence others? So this is how we did this, and I'm not going to go through the measures, of course, but there were many, and they are all holding up.
What we did each year for the first 3 years of this study is we targeted third grade, sixth grade and ninth grade, and the reason we chose them is that third grade is right at the beginning stages, right in the middle of elementary school.
Sixth grade is that transition, that first year into middle school, ninth grade, that first year into high school.
What we wanted to do was not only target the third-, sixth- and ninth-graders each year but also follow those who had participated in the intervention the year prior, so now that our third year is up, we have actually collected data on grades three through eleven. The passive consent rate was great, more than 97 percent, which was wonderful because, as we know with school network data, the more participation, the easier it is to identify who the actual brokers are. We identified them not only through self-report but also through peer reports, and we also used objective school data, the school violence indicators you see here, with our demographic data serving as controls. So how did we select the brokers? The brokers are selected through the power of the network itself, using betweenness centrality, which is a measure of how much a student bridges connections: the higher your betweenness centrality, the more you are bridging those groupings. What we actually did is we chose the top 20 percent of each grade's distribution, but it's not just that.
We also wanted to make sure that those we chose were liked.
You can be a broker and not be very liked, so we also made sure we selected those who were liked, those who had very few disliking ratings, in addition to high betweenness centrality, and as you can see, the numbers continue to improve. Our goal here is to have a critical mass of individuals who are involved in the intervention and who have social influence over time.
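To make that selection rule concrete, here is a minimal sketch in Python using networkx. It is only an illustration of the idea described above, not the study's actual code: the function name, the toy friendship edges and dislike counts, and the "at most two dislike nominations" cutoff are assumptions; only the top-20-percent betweenness cutoff comes from the presentation.

```python
# Illustrative sketch: selecting network "brokers" by betweenness centrality,
# keeping the top 20% of the distribution and filtering out disliked students.
import networkx as nx

def select_brokers(friendship_edges, dislike_counts, top_fraction=0.20, max_dislikes=2):
    """friendship_edges: list of (student_a, student_b) peer-nomination ties.
    dislike_counts: dict mapping student -> number of 'dislike' nominations received."""
    G = nx.Graph()
    G.add_edges_from(friendship_edges)

    # Betweenness centrality: how often a student lies on shortest paths
    # between other students, i.e., how much they bridge groups.
    centrality = nx.betweenness_centrality(G)

    # Keep the top 20% of the distribution...
    ranked = sorted(centrality, key=centrality.get, reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))
    candidates = ranked[:cutoff]

    # ...but only those who are also liked (few dislike nominations).
    return [s for s in candidates if dislike_counts.get(s, 0) <= max_dislikes]

# Toy network: two cliques bridged by Fiona, mirroring the example in the talk.
edges = [("Ann", "Ben"), ("Ben", "Cam"), ("Ann", "Cam"),
         ("Dee", "Eli"), ("Eli", "Gil"), ("Dee", "Gil"),
         ("Cam", "Fiona"), ("Fiona", "Dee")]
dislikes = {"Fiona": 0, "Dee": 3}  # unlisted students default to 0
print(select_brokers(edges, dislikes))  # -> ['Fiona'] in this toy network
```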
When these brokers are identified at the beginning of each academic year, they participate in a full-day workshop run by Amanda Nickerson, who is very well-known in bystander-effect and bullying-prevention research.
She is the director of the school violence center at the University at Buffalo, so she's very good, and she would come once a year and also do consultations along with the cohort training.
In addition to this, the broker cohort was trained on the Bully-Proofing Your School program, and they met on average a little over twice a month for the entire year. We made sure that they complied with the protocols, but they also developed strategies to improve the culture of the school itself, an "If you see it, say it" kind of thing, so it wasn't just that they got together and did something.
They did something for the benefit of the whole school to promote the idea that if you're going to say something, you don't need to go to a parent or an adult.
You can simply go to one of us, and they were always identifiable.
They would create shirts, among other things, so in addition to that, we had a school-wide teacher training and also a community advisory council who were also part of this intervention, so it wasn't just targeting one group.
It was involving the entire school together.
The results thus far, given the little time I have left, my apologies: of course, when you're dealing with objective school data, which is derived directly from the state website on school violence, it's just mean score relationships, but as you can see, there's an inverse relationship between the rates of reporting and the rates of reported violence and harassment. What this means, essentially, is that as the rates of reporting go up, the rates of violence and reported harassment go down.
Also, there is this student ratio, and the idea here is, how many students are actually involved in these violent episodes? I should define violence broadly as bullying, harassment, threatening with weapons and the like, so it's not just bullying.
It's beyond that, and we think the smaller the ratio, the less diffuse the violence episodes are, because typically it's maybe one individual, which is a lot better than a large ratio involving more students, which tends to suggest a system-wide problem as opposed to an individual one. And as we see with the incidents here, our ratios, at least at the intervention school, were significantly lower across time than what we were finding in our comparison schools.
More importantly, these are objective events.
What about the percentage of students with events? What about the percentage of students with in-school suspensions? What about the percentage of students with out-of-school suspensions? These were all matched at the beginning, so there were really no differences, but as you can see, in response to the intervention, you see decreases, I shouldn't say "significant" because of how we're looking at this, not only in the number of students with events but also in in-school and out-of-school suspensions.
That's great, so that's objective data, which suggests that it's effective.
There's promise to this, but then when you start looking at peer-reported behavior, that is, how students are reporting on each other over time, you see significant differences in, again, reports of students who are starting fights or are involved in bullying, and you also start to see this disliking tend to go away over time.
Lower scores here mean that the students are starting to kind of respect each other or at least like each other, and one of the things that we're noticing is how the power of these social network groups, these brokers, is really promoting these connections over time.
We also know this.
Controlling for gender, age and the previous year's aggression, ego density negatively predicts aggression.
Ego density refers to the connections within these small cliques: as the density changes and the network becomes more diffuse, you see a decrease in aggression.
What we surmise on this one is that individuals are less likely to succumb to peer pressure.
When you're really in these insular groups, it's very difficult to escape that insularity.
As you become more diffuse, that power of the insularity goes away, and you're more comfortable in reporting school violence.
We also think that there is a lagged effect here.
It doesn't occur right away. Here are our reports across stages, so here is our initial cohort, and as you can see, reports for the entire group start to decrease over time, especially for grade six and grade nine, so it doesn't happen overnight.
We're also seeing this anecdotally, but a lagged effect is starting to emerge in the reports of these behaviors.
We also see a significant, steady increase in the number of peer witnesses, and this is all written in percentages, so in the first year, very few students got involved.
Very few students reported, and even fewer reports involved what we call brokers.
That was right at the beginning of this intervention, but as you can see, this has improved drastically in 2 years.
You have more students reporting and also involving the broker.
Just a couple more slides, and I am almost done here.
My apologies, but here is what we also know.
Social agency, that is, "Am I compelled to do something with this information?", at time one had a direct effect on the reporting of school violence incidents at time three, but that effect was mediated through the degree of social support the students felt and through whether they believed they could influence others. Social connections, at this point at least, did not seem to play an important hand; it was the quality of the support students were receiving from their peers that mattered.
It could have been support from just one person, and also, "Do I feel like I can make a difference?" That had a direct effect on the number of school violence reports or incidents, so we find that there's preliminary support for the intervention through the number of witnesses reporting events, including involving the broker, and the decision to report seems to be mediated by the quality of social support and perceived leadership abilities.
There are a number of things still to do.
Of course, the pandemic has kind of put a halt on collecting some of the objective school data, but we are starting to do that again in March, and I'm happy to answer questions at the end.
I'm going to turn this over to Dr. Sullivan.
>> Thank you very much.
I appreciate that.
Thank you for your presentation, and let's see.
I think you may have to stop sharing your screen.
Thank you.
Let's see if I can -- Here we go.
Today, I'm going to present on an evaluation of the Olweus Bullying Prevention Program, and it was an 8-year study, but before I get started, I wanted to do perhaps the most important slide in my presentation, which is a recognition of the research team that worked on this project so hard for the 8 years, and our project coordinator, Anne Greene, I think is in the audience today, so I wanted to start with a thank-you to our team, and for our project, we're also extremely grateful to the 2,755 students and the 242 teachers who participated in this project, and today, I'm going to talk briefly about the methods we used, about our data analysis.
Then, I'm going to discuss the findings for our student behavioral health outcomes.
Those were reported by the students themselves and their teachers, and then I'm going to take a more in-depth look at the impact of the Olweus Program on school climate using both quantitative and qualitative data. Then I'll close the presentation, since at the end of research you always have more questions, by talking about some of our questions and some of our directions for the future. Our goal in evaluating the Olweus Program was to really look at how it did in terms of increasing safety and enhancing school climate, and this was the program that was selected by our school district.
This was a program they thought was promising for their district at the beginning of the project, and although Olweus, at that point, had been implemented pretty widely in the US, especially in elementary schools, there were relatively few studies that had evaluated its impact in urban middle schools, and the students who participated in the study attended middle schools located in three low-income communities located in the Southeastern United States.
Seventy to 100 percent of the students were eligible for free meals as part of the National School Lunch Program, and in our initial wave of data collection, the mean age of our students was 11.5 for sixth graders, 12.7 for seventh graders and 13.7 for eighth graders.
The majority of students identified themselves as African American, and our percentages ranged from 69 percent to 81 percent across the project years.
And one important thing to know about our project is it was an extension of a CDC-funded Youth Violence Prevention Center of Excellence grant, and that initial grant ran from 2010 to 2015, and then our NIJ grant continued this work from 2015 to 2019, and we evaluated the Olweus Program using a multiple baseline design where the order and intervention start time was randomly assigned for each school, and the intervention began in school A in 2011, in school B in 2012 and in school C in 2015, so we were able to really evaluate -- Once we started with the NIJ grant, we were able to evaluate if the program effects were sustained for the students who had already received the intervention and if they could be replicated among youth who hadn't received the intervention, and these were the youth who were attending school C, where the program began in 2015, and over the course of the 8 years, we collected 29 waves of data with an average of 298 students per wave, and the teachers also completed the student behavior ratings.
Students completed ratings in fall, winter, spring and summer, and we used a planned missingness design where students were assigned to complete two surveys per year in order to avoid student fatigue.
To give a little bit of background on the Olweus program for anyone who is not familiar with the program, it's a school environment intervention that includes several components.
One of these is a school-level component, where there is a school coordinating committee.
There is school training for all staff, and there is selection and implementation of school-wide anti-bullying rules.
There is a classroom component, which is primarily composed of regular class meetings between students and their teachers, and we really focused on administrator, teacher and student feedback the entire time to guide the selection and tailoring of the class meetings.
There are also individual-level components, where school staff are trained to address any potential incidents of bullying.
There is parent involvement, which really is at the heart of the program and which included large family events in the evenings.
It included having Olweus education at meetings such as orientations and back-to-school nights, and having regular communication with parents, and there is also a community-level component.
We had members from the community who sat on the school coordinating committee, and we also had an after-school leadership program that focused on different initiatives in the community that were selected by the students.
This slide just shows an example of how the multiple baseline design worked and sort of shows how we would hypothesize the outcomes of the study, so the key piece of this is that we would expect to start to see intervention effects at or after the point when the intervention began in each of the school communities. Our measures included measures of the frequency of bullying behaviors and experiences.
We looked at physical, relational, verbal aggression and victimization.
We looked at teacher ratings of student behavior, the student behavior reports.
We also looked at cyber aggression and victimization and at school climate measures using the student reports, and the school climate measures included a positive peer interaction scale and a teacher support scale, which measured students' perceptions of change in their entire school: how did relationships among peers and teacher support change when I look at and think about my whole school? To summarize some of the results for the teacher ratings of student behavior, we found decreases in physical, relational and verbal aggression that began in the first year and continued through the subsequent years of the intervention.
The same was true for physical victimization, and for relational and verbal victimization, we found decreases that started in the second and subsequent years. For the teacher report of student behavior, we did find significant differences by grade and by sex: for verbal aggression and for relational aggression and victimization, effects emerged for girls during the first year of the intervention, but they didn't emerge for boys until the third year of the intervention. We went back to some research, Nicki Crick's research, which shows that girls may perceive relational, kind of non-physical, aggression as more hurtful and harmful than physical aggression and may be more likely to tell their teachers about observed incidents of non-physical aggression and to intervene in these cases. Across our ratings, the effects were generally stronger for seventh- and eighth-graders as compared to sixth-graders, and we felt like this may be due to the cumulative effect of the intervention across multiple years, as sixth-graders would only have the experience of that 1 year of the Olweus Program, but it really highlighted to us a need to focus more on transition programming. For the students' self-report, we found decreases in relational and cyber aggression in the third and subsequent intervention years.
We did not find any significant changes in the rates of physical aggression.
We found decreases in student reports of in-person victimization that were found across all intervention years and decreases in cyber victimization that were found in the second and subsequent years of the intervention.
The self-report data didn't differ across sex or grade, and one of the differences we noted between our teacher ratings and our student ratings is that the students were reporting more globally on their behavior.
The measures were really asking them about these behaviors across different contexts, and it could be school.
It could be home.
It could be the neighborhood, whereas the teachers' focus was really on what is happening at the school, so we felt like some of the differential results may lie in the differences in perspective between the teachers and the students. You know, the Olweus program has a number of components to support students in school, and we get into how well those are really generalizing to other contexts. If we look at the focus on school climate, we didn't find any significant findings for school climate.
We didn't find any changes in students' reports of peer interactions or students' reports of teacher support, so we wanted to think about that and look at it more in depth, and we felt like one issue might be that the measure we used, the Inventory of School Climate, tested the generalizability of the Olweus program to more general aspects of interpersonal relationships.
For example, for positive peer interactions, one item was, "The students get to know each other really well," and I think it's important to know the reach of the intervention, but we did recognize that some of the questions were more general in nature.
If you look at other studies of the Olweus program and evaluations, significant intervention effects have been found for student and teacher behaviors that were more closely related to the intervention components like increasing the frequency of students who may help victims or teachers who may respond to potential bullying incidents.
So, you know, when we looked at that, we thought that could be one area, but it really led us to the question of, what are the underlying mechanisms that maybe we're not getting at? So we turned to qualitative methods to consider this more in depth, and we conducted focus groups with 26 teachers and individual interviews with eight administrators and eight school coordinating committee members, and those were done in the spring and summer of 2018.
Our participants' average age was 47 years.
Seventy-eight percent of participants identified themselves as Black or African American.
The remaining identified as white.
Participants were asked to describe ways in which the Olweus program helped to foster positive student-student and student-teacher relationships, and we used a thematic analysis where we went through open, consensus, axial and selective coding, and when we asked school staff, "How did the Olweus program help the development of positive student-student relationships?" four themes were identified, and those included, it really offered opportunities for informal engagement and discussion.
It facilitated teamwork and collaboration.
It increased awareness of bullying and bullying prevention through instruction, and it led to behavior change, and so in the interest of time, I'm going to give two examples, but I can talk about more in questions later.
One idea was that it really provided an informal space for engagement and discussion, which can be harder to find in a middle-school setting, and one teacher mentioned that, "A student was brave enough to say, 'Yeah, I was being bullied,' and then I could tell, like, others were chiming in.
It was almost like, because that person took that step forward, then others kind of followed along, and, you know, it was a real honest conversation," and then looking at the ways that it facilitated teamwork and collaboration, one quote was that, "It gave them a platform to listen.
They were able to listen to each other, and even if they didn't totally agree, to express themselves in different ways and, you know, to appreciate that." And then we also looked at, how did the Olweus program promote the development of positive student-teacher relationships? Five themes were identified for this, which included providing opportunities for consistent, informal discussion, kind of similar to the student-student theme; increasing student-teacher rapport, relatability and ability to connect with each other; facilitating student-teacher collaboration on events and projects; improving interactions between students and teachers; and fostering student-teacher trust. Again, just to give a couple of examples, for increasing student-teacher rapport, relatability and ability to connect with each other, one teacher said, "I think one of the things that it allowed us to do was to share who we are as just a person, what we've grown up with, the things that we went through as young children, as teenagers, you know, or whatever, to get them to understand that we weren't going through the same stuff alone," and then we looked at fostering student-staff trust.
A teacher noted, "I think teachers opened up as professionally as they could, and that allowed students to feel like they could open up and talk about their concerns or go to a teacher or specific teachers if something was bothering them, whether that be in a verbal or written way." And so kind of thinking about the qualitative findings and summarizing all of the different findings, we learned that the Olweus program allowed teachers to create an informal and a safe space where students felt comfortable sharing experiences related to bullying behaviors and listening to each other, that the shared activities related to the program facilitated teamwork and collaboration.
Teachers were able to tailor their discussions.
There was a flexibility in the program from the beginning where teachers could tailor their discussions in the manner they felt was most meaningful and relevant to students, and we felt like this flexibility really increased student comfort in engaging in discussions, and the teachers really established a space where students felt safe and confident that their concerns would receive necessary attention, and they were able to see their teachers in a new light kind of outside of their academic role, which enhanced student-teacher relationships.
When I think about future directions, I kind of had four things that I thought were really needed as we look to the future.
One was understanding the generalizability of the intervention.
You know, will our findings generalize to other urban middle schools when we look at the quantitative data? When we looked at the qualitative data, there were a lot of aspects of school climate that came out, such as respect for diversity, you know, community relationships and involvement, student voice and involvement, and while our findings really alluded to these, there needs to be more targeted focus on asking these specific questions and really understanding these domains better.
I think when we think of large middle schools and middle-school settings in the US, we need to really understand the reach of the Olweus program with respect to school climate.
Some school-climate measures really focus in on individual changes like, how did your individual relationships change with students you know, with teachers you interact with, and I wonder if we had taken that route if we would've had different findings when we looked at school climate, and one big limitation to our data, of course, is that we need student voice.
We focused in on school faculty.
I think that's a very valuable perspective, but we also need to have student voice represented.
And with that, I will stop sharing, and I will turn things over to Dr. Hanson.
>> Thank you.
Thank you, and hello, everyone.
Good afternoon.
I am -- Here we go.
Hopefully that works.
So today I'm going to describe the evaluation of the No Bully System.
The No Bully System was founded or developed in 2003 in San Francisco.
At the time the study began, the developer, an organization called No Bully, just to keep it distinguished from the system, was already implementing the No Bully System in 300 predominantly elementary schools in the San Francisco Bay Area, Los Angeles, Boston, Hawaii, Mexico and Hong Kong.
There had been two prior evaluation studies providing preliminary evidence of the promise of the No Bully System.
These studies were small, and they focused really on just participants in the program, which I'll get into more in a little bit, but they found roughly 80 percent of the students who were targets of bullying experienced less bullying once the intervention was complete.
So the No Bully System is a set of interventions designed to activate adult and peer support systems within the school for the targets of bullying. Like other interventions, there's a schoolwide component that focuses on antibullying policies, making sure that they're in place, and that provides training and works with school leadership teams.
It also trains all staff to prevent and interrupt student harassment and bullying and, when necessary, to refer students to a solution team.
The core component of No Bully is the solution team, and this is the innovative component of the program.
In the solution team, a trained adult facilitator or solution coach brings together a group of six to eight students, and so this is referred to as the solution team, and those students include sort of the perpetrator of bullying or perpetrators of bullying, bystanders and prosocial peers, and through a series of three brief meetings, the solution coach leads the team to end the bullying of one of their peers by cultivating kind of empathy and developing peer-driven solutions, so the idea is to activate the peer group to intervene in the bullying behavior.
Of note, the target of the bullying is not included in these meetings, though she or he is invited to attend the final session, if they want to, to share her or his experience and acknowledge the actions of the team members.
The entire process, the three meetings plus the initial assessment, selection of a team and target check-ins and follow-up takes about 2 to 2 1/2 hours to complete.
This next slide just shows the training provided to school staff to implement No Bully.
All school staff participate in a 3-hour foundational training.
School leadership teams receive 2-hour coaching sessions, and the most intensive training goes to the solution coaches, which is really tricky because the solution coaches are facilitators, and it takes some nimbleness and skill to sort of manage this team, but they receive 1 1/2 days of training.
Okay.
The study, it used a cluster-randomized experimental design involving 24 elementary schools.
The study took place in Oakland Unified School District.
Prior to randomization of the schools to condition, we took into account not only school size and student demographics, but we also grouped schools that were implementing similar programs into the same strata, so the sample was stratified according to the number and types of similar programs schools were implementing, including PBIS, social-emotional learning programs, restorative practices, or a combination of two or more program types.
Due to constant turnover in the district, we took into account leadership turnover by also stratifying schools into a special group that just had new principals.
The study focused on three research questions.
First, do targets of bullying who are the focus of solution teams experience reductions in victimization and improvement in their perceptions of safety at school? So this just focuses on the targets that participate in the solution teams that are the target of the solution team.
Second, we -- The next two questions were examined experimentally.
Does the No Bully System reduce bullying perpetration and victimization for students at heightened risk of bullying involvement? And third, for all students, does the No Bully System improve school safety, peer support and other indicators of school climate? Let me provide a little context.
Oakland Unified School District at the time was experiencing extremely high turnover among principals and school staff during the study period.
During the 3-year study period, that district actually went through four superintendent changes, which is amazing.
One-third of the principals turned over each year. We used a treatment/wait-list control design, where the schools assigned to the treatment group received the program for 2 years and those assigned to the wait list received the program afterwards, but in four of the treatment schools, teacher turnover ranged from 25 to 50 percent per year.
Also, implementation varied.
It started off slowly.
Maintaining a schoolwide focus was very difficult, and three schools implemented three or fewer solution teams, so we had mixed implementation.
This next graph basically shows sort of, you know, the number of solution teams and sort of the grade level of the solution teams.
Eighty-three solution teams were implemented during the study period.
We found more solution teams were conducted in the second year as the solution coaches gained more experience implementing them.
They were implemented in all grades, though most teams were conducted at the upper grades.
Well, that's pretty much it.
More girls than boys participated in the solution teams as targets of bullying.
Okay, so the first question: Do targets of bullying who are the focus of solution teams experience reductions in victimization and improvement in their perceptions of safety at school? There's a solution-team log that's used by the solution coach to keep records of what's happening during these team meetings, and the solution coach also checks in with the target and asks three questions about bullying frequency, severity of bullying [audio drop], so these questions are asked prior to the start of the solution team, after the first solution-team meeting, at completion of the solution-team process and at follow-up.
Okay, so based on these responses, you can see that this graph shows the average number of days the target reported being bullied, and I guess the first thing to notice is that there are different segments of lines because, in some cases, we could collect four waves of data, and in some cases, we could only collect two. But first, if we focus on the baseline here, this is the number of days in a week that the target of the bullying reported being bullied, so these are really high levels, right? These are 5 days.
That's a lot of days in the week, and we do see a pattern: at the next time point, those days go down to three; at time three, it goes down to 1.2; and then, at time four, for those who have four waves of data, it goes down to about 1 day a week, which is still too high, but it's substantially lower than it was at baseline.
There's also a measure of the severity of bullying.
We can see that the trend does go down over time, and we see improvements in perceived safety at school.
Oops.
Okay, so that's the first question, so it appears, based on the solution-team logs, that bullying of the target is reduced by participating in the solution team.
So the next question asks, "Do the targets of bullying who participate in solution teams experience reductions in victimization and improvement" -- Oops, that's not -- That's question one.
Sorry.
Oh, well, this is the answer, yes, but -- Okay, so this is -- The answer to question one is, is there an improvement in the situation for targets of bullying? And the answer is yes, but we don't know what would've happened to students who were not in the solution teams that were targets of bullying.
There's no counterfactual.
We just followed these kids that were the targets that were in the solution teams [audio drop].
Research question two: Does the No Bully System reduce bullying perpetration and victimization for students at heightened risk of bullying involvement? This was the experimental impact question, and what we had to do for this question, since we were focusing on the targets of bullying, was to identify likely targets of bullying prior to implementation, so we administered a survey prior to implementation and asked all students in grades three to five to report on their level of bullying, both victimization and perpetration, and we used the Peer Interactions in Primary School (PIPS) Questionnaire to assess the level of bullying victimization and perpetration.
So we identified two distinct samples using baseline data: those at heightened risk of victimization, meaning those who were in the top five percent at baseline on the PIPS victimization measure, and those at heightened risk of perpetration, those in the top five percent at baseline on perpetration.
When we estimated our models, we found that, for those at the very highest risk of victimization at baseline, victimization levels went down by 0.182, which corresponds to an effect size of 0.37, so those kids in No Bully schools who were identified as at high risk of being victimized experienced lower levels of victimization in subsequent years.
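As a rough illustration of how those two numbers relate, assuming the 0.37 is a standardized mean difference (the presentation does not say exactly which standardizer was used), the arithmetic would be:

\[
d = \frac{\Delta}{SD} \quad\Longrightarrow\quad 0.37 \approx \frac{0.182}{SD} \quad\Longrightarrow\quad SD \approx 0.49,
\]

i.e., the 0.182-point decline on the victimization scale amounts to a bit over a third of a standard deviation.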
Also note that, for perpetration, no effects were apparent.
So does NBS reduce bullying perpetration and victimization for students at heightened risk of bullying involvement? Well, yes for victimization sort of, and for perpetration, no.
I say, "Sort of," because when we linked the survey data to the solution-team logs, we -- There were actually very few students who reported being at high risk for victimization where actually they were not the same students that participated in the solution teams, so likely any effect of the solution teams was indirect based on our survey measures.
Also, you know, one limitation of the study is that how we defined high risk really matters.
We could use different cut points for high risk, and the results would differ.
We did choose, a priori, the top five percent, and that mattered.
If we had chosen the top 20 percent, we would've not observed any impacts.
Finally, research question three, we asked, "For all students, does the No Bully System improve school safety, peer support and other indicators of school climate?" We examined outcomes in three areas: safety, school supports, empathy and student voice.
And in no case did we find a statistically significant impact of the program on these outcomes, so we did not find a generalized impact of the program on these measures.
There you have it.
So in summary, we have to acknowledge again that the intervention took place in a very unstable environment, in a highly impacted urban school district, but bullying victimization declined and safety perceptions increased among those victims or targets of bullying who were targeted by the No Bully solution teams. The experimental impacts on victimization among students at high risk of being victimized were present and showed a reduction in victimization, and we found no schoolwide impacts on school safety, peer support or other indicators of school climate.
Appreciations, of course, go to NIJ, No Bully, the Oakland Unified School District and so many WestEd staff, too many to mention.
Thank you very much.
And with that, I will pass the baton to Joshua.
>> Thanks, Tom.
Here we go.
Thanks, Tom, and thanks to all the copresenters.
It's been a wonderful panel so far, and I'll try to keep it to the 15 minutes or so, so we have some time for discussion at the end, and thank you all for joining us today.
I'm here to present a project we're calling "A Systematic Review and Meta-Analysis of Interventions to Decrease Cyberbullying Perpetration and Victimization." If you missed it at the beginning, my name is Josh Polanin.
I'm a principal researcher at the American Institutes for Research and, of course, big thanks to NIJ.
This was a grant that was awarded a few years back, and we're awfully grateful for the funding.
It's been a ton of fun to work on it.
One more note before I get started, just this is from me, not from NIJ or from any of my coauthors, but last summer at these presentations, I started saying some words of support for the Black Lives Matter movement, and I see no reason why I should stop that support with the turn of the calendar year so just wanted to publicly state my support of the Black Lives Matter movement and all of the folks who are in that fight as well.
Okay.
Thank you.
And one more thank you before we get started, like all of the rest of the project teams, we have a huge project team, and I'm incredibly grateful for everybody's support.
Dorothy Espelage, who will be talking a couple of times tomorrow, was the co-PI on this project, and we had a big team at UNC as well and at University of Florida.
At Development Services Group, Jen Grotpeter was the research coordinator and helped us get through many troubles along the way, and then at AIR, Laura Michaelson did a wonderful job of conducting a lot of the analyses for us, so it was a huge team, and there are probably 10 other people not listed here who helped with screening and coding, so thanks to all of them.
Okay, so what did we actually do, and what are we actually talking about? So we're talking about cyberbullying, and of course when we started this project a few years ago, we knew cyberbullying was an issue.
We certainly didn't realize we were all going to be headed inside in front of our computers, and we certainly didn't realize that students would be in that same situation, so we don't have completely accurate data, but we do know, at least a few years ago, 91 percent of teenagers were accessing the Internet on a mobile device daily.
Four out of five said, "Almost constantly." That's almost assuredly gone through the roof in the last year or so, and of course, using the Internet to connect with your peers can often lead to cyberbullying, and we're talking about two different types of cyberbullying here, both perpetration and victimization.
Perpetration is the act of inflicting negative or damaging harassment through the Internet, and victimization, receiving it, includes a whole host of experiences like being excluded from sharing information or from interacting online, having your passwords put out there and, of course, receiving threatening messages.
In the most recent NCES data, 15 percent of students report being a victim of cyberbullying in the last month.
That's almost assuredly gone up in the last year as well.
We actually looked for some more recent data and haven't found anything great over the last 6 months, so if you do happen to know of a study that looks specifically at cyberbullying on a large scale, we'd be interested in seeing that information.
And then for those students who are victimized, it leads to anxiety and depression and lower academic achievement and suicide ideation and a whole host of other issues, so it's obviously an important problem to tackle.
And so what have researchers done to tackle it? So it's a global effort led in part by the US, but also it's really a worldwide issue.
In fact, in our study, we found many evaluations that were conducted in Europe, in Australia, in Korea and in Taiwan, as well as in the US and Canada, and in fact, I think you'll see in a minute that more than half of the studies actually came from non-US samples, so it's really a global effort.
And there's a whole host of programming, and I'm going to talk about the characteristics of that programming in a minute, but they generally take two different forms.
Programs can either directly target cyberbullying, where that's the focus of the program, or address cyberbullying tangentially, as part of a whole host of other issues they're trying to tackle, and there have been a lot of these evaluations.
We'll talk about how many in a minute, and because there have been a number of evaluations, there have been previous meta-analyses conducted on this topic.
There's actually one that goes all the way back to 2011, which was a very early meta-analysis on this topic and only included a few studies.
That one is, of course, out of date, and they didn't necessarily conduct a lot of moderator analyses.
Some meta-analyses have been conducted more recently but used slightly older methodology, and there's been a whole host of meta-analyses conducted on the topic of cyberbullying that didn't necessarily focus on intervention studies themselves.
So certainly when we wrote the grant proposal, and even as of now, there's been other meta-analyses conducted, but we think ours sort of fits the niche of being up to date while also using some modern methods that we'll talk about in a minute.
Okay, so jumping right into the research questions, we have three main research questions and then one sort of sub-research question. The first one is just, "Are school-based prevention programs effective at decreasing perpetration and victimization?" And so we took a really wide lens here, and we wanted to be as inclusive as possible. Then sort of a subquestion within that is, "Are those programs just as effective, or more or less effective, at reducing traditional in-person bullying perpetration and victimization?"
Then a second question was, "What were the programs that were implemented? What were their core components, and how did that map onto a typology?" And so we coded lots of different aspects of the studies' programs and tried to map out, "Okay, this program did this, and this program did that," sort of, "Where do they fit in a larger typology of those components?"
And then we did a whole host of both confirmatory and exploratory moderator analyses to try to determine, "Were some programs more effective than others based on sample characteristics or program characteristics and a host of other variables as well?" So that's a brief introduction, but I wanted to skip right into the methods and the results, so I'm going to go through this briefly.
We created a review protocol, and we tried our darndest to make all of the data that we collected as open as possible.
We created an Open Science Framework page, and if you go to this link, you'll be able to see some of the data that we collected as well as the review protocol.
It's only some of the data because the paper is still under peer review, but if you're interested in seeing more of the details, take a look at that, or just search for it on OSF.
Our inclusion criteria, I've already run through most of this, but I'll just repeat some of it briefly.
Studies had to include K-through-12-aged students in a school setting and had to measure cyberbullying perpetration or victimization.
They had to use a two-group design, which could be random or nonrandom, with a business-as-usual comparison group.
They had to be published on or after 1995, because that's about when cyberbullying became a thing with the Internet, and they could be written in any of several languages.
We searched widely for different publication types, and it could be any type of program.
Okay.
You'll have to trust me that we did quite a comprehensive job of searching the literature.
We've screened the literature using some best-practice methods that we've been developing over the last few years.
We created a codebook and a relational database to collect all of the information.
All of it has been verified and double-checked, and then we did several interesting analysis pieces: we used the latest meta-analytic methods, tried to account for effect-size dependency, and conducted a bunch of moderator analyses, so lots of fun stuff there, but not as fun as the results.
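For readers less familiar with the mechanics behind that summary, here is a minimal, purely illustrative sketch of one standard way effect sizes get pooled in a meta-analysis (a basic DerSimonian-Laird random-effects model). The effect sizes and variances below are made up, and the project itself used more modern estimators that also handle effect-size dependency, so this is a sketch of the general idea, not the study's actual code.

```python
# Illustrative only: basic DerSimonian-Laird random-effects pooling of
# standardized mean differences. Values below are hypothetical.
import numpy as np

def random_effects_pool(g, v):
    """Pool effect sizes g with sampling variances v (DerSimonian-Laird)."""
    g, v = np.asarray(g, float), np.asarray(v, float)
    w = 1.0 / v                                 # inverse-variance weights
    g_fe = np.sum(w * g) / np.sum(w)            # fixed-effect mean
    q = np.sum(w * (g - g_fe) ** 2)             # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(g) - 1)) / c)     # between-study variance
    w_re = 1.0 / (v + tau2)                     # random-effects weights
    g_re = np.sum(w_re * g) / np.sum(w_re)      # pooled random-effects mean
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return g_re, se_re, tau2

# Hypothetical study-level effects (negative = less cyberbullying after intervention)
g = [-0.25, -0.10, -0.30, -0.05, -0.20]
v = [0.010, 0.015, 0.020, 0.012, 0.018]
print(random_effects_pool(g, v))
```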
Okay, so reset for a second.
How many studies did we find? What was the process? So it's a little bit blurry.
I apologize for the resolution, but we started with a really wide search, and we first identified a little more than 11,000 citations.
We screened those at the abstract level and reduced that down to around 500 for full-text screening, and then, based on another round of screening, we identified 73 reports, so 73 individual PDFs that were included in our review, and those resulted in 50 different studies included in our meta-analysis.
So within those 50 studies, we tried to build a typology of the characteristics that were inherent in the programs.
So what we did was, we said, "Okay. What are the different," what we're calling, "core components that made up these programs?" So things like participating in some sort of set curriculum, participating in some professional development, having videos or some sort of skill-building activities, and we took all of these different characteristics, and we said, "Okay.
Could we group them somehow? Does it make sense to sort of group them into some sort of category?" And then that's what we've done here.
So we actually tried to do some fancier analyses and some latent-class analyses and some factor-analysis stuff, but that didn't really work out, so we went back to just sort of looking at all of the different characteristics and placing them into the component categories, and this is the typology we've come up with.
We've got seven different component categories here, and then these percentages are the percent of studies that included one of these component categories.
So you see all the way at the top, a large majority, in fact more than 80 percent of the studies included some sort of skill-building activity within it, and at the low end of the scale, less than 20 percent of studies included some sort of targeted response or therapy, and you can see the sort of range in between there.
We've also written a paper that we've submitted to a journal, and this typology is part of that publication. We hope it can help future program developers know where to put their efforts, and so perhaps we're in good shape on the skill-building side of programs and maybe need to focus more on the training or the school-climate pieces going forward.
Here are the characteristics of the studies.
I think I've already touched on the most important thing, which is the location, so only 18 of the 50 studies were conducted in the US.
I failed to mention that there were 45,000 participants across all of these studies, and you'll see we collected quite a few unpublished studies.
About 76 percent of studies targeted cyberbullying specifically, so about a quarter did not. The rest of this is interesting, but not as interesting as this next slide.
So what we have here are the results from the meta-analysis, and the very top row, or the very top two rows, are probably the ones of most interest.
The top row says that there were 44 studies and 96 effect sizes, so there are multiple effect sizes per study, that looked at the effects of interventions on cyberbullying perpetration, and if I can move -- I'm not going to be able to, so I'm going to do this by memory.
The average effect size says those programs reduced cyberbullying perpetration by an effect size of 0.18, and if you go all the way over to the right, there's a column I can't see, so I'm going to have to do this from memory, but it says PPI, and that's the probability of positive impact. That's a relatively new translational effect size, and I think it's 76 percent here.
We would expect that if you randomly selected a study and implemented it, there'd be a probability of positive impact of 76 percent, meaning that 76 percent of the time that randomly selected program would decrease cyberbullying perpetration, so that's a pretty good sign. Similar results were found for cyberbullying victimization, although slightly smaller, and then there's also the answer to our sort of sub-research question.
It's about similar effects for in-person bullying perpetration and victimization.
So that's promising effects for sure.
Maybe not as large as we'd hoped, but still promising that if you implement a program that you'll see some reduction in cyberbullying.
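To make the "probability of positive impact" idea from a moment ago concrete, here is a hedged sketch of one way such a metric could be calculated: treat the true effects of future implementations as roughly normal around the pooled mean and ask how often they land on the beneficial side of zero. The pooled effect of -0.18 comes from the talk; the spread value is hypothetical, chosen only to show the arithmetic, and this is not necessarily the exact method the study used.

```python
# Hedged sketch of a PPI-style calculation: the chance that a future
# implementation's true effect falls on the beneficial (negative) side of zero,
# assuming those effects are roughly normal around the pooled mean.
from scipy.stats import norm

pooled_g = -0.18        # pooled perpetration effect mentioned in the talk
prediction_sd = 0.26    # hypothetical spread of true effects across implementations

ppi = norm.cdf((0.0 - pooled_g) / prediction_sd)   # P(true effect < 0)
print(f"PPI ~= {ppi:.0%}")                         # roughly 76% with these illustrative numbers
```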
So then we took these results and we said, "Okay, are there characteristics of the programs or the studies that would help explain the variation?" Because there is some variation in the effectiveness across studies. If you look at the far-right-hand column, what you'll see is that none of our confirmatory moderator analyses, and each one of these rows is a different moderator analysis, explained any of the variation. We tested things like the country of origin, the focus of the program, the timepoint, the effect-size type, the percentage of males, and the percentage of nonwhite students, all for cyberbullying perpetration here.
And none of the moderators proved to have a statistically significant impact.
Unfortunately, the same story is true for cyberbullying victimization, and we tested, again, the same moderator analyses and didn't find that they explained any of the variation.
We also, though it's not shown here, conducted a wide-ranging set of exploratory meta-regression analyses where we included a number of variables in the model and tried to explain some of the variation.
We had a little bit more success with explaining that variation for those analyses, but I'll skip over those for now.
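As a rough illustration of what a meta-regression moderator analysis looks like mechanically, the sketch below regresses hypothetical effect sizes on two made-up study-level moderators using inverse-variance weights. It is not the project's actual model, which handled dependent effect sizes and a larger set of covariates.

```python
# Illustrative sketch of a simple weighted meta-regression with hypothetical data.
import numpy as np
import statsmodels.api as sm

g = np.array([-0.25, -0.10, -0.30, -0.05, -0.20])    # effect sizes
v = np.array([0.010, 0.015, 0.020, 0.012, 0.018])    # sampling variances
targeted = np.array([1, 0, 1, 0, 1])                  # 1 = program targeted cyberbullying
pct_male = np.array([0.48, 0.52, 0.50, 0.47, 0.51])   # sample composition

X = sm.add_constant(np.column_stack([targeted, pct_male]))
model = sm.WLS(g, X, weights=1.0 / v).fit()           # inverse-variance weighting
print(model.params)                                   # intercept and moderator coefficients
```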
Probably the most interesting finding, and one that we're still working on and thinking about, is that for bullying perpetration and victimization, we actually saw a big difference in effectiveness between programs that targeted cyberbullying and programs that didn't.
And so you can see here in that top section that the programs that targeted cyberbullying had a larger effect than programs that didn't target cyberbullying in terms of their bullying-perpetration reduction, and the same thing for victimization.
So that's an exploratory result, but it's an interesting one nonetheless.
Okay, and to wrap up here, one quick limitation: of course, all of these studies were implemented before the global COVID-19 pandemic, so it's hard to completely generalize these results, and who knows what would happen if you tried to implement one of these in a completely online model? There's probably still some generalization to be had, though, and we do have some good news, which is that the programs did seem to be effective. The last thing to note, like I said before, is that a lot of programs address the skill building and the psychoeducation, and we may need to start focusing a little bit more on some more targeted responses or some more training, and that's it.
Thanks for participating and listening.
>> Thank you.
Thank you very much.
What a plethora of information here.
I appreciate all our panelists for presenting and providing us with this important information.
It is evident that these projects are essential to producing scientific evidence on bullying prevention and response.
We still have some time left for Q and As, and there are a couple of questions in our Q and A box.
In the meantime, I do encourage the audience to type your questions in, and we will answer as many as we can.
So I'm going to start, and I think these couple questions came in when Rich was presenting, so, Rich, to you, "What is passive parent consent?" >> Passive parent consent means that the parent has to sign for their child not to participate as opposed to active consent, so what you typically get through that is a higher consent rate, and, of course, this is all passed through the IRB, so that's what passive consent is.
>> Thank you, and the other question also came in during your presentation, Rich, so that's again to you.
"Any negative impact on the brokers?" >> No.
Actually, the opposite.
We see a positive benefit to the brokers, and that's our subsequent -- Unfortunately, with 15 minutes, I didn't quite have enough, but I'm happy to send that information to the speaker.
They could just reach out to me directly, and I can show you the table, but in terms of our findings, it actually has benefit.
We see higher mean scores on our prosocial measures than we do on a matched control group.
>> Thank you.
Thank you very much.
Another question comes to Dr. Sullivan, and the question is, "What were the effect sizes for bullying outcomes?" >> That is a great question, and I would actually -- I've been trying to minimize my presentation since I saw that question come in, but I'll be glad to send you those effect sizes.
I just need to be able to pull it up.
>> All right.
Terrific, and just before we move to another question, I want to remind the audience that for any additional information, you are welcome to join our discussion board.
That is available on our conference website under the discussion tab, and you can either start your own discussion or ask questions directly when it comes to this particular presentation or any other.
So, all right.
So we have another question, and this one, I guess, is to all of you, so whoever wants to jump in, please feel free to do so.
"Particularly for No Bully, did you find any" -- Maybe that's specific to someone.
I'm not sure to whom.
"Did you find any evidence of any iatrogenic effects, particularly in regard to retaliation following the conferences? Also, how were prosocial peers identified?" >> Right, so I'll take that.
Right, so I think the idea is that, "Did we find sort of the participation in the solution teams or the program actually increasing bullying perpetration or victimization?" And no, we did not.
I mean, when we estimate the effect sizes for perpetration, there is a positive direction of the effect.
It's, like, 0.10, but it was not statistically significant.
We found no anecdotal evidence, or evidence in the logs, of that either.
For prosocial peers, the solution coaches generally did the identifying, because they tended to be teachers, and they would select the members of the solution team, so there was no formal way of identifying them based on nomination.
>> Thank you very much.
So if you have any other questions, we can still wait a moment.
In the meantime, I'm going to put our panelists on the spot and ask for any concluding thoughts that you might have on bullying prevention and response for the audience, and I'm not going to pick anyone.
Just any concluding thoughts that you would like to share with the audience on bullying prevention and response.
>> Well, just to put something out there, I found it interesting that through Dr. Sullivan's work and our work with No Bully and other work that I've seen, they're really not detecting generalized effects on school climate.
They're more sort of -- And so our findings were similar in that regard.
Just a little note.
>> No, whereas I think for us we did find some evidence.
Of course, this is still going on, but it did seem to improve the culture. There has to be a concerted effort, though, not only by the intervention but by the entire school or the entire district really buying into this, and we used the power of the brokers in order to do that. An often overlooked component when we think about anti-school-violence efforts, including bullying efforts, is pinpointing specific social influencers within the school, and this could be an opportunity, which we will take, to conduct future interventions on the nature and the quality of these brokers over time.
>> And I had some similar thoughts, you know, turning back to the generalized effects on school climate and some of the differences in our findings, and really trying to map that onto what a large middle school looks like.
And as a sixth grader or a seventh grader or an eighth grader, how much am I going to know about my entire school and kind of where's the reach and where are the mechanisms, you know, of those effects? And for us, we're seeing a lot in the teachers and what the teachers are doing and their responsiveness and their ability to just create these spaces where the kids can really interact and really explore some of these issues.
>> All right.
While you were talking and giving us your comments, a couple more questions came in.
So first one is to Dr. Gilman.
"I'm curious about how the social-network-based program reaches those students who are, at best, on the fringes of a school social network: ostracized, neglected loners, however they are conceptualized." >> That's a really great question, and that's my love: analyzing those who are on the outer fringes of the network, which is very important.
We're just starting to analyze these data now, but using the network, you know where every student sits along that network, including those who have very few connections with others.
Those students are considered the ostracized; that's the term we use.
Anecdotally, it appears that, as part of the reporting, there is a pretty sizable population of ostracized students who have decided to report to the brokers.
They would not do it individually.
They would not do it directly, but our data suggests that they are reaching out to those brokers.
Even if they are ostracized, they still have connections to that broker, and from what we know, they're a little more comfortable doing that.
Now, that's anecdotally.
Empirically, we're still looking at the data right now.
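For readers wondering how "brokers" and "ostracized" students might be operationalized in a friendship network, here is a small, hypothetical sketch using networkx. The names, edges, and thresholds are invented for illustration and are not the project's actual measures or pipeline.

```python
# Illustrative only: flag broker-like students (connecting peers who are not
# connected to each other) and students with very few or no ties.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Ava", "Ben"), ("Ava", "Cal"), ("Ava", "Dee"),   # Ava links peers who aren't linked
    ("Ben", "Eli"), ("Cal", "Fay"), ("Gus", "Ava"),
])
G.add_node("Ida")                                     # a student with no ties at all

betweenness = nx.betweenness_centrality(G)            # high when spanning otherwise-unconnected peers
brokers = [n for n, b in betweenness.items() if b > 0.3]      # hypothetical cutoff
few_ties = [n for n in G.nodes if G.degree(n) <= 1]           # very few or no connections

print("broker-like:", brokers)
print("few or no ties:", few_ties)
```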
>> Thank you very much.
There's another question.
They keep popping on us.
That's good.
"Were there common characteristics of the brokers?" >> You know, that's interesting.
Of course, by nature, when we're identifying brokers, we are already distinguishing them from others.
Right? They are the ones that are already pretty centrally located within the network, but they're also spanning those unique cliques, so just by the nature of how we are identifying these brokers, they are more liked.
Not only that, they are less disliked, so that's important, but in spite of this -- I'll show you some data that I didn't have time to show you, and it'll be real quick, but that might also help.
Let me go ahead and share my screen as soon as I get here.
Oh, goodness gracious.
Technology is great until you forget how to use it.
So these are our brokers.
This is cohort one, cohort two, cohort three.
That's where they are.
Pre-intervention, there are no differences between these cohorts and a matched sample.
That's a sample matched on age and gender, and what we're looking at here is their social support, how they see their social support and their influence, and there are no changes.
Not only that, but their ability to identify bullying and their motivation to do something about it do not change at the end of the first year, but then you start seeing these lagged effects through time, and we suspect that's in response to the intervention.
Now you start seeing a difference in some of these qualities or at least how they see themselves and their ability to influence others.
That seems to be a function of being part or exposure to the intervention.
Of course, we've still got plenty to look at, but that's the gist: no differences at the beginning, but differences emerge over time.
>> Great.
Thank you, and the last question that I have in our queue asks this: "Did any of the studies disaggregate to understand not only if the programs worked but for whom? For example, do they work equally well for BIPOC and LGBTQ+ youth?" So I guess that's to the entire panel.
>> Yeah.
I think that question is about the studies we reviewed, so maybe that's me, although I'll let you guys address it, too.
For the meta-analysis, we did not -- We made a resource choice to look only at the main effects and no subgroup effects, but I think a nice follow-up would be to look at subgroup effects going forward.
I don't know.
I'll let others answer with a specific study.
>> Well, just for the No Bully study, it was conducted in elementary schools, and we didn't have the power to estimate those types of differences, or even the questions, actually.
>> All right.
Thank you very much.
So our time is almost up.
We have a couple more minutes until 5 o'clock.
I don't see any other questions in the queue.
Again, I encourage you to join the discussions on the discussion board on our website.
Tomorrow at 11 a.m. Eastern, we will start the conference with another plenary session with a special guest, Nicole Hockley from Sandy Hook Promise, who will talk about "Tragedy to Transformation: Preventing School Violence with Proven Programs." And again, I want to thank our panelists.
It was an honor to host you here today, and we also want to thank our audience for finding the time to join this session, and if there are no other questions, we will adjourn, but before we do, I want to thank you again, and have a great day.
Take care.
Disclaimer:
Opinions or points of view expressed in these recordings represent those of the speakers and do not necessarily represent the official position or policies of the U.S. Department of Justice. Any commercial products and manufacturers discussed in these recordings are presented for informational purposes only and do not constitute product approval or endorsement by the U.S. Department of Justice.