
"Sentinel Event" Review in the Criminal Justice System

James Doyle, J.D., NIJ Visiting Fellow

Listen to James Doyle discuss the basics of a "sentinel event" review in the criminal justice system. This learning-from-error approach borrows from principles that medicine, aviation and other high-risk enterprises have successfully used. Former NIJ Fellow Doyle offers the basics to understand this innovative idea that takes a system-wide perspective of error, bringing all stakeholders together in a non-blaming, forward-looking way after a bad outcome, such as a wrongful conviction, occurs.

JIM DOYLE: Good morning, everybody. This will be a new thing for criminal justice, but it’s not a new thing for people who are involved in serious work in other fields — in medicine, in aviation, in other fields. When something goes wrong, it’s someone’s job to find out why it went wrong. In criminal justice we haven’t really mobilized our capacity to do that. We do have an adversary trial inspection and appeal system that helps us figure out and catch things when we go wrong, but that’s an effort to look back, not an effort to look forward and see how things are happening and how they might happen again.

So, one of the things that’s given a spur to thinking about this are the DNA exoneration cases, where, in a situation where once we could have thought we were perfect or nearly perfect or close enough to perfect, we now know that things are wrong sometimes, we know that the public cares that they’re wrong sometimes, and we know that the public expects us to do something about it. And these cases that involve convicting the wrong man can give us some pretty quick access to examples of how medical people started to think about situations where from time to time a hospital will operate on the wrong patient.

One particular example of a wrong-patient study looked very closely at an event where a woman was wrongly operated on for brain problems, and the study showed some interesting things. First of all, it showed that it wasn’t just one mistake involved in choosing the wrong person and operating on her; there were 18 separate individual mistakes. But beyond that, a key feature was that it showed that none of these mistakes by itself would have been enough to cause the wrong-patient surgery, that all of these mistakes had to interact with each other, and that all of these mistakes had to interact with latent weaknesses in the hospital’s systems. So, the doctor forgot to ask the patient’s name, there were errors in the charts, the patient’s face was draped so that no one could see who the patient was, the communications between the hospital’s computers were very bad, the communications between the physicians and the nurses were bad, and beyond all of this, there was a culture where, when things went a little bit wrong, nobody saw those as dangerous warning signs; everybody saw them as the kind of little screw-up that they were accustomed to.

Translating that into criminal justice, and taking for a moment the wrong-man conviction example, you can see pretty clearly that when the wrong person is convicted it’s not one person’s error that made that happen. Yes, the eyewitness had to choose the wrong person from the lineup or the photo array, but the police had to choose the person to put in the lineup or the photo array, had to design the techniques in which the photo array was displayed and in which the interviews were conducted; the prosecutor had to miss the mistakes that the police made; the defense had to fail to catch the mistakes that the police and the prosecutors didn’t catch; and ultimately the trial proceedings and the appellate proceedings didn’t work to catch the wrong person either.

So, in all of those situations something was done or not done by everyone, and usually the right answer to the question “who’s responsible for this wrongful conviction?” is everybody involved, to one degree or another, either by making a mistake or by failing to catch a mistake. And “everyone involved” doesn’t just mean the people at the sharp end — it doesn’t just mean the detective, it doesn’t just mean the person at the forensic lab bench — it means the people who created their working environments, who trained them, who hired them, who’ve put their caseloads up to where their caseloads are, who funded them or failed to fund them. It’s the interaction of all of these things that makes wrongful convictions happen. The phrase for this that the doctors studying the wrong-patient surgery discovered in an existing safety literature was an “organizational accident.”

In an organizational accident, no individual’s mistake or misconduct is enough; small mistakes combine with each other, they combine with latent weaknesses in the system, and together they cascade and make an error, and when the doctors discovered that, it gave them access to a huge literature coming from events like the Chernobyl meltdown or the Challenger launch decision, where big, complex organizations had failed by having organizational accidents. And the question then became “what do we do about this?”

Now, naturally, our reflex is to try to look and find the bad apple, to find the doctor who operated on the wrong patient, to find the prosecutor who didn’t turn over the Brady material, to find the lab technician who did the wrong test at the bench. But the fact is, blaming one individual or one component doesn’t explain an organizational accident, because many things had to combine. And it also doesn’t help us in terms of learning from the organizational accident, because even after we’ve exorcised one bad apple, we still have the situation left that the bad apple was operating in. What we really need to know in one of these situations is why the choice that the bad apple made, conceding that it’s a bad choice, seemed like a good choice, or at least the best choice that person had at the time. Why was it that this detective zigged instead of zagged? Why is it that the prosecutor decided not to turn something over? Why is it that the defense lawyer investigated this or didn’t investigate this? Why is it that the victim was interviewed in one way and not interviewed in another way?

All of these things help us if we’re going to try to pivot from blaming someone for what happened in the past toward how we’re going to understand what might happen in the future if we don’t intercept it and we don’t try to cure the underlying system weaknesses that are involved.

The lesson that the medical people and the industrial people learned was that a failed component, a failed individual, or a failed technique is not by itself either the sole cause or the complete solution to any situation. We have to look at how the components of the system — the human components, the technical components, the logistical components — interacted with each other and with their larger environment. It’s true that many mistakes start with the police, but it’s also true that many of the things the police do or don’t do they do or don’t do because of things that happened before they got involved, things that affected their funding, their clearance rates, their caseload pressures. Many of the things the police do they do because they anticipate something that’s going to happen with the lawyers further on along the process. And so, all of these components are part of a system where things before the component and after the component interacted with the component, and all of them interacted with a larger environment.

These are things that the typical adversary trial is not going to capture, and they are things that, generally speaking, we don’t spend a lot of time looking at. We do have arguments from time to time about what the rate of wrongful convictions is — is it 2 percent or 3 percent or 1 percent. But all of us know that if you’re on the front lines in the system, you’re not prepared to say that there’s a good rate of wrongful convictions, that 2 percent is good enough, and you know very well that the public is not going to accept that either. What the people on the front lines want to do, for reasons of professional pride and really self-respect, is to get every case right. What the public expects us to do is to get every case right, or at least to see how we can avoid getting the next case wrong if we discover one that was wrong.

It doesn’t really matter very much for this perspective whether the rate is 1 percent or 5 percent or 10 percent. One of the things that the safety people have learned in aviation and medicine is that Murphy’s law is wrong. Everything that can go wrong usually doesn’t go wrong, but then we draw the wrong conclusions from it. And so, one of the things we’re trying to do is see whether we can find a list of events that we can learn from, treating every one of these defects that we do find as a treasure. Many cases, for example, will fall in the category of a near miss or a good catch, where someone is almost convicted, where the case goes for six or seven or eight months before a new result comes back from the lab or an alibi witness comes up. The other side of that coin is that many situations will occur where a dangerous person is almost released, but someone finds the warrant at the last moment and keeps the person locked up, or finds the parole detainer at the last moment that keeps the person locked up. This category of event, the near miss or the good catch, where the near miss or the good catch came from good luck or from having an extraordinary individual present at the time, is still something that we think we ought to be able to look at, because we can’t count on always having good luck or always having an extraordinary individual present when this situation recurs. So each of these near-miss or good-catch events gives us an opportunity to learn about underlying weaknesses in the system.

Wrongful convictions are the obvious example, but consider another: cold cases that stay cold too long, or a case that stays cold forever until, after 10 years, a CODIS hit comes out, the investigation is followed up, and we learn why or who. We need to learn why we missed that person the first time. The wrongful release of someone who should have been held, because there was reason to believe that he was dangerous, is a situation that we need to look at very closely in domestic violence, revictimization, and elder abuse cases. With revictimizations, people have looked carefully and learned fruitful lessons from examining individual events — not at the level of a huge national commission but at the level of the people who actually do the work. Why was it that this happened? How is it that we can avoid having it happen again?

So that’s — the idea is to look for lessons to adopt the medical motto that every time you find a defect you should treat it as a treasure. How you look at it is another question that we want to explore as part of this Sentinel Events Initiative. In medicine, for example, there had always been a tradition of looking backwards at patient deaths or injuries caused by treatment, in morbidity and mortality reviews. But often those turned into questions of blaming and shaming, and they stayed within particular stovepipes. So the lesson we want to explore here, following from the organizational accident perspective, is that to look at these cases the way we need to look at these cases, we need to have all of the stakeholders at the table: We need to have not just the police, not just the prosecutors, not just the legal professionals; we also need people who know and can speak from the victims’ perspective. We’ll need, from time to time, people who have a particular scientific expertise, a particular knowledge of the way a forensics lab operates. We’ll need people sometimes from corrections or from probation or from the judiciary. Because if you understand the organizational accident approach to how things go wrong, you can see that no stovepipe either has the whole answer or a complete solution to anything that happens. When we have all of these people together, we have to see if we can learn if there’s a way to get them to focus on something beyond blame. The problem with blaming an individual is, first of all, it probably doesn’t explain the organizational accident and how it happened. 
But maybe just as importantly, it also tends to drive tales of helpful mistakes underground; people become more and more hesitant to report things, no one wants to get involved in the unpredictable machinery of disciplining a friend or a colleague, and so things that could be reported and could be learned from tend to stay submerged if what we are operating in is an atmosphere where laying blame seems to be the goal.

What we want to try to look for is a way where we’re not trying to lay blame for past events but try to cut the risk of future events. We want to try to look for lessons in a way that’s continuous and routine and happens a lot, not where we convene a national commission every 10 years because the public outcry about some scandalous wrongful conviction or disastrous wrongful release forces us to do so. What we want to do is to be able to try to find a way to make the people who actually do the work sit together and think about how this happened, how it could happen again, and what we could do to prevent it happening, not to leave this for commissions of blue-ribbon dignitaries to handle, but to get the people who actually do the work to work on these things together on a basis of shared professionalism and devotion to system integrity.

What we’re trying to do here is see whether there’s a way to give the good guys in the system something to work on other than trying to chase down the bad and incompetent guys in the system. That’s what we hope a sentinel event set of procedures can do. We don’t have any feeling that what we want to do is to impose, from a federal level, a requirement that everybody everywhere looks at every event all of the time. But we do think it’s worth exploring whether there will be enough of these events that people on the front lines will be willing to explore and can learn from to make this a productive enterprise. It is a pretty glaring gap in the way the criminal justice system looks at the world that we haven’t really developed a capacity for this sort of forward-looking accountability, this way to try to look at things that have happened in ways that cut the risk of them happening again.

So, I think what NIJ is looking for here is not beneficiaries who at this point we’re going to hand a lot of money to, or victims who NIJ is going to impose a lot of stuff on. What NIJ is looking for at this stage is real partners, people who are going to help talk through, on the front-line level, all the serious questions that this approach implicates: how do we handle confidentiality, how do we handle liability, how do we choose an event that’s productive, how do we choose the teammates. These are not things for which we have secret answers up our sleeves, but things that we hope the participants in this effort are going to help us find and explore further.

Date Created: August 14, 2019