Research supported by NIJ is helping to lead the way in applying artificial intelligence to address criminal justice needs, such as identifying individuals and their actions in videos relating to criminal activity or public safety, DNA analysis, gunshot detection, and crime forecasting.
AI is a rapidly advancing field of computer science. In the mid-1950s, John McCarthy, widely credited as the father of AI, defined it as “the science and engineering of making intelligent machines.” Conceptually, AI is the ability of a machine to perceive and respond to its environment independently and perform tasks that would typically require human intelligence and decision-making processes, but without direct human intervention. One facet of human intelligence is the ability to learn from experience. Machine learning is an application of AI that mimics this ability and enables machines and their software to learn from experience. Particularly important from the criminal justice perspective is pattern recognition. Humans are efficient at recognizing patterns and, through experience, we learn to differentiate objects, people, complex human emotions, information, and conditions on a daily basis. AI seeks to replicate this human capability in software algorithms and computer hardware. For example, self-learning algorithms use data sets to learn how to identify people based on their images, complete intricate computational and robotics tasks, understand purchasing habits and patterns online, detect medical conditions from complex radiological scans, and make stock market predictions.
[note 1] “What is Artificial Intelligence,” The Society for the Study of Artificial Intelligence and Simulation of Behaviour.
[note 2] Bernard Marr, “What Is the Difference Between Deep Learning, Machine Learning and AI?” Forbes (December 8, 2016).
Just as with humans, learning for AI is largely a matter of recognizing patterns and classifying information. AI is said to learn through supervised, unsupervised, semisupervised, and reinforcement learning. In supervised learning, AI algorithms are trained using large numbers of labeled examples. Unsupervised AI algorithms strive to identify patterns in data, looking for similarities that can be used to categorize the data without the aid of labels. Semisupervised learning uses a small amount of labeled data to learn to classify a larger set of unlabeled data; this approach is useful when extracting features from data is difficult and labeling examples is a time-intensive task for experts. Reinforcement learning trains an algorithm with a reward system, providing feedback when an artificial intelligence agent performs the best action in a particular situation. In reinforcement learning, the system goes through a process of trial and error until it arrives at the best possible outcome, finding the optimal way to complete a particular goal or improve performance on a specific task.
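To make the supervised case concrete, here is a minimal, illustrative sketch (not any production system): the algorithm “learns from experience” by averaging a handful of labeled examples into per-class centroids, then classifies a new point by whichever centroid is closest. The data and class names are invented for the example.

```python
def train(examples):
    """examples: list of (features, label) pairs. Returns per-class centroids."""
    sums, counts = {}, {}
    for features, label in examples:
        sums.setdefault(label, [0.0] * len(features))
        counts[label] = counts.get(label, 0) + 1
        sums[label] = [s + f for s, f in zip(sums[label], features)]
    # The "learned" model is just the average feature vector for each label.
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def predict(centroids, features):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    def dist(centroid):
        return sum((c - f) ** 2 for c, f in zip(centroid, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Toy labeled training data: two clusters in a 2-D feature space.
labeled = [([1, 1], "A"), ([1, 2], "A"), ([8, 8], "B"), ([9, 8], "B")]
model = train(labeled)

print(predict(model, [0, 1]))  # "A" — near the first cluster
print(predict(model, [9, 9]))  # "B" — near the second cluster
```

An unsupervised algorithm would face the same points without the "A"/"B" labels and have to discover the two clusters on its own.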
Adapted from “What is AI? Everything you need to know about Artificial Intelligence” and “SuperVize Me: What’s the Difference Between Supervised, Unsupervised, Semi-Supervised and Reinforcement Learning?”
Because AI is so often a matter of recognizing patterns and classifying data, a tool used in statistical classification, the confusion matrix (also known as an error matrix), has become widely used to tune and assess the performance of AI algorithms.
A confusion matrix describes the performance of a classification algorithm (or “classifier”) on a set of test data for which the actual values are known, and it makes that performance easy to visualize. By thinking of the following values as “dials” on a virtual system, we can adjust the AI algorithm to provide the measurements needed to accomplish our end result. These measurements include: “true positives,” event values the classifier predicted correctly; “false positives,” no-event values the classifier incorrectly predicted as events; “true negatives,” no-event values the classifier predicted correctly; “false negatives,” event values the classifier incorrectly predicted as no-events; “recall,” the fraction of actual positive cases the algorithm correctly identifies (high recall means few false negatives); and “precision,” the fraction of the algorithm’s positive predictions that are actually correct (high precision means few false positives). Accuracy can then be thought of in terms of recall and precision: with high recall and low precision, most positive examples are correctly recognized, but at the cost of many false positives; with low recall and high precision, many positive examples are missed, but those predicted as positive are indeed positive.
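These quantities can be sketched in a few lines of standard-library Python. The labels and data below are invented for illustration (1 = event, 0 = no event); the point is only to show how the four cells of the matrix and the two derived metrics relate.

```python
def confusion_matrix(actual, predicted):
    """Count the four cells of a binary confusion matrix."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    return tp, fp, tn, fn

# Toy test set: the known (actual) values and a classifier's predictions.
actual    = [1, 1, 1, 0, 0, 0, 1, 0]
predicted = [1, 0, 1, 0, 1, 0, 1, 0]

tp, fp, tn, fn = confusion_matrix(actual, predicted)

precision = tp / (tp + fp)  # of all positive predictions, how many were right
recall    = tp / (tp + fn)  # of all actual positives, how many were found

print(tp, fp, tn, fn)     # 3 1 3 1
print(precision, recall)  # 0.75 0.75
```

Raising the classifier's decision threshold typically trades recall for precision, which is exactly the “dials” intuition above.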