This technology brief examines the use of "chatbots," computer programs designed to simulate conversation with human users, within the criminal justice system.
A chatbot is a conversational interface that communicates in a natural language (written and/or spoken) through different media, including websites, mobile and messaging applications, and phone calls. The proliferation of messaging apps and advancements in artificial intelligence (AI), compounded by consumers' expectations for quick answers, have contributed to the rapid development and deployment of chatbots in various contexts. The current report notes that chatbots in the criminal justice system have the potential to improve efficiency, redefine engagement, expand access to justice, and reduce costs associated with administrative overhead for various criminal justice stakeholders; however, chatbots carry inherent risks that must be considered before implementation. These include 1) the misinterpretation of user input, leading to incorrect responses; 2) biased training data; and 3) vulnerability to hacking. Advances in AI will continue to improve chatbot capabilities and applications. Although chatbots are already being used in the criminal justice system, there are further possibilities for how they could be implemented in the future. Examples provided include law enforcement recruitment and investigations, court system awareness and access, corrections and community supervision, and victim services and support. This report also provides a discussion and table of chatbot classification, information on chatbot system architecture, a review of agency considerations before implementing a chatbot, policy and governance, privacy and security, user engagement, the role of humans, and ethics. 5 figures