
How is AI used in Law Enforcement?

Effective policing starts with the ability to make quick, effective decisions under pressure, often operating with imperfect information and relying on prior experience as a guide. The more we can supplement existing processes with better information, delivered in a timely manner, the more we can reduce the burden on individual officers.

Artificial Intelligence (AI) has reached nearly every corner of the world, impacting communities and citizens in ways that we may not even fully understand. Law enforcement is no different, with agencies around the country (and the world) beginning to deploy AI for a variety of applications.

This raises the question: is it helping? If so, how? And should we be concerned about the involvement of non-human inputs in such a crucial element of our day-to-day lives?

What factors should be considered before choosing AI?

Due to their potential impact on the lives of everyday Americans, AI solutions in law enforcement, and in public safety in general, should be thoughtfully considered before making their way into an agency’s technology stack. This sentiment is echoed by governments at every level around the world.

In June 2023, INTERPOL and the United Nations Interregional Crime and Justice Research Institute (UNICRI) released the Toolkit for Responsible AI Innovation in Law Enforcement, which functions as “a practical guide for law enforcement agencies on developing and deploying AI responsibly, while respecting human rights and ethics principles.”

In October 2023, President Biden issued an Executive Order on “Safe, Secure, and Trustworthy Artificial Intelligence,” and in December the Congressional Research Service released a report focused specifically on AI and law enforcement.

Here are two factors that agencies should consider when adopting AI solutions.

Bias and Ethical Concerns

Ultimately, AI is only as good as the data it trains on. If the data is biased, the system will be as well. According to the NAACP, “AI models can inherit biases from historical crime data, leading to discriminatory policing practices.” As a solution, they recommend that governments “Prohibit the use of historical crime data and other sources known to contain racial biases in predictive policing algorithms.”

Rather than eliminating datasets, many organizations around the country have begun working on policies and practices that mitigate the issue. The Center for Advancing Safety of Machine Intelligence, a collaboration between Northwestern University and UL Research Institutes, released an Ethical Framework to “Reduce Bias in Data-Driven Policing,” aimed at both the private companies building AI solutions and the agencies adopting them.

“The spirit of these recommendations and their goal is to help police become better at their job. They are being given a new tool to use, and they need to understand how to incorporate the tool into their work,” said Ryan Jenkins, associate professor of philosophy at California Polytechnic State University.

The impact of AI-assisted decision-making

Law enforcement officers make critical, life-and-death decisions nearly every day. These decisions affect both the officers themselves and members of their community. For departments incorporating AI into their decision-making, the importance of understanding the tool’s impact can’t be overstated.

As part of its efforts to better understand AI in law enforcement, the National AI Advisory Committee (NAIAC) to the President recently visited the Miami Police Department to see what role AI was playing in the department’s decision-making. Per FedScoop:

“During a trip to South Florida earlier this year, Law Enforcement Subcommittee members on the National AI Advisory Committee asked MPD leaders how many times they used facial recognition software in a given year. The answer they got was ‘around 40.’”

The report continued:

“Based in part on that Miami fact-finding mission, [a subcommittee] on Thursday will recommend to the full NAIAC body that federal law enforcement agencies be required to create and publish yearly summary usage reports for safety- or rights-impacting AI. Those reports would be included in each agency’s AI use case inventory, in accordance with Office of Management and Budget guidance finalized in March.”

The key theme that emerges from the NAIAC trip is transparency. Communities and external organizations alike will expect to know how departments are leveraging AI in their decision-making, something departments and Chiefs will need to be cognizant of moving forward.

Additionally, the prioritization of “human-in-the-loop” solutions, which do not allow AI to act without human input, should be considered as a potential standard. By keeping human eyes on the technology, agencies can improve transparency and maintain effective, safe standard operating procedures.
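
To make the idea concrete, here is a minimal sketch of what a human-in-the-loop gate could look like in software. Every name in it (AIRecommendation, human_in_the_loop_gate, the audit log) is a hypothetical illustration, not any vendor’s actual API: the point is simply that the AI only ever suggests, a named human decides, and every decision is logged so the agency can produce the kind of usage summaries NAIAC recommends.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIRecommendation:
    """A candidate output from an AI tool, e.g. a facial-recognition match."""
    source_tool: str
    summary: str
    confidence: float  # model-reported score, 0.0-1.0

@dataclass
class ReviewedDecision:
    recommendation: AIRecommendation
    reviewer_id: str
    approved: bool
    reviewed_at: str

# Every review, approved or not, lands here so usage can be reported later.
audit_log: list[ReviewedDecision] = []

def human_in_the_loop_gate(rec: AIRecommendation, reviewer_id: str, approved: bool) -> bool:
    """No AI recommendation is acted on until a named human has reviewed it."""
    decision = ReviewedDecision(
        recommendation=rec,
        reviewer_id=reviewer_id,
        approved=approved,
        reviewed_at=datetime.now(timezone.utc).isoformat(),
    )
    audit_log.append(decision)
    return decision.approved

# Usage: the AI only ever *suggests*; the human decides.
rec = AIRecommendation("facial_recognition", "Possible match: case #1234", 0.87)
if human_in_the_loop_gate(rec, reviewer_id="badge-5678", approved=True):
    print("Reviewer approved; proceed per standard operating procedure.")
```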

What are the benefits of using AI in law enforcement?

As Chiefs and departments prioritize their awareness of potential bias within AI-powered tools and aim to be transparent in adopting them, they’ll also be keen to note the distinct benefits AI provides. Ultimately, the tools that make waves will be the ones that effectively improve key aspects of police work. Here are a few of those potential areas.

Suspect identification

If you’ve attended an event at a venue owned by Madison Square Garden Entertainment (MSGE) over the last few years, you’ve had your biometric information scanned. MSGE is not shy about this fact, either, with prominent signs guarding entryways and ticket-taking locations. The data functioned primarily to “rebuff people considered dangerous,” but ultimately landed the company in hot water when it was found to have potentially been used to “remove perceived adversaries.”

More importantly, MSGE is not alone in this practice. Not even close, according to Police1:

“In Chihuahua, Mexico, a massive tower pulls in feeds from thousands of cameras, biometric sensors, license plate readers, drones and other sensors from infrastructure throughout the region…Last October, Dubai Police exhibited self-driving patrol cars with 360-degree cameras, license plate readers, an onboard drone and facial recognition technology that will patrol residential neighborhoods…Nice, France, is preparing for the upcoming summer Olympics by powering its RTCC with AI to capture, cull and analyze data generated from facial recognition, advanced video analytics and other technologies.”

In the United States, “Police in the St. Cloud (Florida) RTCC use AI to sift through more than 7,000 video feeds and license-plate readers to search for…individuals by what they are wearing.”

But how accurate are these technologies? Very accurate, according to the National Institute of Standards and Technology: each of the top 150 facial recognition algorithms measures over 99% accurate across white and Black, male and female demographics.

While ethical concerns remain, AI for suspect identification has begun to prove its efficacy.

Evidence management and analysis

In the digital age, evidence is hardly in limited supply. With seemingly limitless data sources to draw from, law enforcement agencies face the massive task of sifting through data points for relevant pieces of evidence. As a result, they’ve looked to AI to help solve the problem.

According to Police1, “Europol’s Innovation Lab is using artificial intelligence (AI) to process massive amounts of data to identify trends and patterns, as well as leveraging tools such as ChatGPT to act as investigative assistants…Belgian police have developed a platform that allows investigators to cross reference more than 50 separate internal databases and yield results in seconds. New Jersey has used a similar approach to dramatically curtail gun crime.”

As shown in the previous section, real-time crime centers (RTCCs) have the potential to become the hub for AI within a given law enforcement agency. Prepared partner Flock Safety has led the way in AI-enabled RTCC technology.

Flock Safety Falcon® LPR is AI-enabled with Vehicle Fingerprint® technology, which recognizes unique vehicle attributes, so suspect vehicles can be located even when a plate isn’t visible. Condor video is AI-enabled with Visual Alerts, enhancing suspect vehicle identification and tracking across a network of cameras, significantly reducing manual review effort.
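
As a rough illustration of attribute-based vehicle search (not Flock’s actual implementation; the detection fields and function below are hypothetical), the idea is to filter camera detections on visual attributes when no plate is readable:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VehicleDetection:
    """One camera detection; the plate may be missing or unreadable."""
    camera_id: str
    timestamp: str
    color: str
    body_type: str   # e.g. "sedan", "pickup"
    plate: Optional[str] = None
    roof_rack: bool = False

def search_by_attributes(detections, *, color=None, body_type=None, roof_rack=None):
    """Filter detections on visual attributes instead of the plate."""
    results = []
    for d in detections:
        if color is not None and d.color != color:
            continue
        if body_type is not None and d.body_type != body_type:
            continue
        if roof_rack is not None and d.roof_rack != roof_rack:
            continue
        results.append(d)
    return results

detections = [
    VehicleDetection("cam-01", "2024-05-01T10:02:00Z", "red", "sedan"),
    VehicleDetection("cam-02", "2024-05-01T10:05:00Z", "red", "sedan", plate="ABC123"),
    VehicleDetection("cam-03", "2024-05-01T10:07:00Z", "blue", "pickup"),
]

# A suspect vehicle with no readable plate can still be narrowed down:
for hit in search_by_attributes(detections, color="red", body_type="sedan"):
    print(hit.camera_id, hit.timestamp)
```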

Emergency Response

Law enforcement officers are frequently the first to the scene of a variety of emergencies, meaning they don’t just have to be prepared to handle criminal investigations. Instead, they form a crucial part of emergency response protocols.

When there is a crime in progress, AI might also be able to help. Companies like ZeroEyes leverage existing cameras to monitor and detect potential gun violence, instantly alerting local authorities.

How does Prepared help Law Enforcement and Emergency Responders?

Prepared Assist and Prepared OnScene incorporate critical data into the emergency response process, helping law enforcement by providing enhanced digital evidence and giving them eyes-on-scene before their eyes are on-scene. With new AI-powered features, call-takers and dispatchers have the ability to get key information to the field faster than ever before.

Assist uses AI to process call audio, allowing the telecommunicator to copy suspect descriptions, descriptions of violence, weapons, and more directly to CAD in two clicks, to be shared with officers en route to the scene. As a result, officers approach the scene better informed and better equipped to handle the situation as it unfolds.
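
For a sense of the workflow’s shape, here is a heavily simplified sketch. This is not Prepared’s implementation: a real system would use a trained language model rather than keyword lists, and the CAD note format here is invented for illustration.

```python
import re

# Hypothetical keyword lists standing in for a trained model's output.
WEAPON_TERMS = ["gun", "knife", "rifle", "pistol"]
VIOLENCE_TERMS = ["shots fired", "fighting", "assault"]

def extract_key_details(transcript: str) -> dict:
    """Pull dispatch-relevant details out of a 911 call transcript."""
    text = transcript.lower()
    return {
        "weapons": [w for w in WEAPON_TERMS if w in text],
        "violence": [v for v in VIOLENCE_TERMS if v in text],
        # Naive pattern for phrases like "wearing a red hoodie".
        "suspect_description": re.findall(r"wearing [a-z ]+", text),
    }

def format_for_cad(details: dict) -> str:
    """Render the extracted details as a single CAD-ready note."""
    parts = []
    if details["suspect_description"]:
        parts.append("SUSPECT: " + "; ".join(details["suspect_description"]))
    if details["weapons"]:
        parts.append("WEAPONS: " + ", ".join(details["weapons"]))
    if details["violence"]:
        parts.append("VIOLENCE: " + ", ".join(details["violence"]))
    return " | ".join(parts)

transcript = "He has a gun and is wearing a red hoodie. Shots fired near the park."
print(format_for_cad(extract_key_details(transcript)))
# -> SUSPECT: wearing a red hoodie | WEAPONS: gun | VIOLENCE: shots fired
```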

FlockOS® 911, powered by Prepared, helps law enforcement improve safety and accelerate case resolution by giving patrol officers access to 911 audio, transcriptions, and on-scene media en route, directly through the widely adopted FlockOS® platform.

Want to learn more about bringing these solutions to your agency? Schedule a call with a member of our team!

