Central Oregon police agencies are working on rules for using AI

Published 11:15 am Wednesday, November 20, 2024

Artificial intelligence technology has opened an ocean of possibilities, and many sectors — from education to politics to public safety — are beginning to tap into that brave new world, triggering a host of questions about how it can or should be used, and what the risks and benefits are for public agencies such as police departments.

Oregon State Police, the Deschutes County Sheriff’s Office and the Bend Police Department currently use generative AI in very limited capacities, if at all. None of the agencies yet uses these tools to write reports or for any policing work. But given AI’s potential benefits, all three agencies are actively exploring ways the new technology can be used responsibly to improve public safety.

Generative AI is a type of artificial intelligence that creates new content, such as articles, analysis, images or music, in response to human prompts, drawing on patterns learned from vast amounts of training data. It’s generally accessed through tools like OpenAI’s ChatGPT and Microsoft’s Copilot, and increasingly is built into other software.

When it comes to policing and public safety, these tools carry both significant benefits and real dangers, and agencies in Central Oregon are slowly working to regulate their use.

Developing secure policies

Capt. Kyle Kennedy from Oregon State Police told The Bulletin that the agency doesn’t use AI, generative or not, in any capacity and defers to state legislation for policy. More direction could come from the state Legislature regarding AI later this year, he said, but added that Oregon State Police is also independently looking into areas where generative AI could be implemented.

“We have to consider the integrity of investigations and security of our records,” Kennedy told The Bulletin. “That being said, we are looking into the feasibility of using AI where appropriate and efficient.”

In a sector awash in confidential information and sensitive investigations, law enforcement must take pains to protect private data when using tools like ChatGPT or Copilot, because information entered into these tools can be retained and reused by the company that operates the AI.

When developing policies, many public sector agencies are creating guidelines that prohibit employees from inputting personal data into generative AI tools. The Deschutes County Sheriff’s Office is one such agency, having implemented a new policy regulating artificial intelligence at the beginning of this year.

“Currently, the Sheriff’s Office does not have access to any generative AI tools or platforms in which the data input/output is governed by the Sheriff’s Office. Therefore, care must be taken when utilizing generative AI tools or platforms, especially when that use involves Sheriff’s Office data,” the agency’s AI policy states.

The policy goes on to say that any queries entered into generative AI tools must be anonymized, must follow state and federal privacy guidelines and cannot include restricted organizational information, such as information pertaining to an internal investigation.

Not diving in too quickly

Oregon State Police is not the only law enforcement agency deferring to civilian branches of government for direction on AI. The Bend Police Department also does not have its own policy regarding how to use artificial intelligence, and is instead relying on the city of Bend as its AI working group develops the city’s generative AI policy.

Adam Young, who heads the working group, said the team — which includes someone from the Bend Police Department — is being meticulous in how it approaches the benefits and dangers of generative AI. The goal is to create a comprehensive policy that addresses all contingencies, and not to throw something together that will end up being obsolete as more use cases for generative AI develop.


“Right now it’s been so slow going that we really haven’t focused or talked about specific things the police might be able to leverage (artificial intelligence) for,” Young said. “We have seen things in the news of what it can be used for — maybe to help consume a large amount of data — but right now we haven’t had any specific use cases that we are planning to look at for police.”

Another way law enforcement might use generative AI is to help officers save time writing reports. Instead of drafting them from scratch, officers can feed the relevant information into a generative AI tool, which then produces a narrative account of events. AI can also pull information out of audio recordings, transcribing them or even summarizing the important talking points. But there are concerns about the accuracy of these outputs.

On a statewide level, legislators are also taking their time to fully explore the implications of artificial intelligence. In 2023, Gov. Tina Kotek established an artificial intelligence advisory council. So far, Oregon has enacted a law that requires political campaigns to disclose when they have used AI to create synthetic media, but has not addressed how AI might impact law enforcement and policing policies.

Concerns about racial bias

The NAACP has recently called on lawmakers to regulate AI’s role in predictive policing and limit its implementation in law enforcement. The concern is that, without human oversight, people of color could be adversely affected by the content generative AI produces.

The sheriff’s office policy and the preliminary guidelines the city of Bend has issued while it develops official rules both address these biases by spelling out how employees should use generative AI responsibly and ethically.

“Members must actively work to identify and mitigate biases produced by AI systems. They should ensure that the output utilized from the AI systems is fair, inclusive, and does not discriminate against any individuals or groups. Members will be held solely responsible for any AI output that is utilized in their work product that is biased or discriminatory,” the sheriff’s office policy states.

The city of Bend also says in its guidelines that employees should “be aware of existing bias in Generative AI that may skew results for specific groups. Generative AI is not immune to human bias and has a tendency to produce distorted outputs, which could potentially be harmful for specific groups and foster mistrust.”
