By Rachel Wright, Policy Analyst
With the advent of generative artificial intelligence systems such as ChatGPT, AI has increasingly become a topic of public conversation. As these systems advance, so do concerns about their safety and effectiveness, as well as their potential impacts on the workforce and the economy.
AI systems are advancing rapidly but have been around for decades. The term AI was coined in the 1950s and refers to “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” AI systems work by processing and analyzing large data sets and making predictions based on patterns or correlations in the data.
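The data-to-pattern-to-prediction loop described above can be illustrated with a toy example. The sketch below is purely illustrative (it is not any specific product or method from this article): a nearest-centroid "model" learns one average point per label from historical data, then predicts the label whose centroid is closest to a new observation. The health-style feature names are hypothetical.

```python
# Toy illustration of the data -> pattern -> prediction loop:
# learn one average point (centroid) per label, then predict the
# label whose centroid is closest to a new observation.

def fit_centroids(rows):
    """rows: list of (features, label) pairs. Returns {label: centroid}."""
    sums, counts = {}, {}
    for features, label in rows:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [s / counts[lbl] for s in acc] for lbl, acc in sums.items()}

def predict(centroids, features):
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda lbl: sq_dist(centroids[lbl]))

# Hypothetical "historical" data: (resting heart rate, hours of sleep) -> risk
history = [
    ((88, 5.0), "high"), ((92, 4.5), "high"),
    ((62, 8.0), "low"), ((58, 7.5), "low"),
]
model = fit_centroids(history)
print(predict(model, (85, 5.5)))  # closest to the "high" centroid
```

Real systems use far richer models, but the shape is the same: historical examples in, a learned pattern out, predictions from that pattern.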
AI systems have brought about advancements and new discoveries in nearly every sector. For example, health researchers have used AI algorithms to analyze biometric data to improve the diagnosis and treatment of disease. On the consumer side, entertainment companies have used AI to analyze customer viewing habits and provide recommendations.
Although private sector uses of AI garner much attention, these systems are also used by the public sector to streamline service provision and support public officials. AI systems are commonly used in fields such as law enforcement, elections, transportation, public finance and government administration.
State and Local Law Enforcement Agencies and Predictive Policing
Since the 1990s, state and local law enforcement agencies have used AI systems to augment existing crime prevention efforts. More specifically, machine learning algorithms have been used to analyze historical crime data and predict where future crimes may occur. Law enforcement agencies have then allocated crime prevention resources based on this prediction. This practice is commonly referred to as predictive policing.
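In its simplest form, the prediction step described above is a hotspot map: bucket historical incident locations into grid cells and flag the cells with the most incidents. The sketch below shows that minimal version under assumed coordinates and cell sizes; it is not any agency's actual algorithm, and deployed systems use considerably richer models.

```python
from collections import Counter

def hotspot_cells(incidents, cell_size=1.0, top_k=3):
    """Bucket historical incident coordinates into grid cells and return
    the top_k cells with the most incidents -- a minimal 'hotspot'
    predictor of where future incidents may cluster."""
    counts = Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in incidents
    )
    return [cell for cell, _ in counts.most_common(top_k)]

# Hypothetical historical incident coordinates on a city grid
past = [(0.2, 0.7), (0.9, 0.1), (0.5, 0.5),   # three incidents in cell (0, 0)
        (3.1, 2.2), (3.8, 2.9),               # two incidents in cell (3, 2)
        (7.4, 7.9)]                           # one incident in cell (7, 7)
print(hotspot_cells(past, top_k=2))  # [(0, 0), (3, 2)]
```

An agency would then direct patrols or other prevention resources toward the flagged cells, which is the resource-allocation step the article describes.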
Today, law enforcement agencies in Arizona, California, Illinois, New York, South Carolina, Tennessee and Washington have implemented predictive policing programs.
State and Local Election Offices and Signature Matching Tools
State and local election officials have started using AI to streamline and improve the accuracy of the signature matching process for absentee/mail-in ballot return documents. As of 2022, 27 states require election workers to match a voter’s signature on a returned absentee/mail-in ballot with the signature on record for that voter.
Traditionally, election workers learn handwriting analysis techniques to manually compare and verify voter signatures. However, this process is resource- and time-intensive. Mistakes can also arise if an election worker is not comprehensively trained in analysis techniques.
Automated signature verification (ASV) tools, powered by machine learning software, have helped states address these issues. ASV tools are trained on large data sets and utilize pattern recognition, geometrical analysis and other analytical techniques to verify that a voter’s signature on their ballot matches their signature on record. ASV applications further ease administrative burdens by integrating with existing mail ballot sorting equipment and voter registration systems.
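The core comparison such a tool performs can be sketched as a similarity check between two feature vectors, one extracted from the ballot signature and one from the signature on file. Everything below is an assumption for illustration: the feature values, the cosine-similarity measure and the acceptance threshold are hypothetical, and commercial ASV products use their own proprietary features and scoring.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two feature vectors, 1.0 meaning identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def signatures_match(ballot_features, reference_features, threshold=0.95):
    """Accept the ballot signature if its features are close enough to the
    on-file reference; otherwise it would be flagged for human review."""
    return cosine_similarity(ballot_features, reference_features) >= threshold

# Hypothetical geometric features (e.g., stroke angles, aspect ratios)
on_file = [0.82, 0.31, 0.55, 0.90]
ballot  = [0.80, 0.33, 0.52, 0.91]   # same voter, normal variation
forged  = [0.10, 0.95, 0.20, 0.15]   # very different geometry

print(signatures_match(ballot, on_file))   # True
print(signatures_match(forged, on_file))   # False
```

In practice, signatures that fall below the threshold are not rejected outright; they are routed to trained election workers, keeping a human in the loop for borderline cases.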
As of 2020, at least 29 counties in eight states – California, Colorado, Florida, Hawaii, Nevada, Oregon, Utah and Washington – use AI systems to enforce signature matching rules on absentee/mail-in ballots.
State Transportation Agencies: Traffic Signals, Road Treatment and Bridge Deterioration
State governments have also employed AI systems to improve transportation infrastructure. For example, Maryland has sought to alleviate traffic congestion by rolling out AI software-controlled traffic lights. These lights utilize dynamic timing based on factors such as the number of vehicles on the road, vehicle collisions and construction to synchronize corridors of traffic and combat congestion in real time.
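The "dynamic timing" idea can be sketched very simply: split each signal cycle among the approaches in proportion to current demand, with a minimum green per approach. This is a minimal, assumed model for illustration; the Maryland deployment's actual control logic is not described in this article and is certainly more sophisticated.

```python
def green_splits(queues, cycle_s=90, min_green_s=10):
    """Split one signal cycle among approaches in proportion to their
    detected queue lengths, guaranteeing a minimum green time per
    approach -- a minimal version of demand-responsive signal timing."""
    reserved = min_green_s * len(queues)
    flexible = cycle_s - reserved
    total = sum(queues) or 1  # avoid division by zero when no vehicles
    return [min_green_s + flexible * q / total for q in queues]

# Hypothetical vehicle counts: north-south vs. east-west approaches
print(green_splits([30, 10]))  # heavier approach receives the longer green
```

As counts change (for example, after a collision diverts traffic), the splits are recomputed each cycle, which is what lets the corridor adapt in real time.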
Vermont has also used AI for several transportation-related projects. According to the Vermont Chief Information Officer, the state is using AI-powered modeling and predictive analytics to understand how long road treatments will last as well as predict bridge deterioration.
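One simple form such predictive analytics can take is trend extrapolation: fit past condition ratings against time, then project when the trend crosses a maintenance threshold. The ratings, years and threshold below are hypothetical, and this least-squares sketch is an illustration of the general idea, not Vermont's actual model.

```python
def linear_fit(years, ratings):
    """Ordinary least squares fit of condition rating vs. year;
    returns (slope, intercept)."""
    n = len(years)
    mx, my = sum(years) / n, sum(ratings) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(years, ratings))
             / sum((x - mx) ** 2 for x in years))
    return slope, my - slope * mx

def year_rating_reaches(threshold, slope, intercept):
    """Year at which the fitted deterioration trend hits the threshold."""
    return (threshold - intercept) / slope

# Hypothetical bridge condition ratings (9 = excellent), declining over time
years = [2015, 2017, 2019, 2021, 2023]
ratings = [8.0, 7.6, 7.1, 6.7, 6.2]
slope, intercept = linear_fit(years, ratings)
print(round(year_rating_reaches(4.0, slope, intercept)))  # projected year
```

The projected crossing year gives planners a rough horizon for scheduling inspection or rehabilitation before the structure deteriorates past the threshold.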
Navigating the Future: AI Regulation and Expanding Public Sector Use of AI
AI systems are not new, and recent integrations show they are here to stay. In the coming years, states will have to consider how to regulate the design, development and use of AI systems — in both the private and public sectors — to ensure that they are safe, effective and unbiased. They will also have to explore how the public sector can further utilize AI systems to improve service provision.
To guide these efforts, states can turn to the Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People. This document, issued by the White House Office of Science and Technology Policy in October 2022, introduces five principles and associated practices to help guide the design, use and deployment of automated systems so that they align with democratic values and protect civil rights, civil liberties and privacy. These principles include ensuring that the public is:
- Protected from unsafe or ineffective systems.
- Free from discrimination by algorithms and inequities in the design and use of AI.
- Protected from abusive data practices.
- Informed of how an AI system is being used and how it contributes to outcomes that impact them.
- Free to opt out of an automated system in favor of a human alternative.
More information about the Blueprint for an AI Bill of Rights can be accessed at https://www.whitehouse.gov/ostp/ai-bill-of-rights/what-is-the-blueprint-for-an-ai-bill-of-rights/.
This is the first article in a three-part series on artificial intelligence and its role in the states. The second article, “Artificial Intelligence in the States: Challenges and Guiding Principles for State Leaders,” is available by visiting https://csgovts.info/AI_Challenges_Guiding-Principles.