Guiding principles help states successfully legislate on artificial intelligence

By Rachel Wright and Lexington Souers

For years, artificial intelligence has been ingrained in the everyday lives of Americans. Users can unlock their computers or mobile phones with AI-powered biometrics, while social media platforms employ complex algorithms to personalize and moderate content. Even email services use AI-powered natural language processing to detect spam and filter messages.

Today, AI systems continue to advance — and show no signs of slowing down. According to a 2018 OpenAI analysis, the amount of computational power used in the largest AI training runs has doubled roughly every 3.4 months since 2012. Computing power is one of the main factors driving the advancement of AI, enabling the development of new AI systems.
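For a rough sense of scale, a 3.4-month doubling time compounds to more than a tenfold increase in a single year. The snippet below is a back-of-the-envelope illustration of that arithmetic, not a figure taken from the OpenAI analysis itself:

```python
# Growth implied by a 3.4-month doubling time (the rate reported
# in OpenAI's 2018 "AI and Compute" analysis).
DOUBLING_MONTHS = 3.4

def growth_factor(months: float) -> float:
    """Total multiplicative growth after `months` of exponential growth."""
    return 2 ** (months / DOUBLING_MONTHS)

print(f"After 1 year:  ~{growth_factor(12):.0f}x")   # ~12x
print(f"After 3 years: ~{growth_factor(36):,.0f}x")  # ~1,500x
```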

Challenges of AI for State Governments: Governing Design, Development and Use
The rapid advancement of AI systems has posed significant regulatory challenges for states. State policymakers must decide what role their state will play in governing the design, development and use of AI systems, as well as how to ensure compliance with these laws. This will require state officials to answer complex policy questions with limited precedent to follow.

“Any emerging technology is challenging for policymakers because it presents new questions that might not have been raised by previous technologies,” said Matt Perault, director at the Center on Technology Policy at the University of North Carolina at Chapel Hill. “The other issue that is important to keep in mind is that AI is not just about risks, it’s also about benefits for our society. You want to ensure with any new regulation that you are mitigating harms, but not to such an extent that you’re having a disproportionate negative impact on the potential for realizing benefits as well. It’s important for any regulation to adequately balance risk mitigation and benefit maximization.”

For example, both public and private sector employers have begun using automated systems powered by AI to assist them in evaluating, rating and making other significant decisions about job applicants and employees. Using these tools can save employers money in terms of the time and manpower allotted to these tasks; however, some of these tools have been shown to discriminate against certain candidates, including candidates with disabilities.

“There’s been a focus on using existing civil rights law to address issues related to bias and discrimination,” Perault said. “That seems appropriate and helpful, from my standpoint, that there are elements of current law that we can use to address some of the potential harms.”

While issue-specific approaches are helpful, Perault added that there may be a need for industry-wide comprehensive approaches.

Guiding Principles for States
As a starting point, policymakers can address AI governance by identifying the principles with which the design, development and use of AI systems should align. To guide states in these efforts, the White House Office of Science and Technology Policy issued the “Blueprint for an AI Bill of Rights” in October 2022. The Blueprint states that AI systems should be designed, developed and deployed according to principles that bolster democratic values, protect civil rights, preserve civil liberties and ensure privacy.

Those principles, as defined by legal scholars, AI technologists and the White House, can assist state policymakers in trying to ensure that AI governance aligns with the objectives detailed in the Blueprint. They include interdisciplinary collaboration, protection from unsafe or ineffective systems, data privacy, transparency, protection from discrimination and accountability.

An additional resource, the “Artificial Intelligence Risk Management Framework,” was published in January 2023 by the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST). This guidance document, developed at the direction of Congress, incorporated input from private- and public-sector organizations that may seek guidance in managing the risks associated with producing AI technologies. NIST has since published its “AI RMF Playbook,” “AI RMF Roadmap” and “AI RMF Crosswalk.”

AI GUIDING PRINCIPLES
Commonalities in the recently published work of many legal scholars, AI technologists and the White House suggest that sound policy governing the design, development and use of AI: 
Ensures that the design, development and use of AI is informed by collaborative dialogue with stakeholders from a variety of disciplines. 
Protects individuals from the unintended, yet foreseeable, impacts or uses of an unsafe or ineffective AI system.
Protects individuals from abusive data practices and ensures that they have agency over how an AI system collects and uses data about them.
Ensures that individuals know when and how an AI system is being used and provides users with an option to opt out of an AI system in favor of a human alternative.
Protects individuals from discrimination and ensures that AI systems are designed in an equitable way.
Ensures that those developing and deploying AI systems are complying with the rules and standards governing AI systems and are being held accountable if they do not meet them. 

Addressing AI Regulation in the States
As laboratories for innovation, states have taken many different approaches to regulating AI systems that align with these guiding principles. Over the past five years, 17 states have enacted 29 bills that focus on AI regulation. Of these bills, 12 have focused on ensuring data privacy and accountability. This legislation hails from California, Colorado, Connecticut, Delaware, Illinois, Indiana, Iowa, Louisiana, Maryland, Montana, New York, Oregon, Tennessee, Texas, Vermont, Virginia and Washington.

STATE LEGISLATION: INTERDISCIPLINARY COLLABORATION
Illinois, New York, Texas and Vermont have enacted legislation that seeks to ensure the design, development and use of AI is informed by collaborative dialogue with stakeholders from a variety of disciplines.

Through this legislation, states have established task forces, working groups or committees to study the potential impacts of AI systems on consumers, as well as to identify potential public sector uses and cybersecurity challenges. These bodies have also been tasked with recommending legislation or regulation to protect consumers.

In June 2023, the Texas Legislature enacted HB 2060 (2023), establishing an AI advisory council consisting of public and elected officials, academics, and technology experts. The council was tasked with studying and monitoring AI systems developed or deployed by state agencies, as well as issuing policy recommendations regarding data privacy and preventing algorithmic discrimination.

STATE LEGISLATION: PROTECTION FROM UNSAFE OR INEFFECTIVE SYSTEMS
Four states have enacted legislation to protect individuals from any unintended, yet foreseeable, impacts or uses of unsafe or ineffective AI systems.

As part of such legislation, policymakers have directed state agencies to analyze AI systems currently in use and issue a report to their respective governor regarding potential unintended or emerging effects and potential risks of these systems. As part of this report, states are asked to outline policies or “codes of ethics” that seek to address identified concerns.

Connecticut policymakers passed SB 1103 (2023) with bipartisan support. The bill requires state agencies to review any use of AI and to ensure the technology does not discriminate against anyone and that it is fiscally responsible. The bill also established a working group to make recommendations for future AI use.

Connecticut Rep. Holly Cheeseman, who cosponsored the bill, said acknowledging the benefits of AI while maintaining an ongoing assessment of its use was an important part of the legislation and helped encourage bipartisan support.

“I think this was something that people were pretty comfortable with. We’re not saying you can’t use artificial intelligence in government departments, but if you’re going to do it, you have to do it in this very thoughtful and reasonable way that requires a continuous assessment. You can’t just buy something off the shelf and say, ‘That’s it. We’re done.’ You have to be aware that even with good things, there can be effects that you hadn’t foreseen that may, in some cases, discriminate or otherwise hurt the people you were aiming to help.”

— Rep. Holly Cheeseman, Connecticut 

STATE LEGISLATION: DATA PRIVACY
Eleven states have enacted legislation that protects individuals from abusive data practices and ensures that they have agency over how an AI system collects and uses data about them. Abusive data practices include the inappropriate, irrelevant or unauthorized use or reuse of consumer data.

State approaches to ensuring data privacy center on the ability of consumers to opt out of “profiling” when it furthers a system’s automated decision-making in a way that produces “legal or similarly significant effects.” Such effects may include unfair or deceptive treatment of consumers; negative impacts on an individual’s physical or financial health; and the provision or denial of financial and lending services, housing, insurance or education, among other things.

Indiana Sen. Liz Brown, who authored SB 5 (2023), said the legislation focused on how consumers can protect their personal information.

“I think that we need to continue to move into this space and maybe do some regulation, but we have to be soft,” Brown said. “With the legislation I introduced last year, we wanted to start slowly, but carefully. We don’t want to create a David and Goliath [situation] where new businesses can have mistakes because we’ve made it so prohibitive. On the other hand, we want to make sure that it’s not the Wild West. I think everybody agrees that we don’t know enough about the good and bad of AI.”

STATE LEGISLATION: TRANSPARENCY
Three states and New York City (Local Law 144, 2021) have enacted legislation to ensure that individuals know when and how an AI system is being used.

State approaches to ensuring transparency in the use of AI systems include requiring employers or businesses to disclose when and how an AI system is being used. In some instances, an employer may be required to obtain consent from an employee before using an AI system that collects data about them.

In Illinois, HB 2557 (2019) requires an employer to notify job applicants and obtain their consent before conducting a video-recorded interview that may be analyzed by AI to determine their fitness for the position. Employers are also required to give applicants advance notice of how an AI system will be used and to outline what types of characteristics the system will use to evaluate applicants.

STATE LEGISLATION: PROTECTION FROM DISCRIMINATION
Three states have enacted legislation to protect individuals from discrimination and ensure that AI systems are designed in an equitable way. This includes algorithmic discrimination, whereby an AI system contributes to the unjustified different treatment of people based on their race, color, ethnicity, sex, religion, disability or other protected characteristics.

State legislators have primarily sought to protect consumers from algorithmic discrimination by requiring those that develop or deploy AI systems to assess these systems for potential bias.

California is one such example. In 2019, the state enacted SB 36 (2019), which requires criminal justice agencies that use AI-powered pretrial risk assessment tools to analyze whether the tools produce disparate effects or biases based on gender, race or ethnicity. Pretrial risk assessment tools are used to estimate the likelihood that a defendant will fail to appear in court or commit new criminal activity, thereby influencing whether the defendant is released or detained.
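What such a bias assessment looks like in practice varies by statute, but one long-standing statistical screen is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures, which compares favorable-outcome rates across demographic groups. The sketch below is a minimal, hypothetical illustration of that screen; the group labels and audit data are invented, and none of the statutes discussed here mandate this particular methodology.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Favorable-outcome rate per group; `outcomes` is a list of
    (group, favorable) pairs from an audit of a system's decisions."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, got_favorable in outcomes:
        totals[group] += 1
        if got_favorable:
            favorable[group] += 1
    return {g: favorable[g] / totals[g] for g in totals}

def four_fifths_check(outcomes, threshold=0.8):
    """Impact ratio = lowest group rate / highest group rate.
    Under the four-fifths rule, a ratio below 0.8 flags
    potential disparate impact for closer review."""
    rates = selection_rates(outcomes)
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio, ratio >= threshold

# Hypothetical audit data: (demographic group, favorable decision)
audit = [("group_a", True)] * 60 + [("group_a", False)] * 40 \
      + [("group_b", True)] * 35 + [("group_b", False)] * 65

rates, ratio, passes = four_fifths_check(audit)
print(rates)                          # {'group_a': 0.6, 'group_b': 0.35}
print(f"impact ratio = {ratio:.2f}")  # 0.58 -> flagged for review
```

An impact ratio below 0.8 does not by itself establish unlawful discrimination; it simply flags the system for the kind of closer review these assessment requirements envision.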

Further, some states have expressly prohibited those deploying AI systems from using biased data generated by AI in various decision-making processes. For example, Colorado SB 21-169 (2021) prohibits insurers from using consumer data and information gathered by AI systems in a way that discriminates based on race, color or sexual orientation, among other things.

STATE LEGISLATION: ACCOUNTABILITY
Legislation enacted in 12 states ensures that those developing and deploying AI systems are complying with the rules and standards governing AI systems and are being held accountable if they do not meet them.

To ensure accountability, states are incorporating compliance and accountability measures into existing data privacy laws governing AI systems.

Tennessee is one example: HB 1181 (2023) implements various data privacy measures related to “profiling” and requires those developing or deploying AI systems to conduct impact assessments intended to identify foreseeable negative or disparate impacts resulting from the collection and use of consumer data. The bill also gives the Tennessee Office of the Attorney General and Reporter authority to impose civil penalties on those who do not comply with the law.

Washington has yet to pass data privacy laws that explicitly govern AI systems. However, the state has established a working group to study how “automated decision-making systems can be reviewed and periodically audited to ensure they are fair, transparent and accountable.” Such working groups can help state policymakers identify effective compliance and accountability frameworks before incorporating them into AI data privacy laws.

Looking Forward: What Can We Expect to See from States?
Since 2018, the number of AI-related bills proposed and enacted in state legislatures has grown. States such as California, Colorado and Virginia have laid the groundwork for establishing AI-related data privacy laws, as well as measures aimed at enforcing these laws. States can turn to policy recommendations from existing AI-focused task forces and working groups to better understand how to address AI-related issues in their local context.

Drawing on his own experience and work, Perault believes curiosity is among the keys to state leaders’ ongoing AI-related efforts.

“Curiosity means recognizing that there’s an enormous amount we don’t know. Therefore, we need to have an affirmative regulatory position that enables us to learn as we go into generating information. So, curiosity doesn’t mean doing nothing; it doesn’t mean just letting the industry do whatever the industry wants to do. It means using this period to gather information to understand how the products and policies work in practice, so, as we move forward, we can get smarter and smarter about regulation over time.”

— Matt Perault, director at the Center on Technology Policy at the University of North Carolina at Chapel Hill 

ASK CSG
Do you have a policy issue that you would like to know more about? Ask CSG! We’re happy to assist you with your request. Reach us via email at [email protected].
