By Rachel Wright, Policy Analyst

Snapshot: Since 2019, 17 states have enacted 29 bills focused on regulating the design, development and use of artificial intelligence. These bills primarily address two regulatory concerns: data privacy and accountability. Legislatures in California, Colorado and Virginia have led the way in establishing regulatory and compliance frameworks for AI systems.

States face growing challenges governing the design, development and use of artificial intelligence. In 2023, Congress held committee hearings and proposed several bills concerning AI that have yet to pass. This leaves the task of establishing regulatory and compliance frameworks for AI systems to the states.

As laboratories for innovation, states can take many different approaches to regulating AI systems. However, the White House and industry stakeholders urge states to consider several guiding principles to ensure that AI systems are designed, developed and deployed in a way that aligns with democratic values and protects civil rights, civil liberties and privacy.

Guiding Principles for States: Governing the Design, Development and Use of AI

Policy governing the design, development and use of AI should seek to:

  • Ensure that the design, development and use of AI is informed by collaborative dialogue with stakeholders from a variety of disciplines.
  • Protect individuals from the unintended, yet foreseeable, impacts or uses of an unsafe or ineffective AI system.
  • Protect individuals from abusive data practices and ensure that they have agency over how an AI system collects and uses data about them.
  • Ensure that individuals know when and how an AI system is being used and provide users with an option to opt out of an AI system in favor of a human alternative.
  • Protect individuals from discrimination and ensure that AI systems are designed in an equitable way.
  • Ensure that those developing and deploying AI systems are complying with the rules and standards governing AI systems and are being held accountable if they do not meet them.

How Are States Currently Regulating AI Systems?

Over the past five years, 17 states have enacted 29 bills focused on AI regulation that align with these principles. Of these bills, 12 have focused on ensuring data privacy and accountability. This legislation hails from California, Colorado, Connecticut, Delaware, Illinois, Indiana, Iowa, Louisiana, Maryland, Montana, New York, Oregon, Tennessee, Texas, Vermont, Virginia and Washington.

State Legislation: Interdisciplinary Collaboration
Four states — Illinois (HB 3563, 2023), New York (AB A4969, 2023, and SB S3971B, 2019), Texas (HB 2060, 2023) and Vermont (HB 378, 2018) — have enacted legislation that seeks to ensure the design, development and use of AI is informed by collaborative dialogue with stakeholders from a variety of disciplines.

To foster interdisciplinary collaboration, states have established task forces, working groups or committees that study the potential impacts of AI systems on consumers as well as identify potential public sector uses and cybersecurity challenges. These bodies have also been tasked with recommending legislation or regulation to protect consumers.

Texas HB 2060 (2023) is one such example. This bill established an AI advisory council consisting of public and elected officials, academics and technology experts. The council was tasked with studying and monitoring AI systems developed or deployed by state agencies, as well as issuing policy recommendations regarding data privacy and preventing algorithmic discrimination.

State Legislation: Protection from Unsafe or Ineffective Systems
Four states — California (AB 302, 2023), Connecticut (SB 1103, 2023), Louisiana (SCR 49, 2023) and Vermont (HB 410, 2022) — have enacted legislation to protect individuals from any unintended, yet foreseeable, impacts or uses of unsafe or ineffective AI systems.

States have approached this issue by directing state agencies to analyze AI systems currently in use and issue a report to their respective governor regarding potential unintended or emerging effects and potential risks of these systems. As part of these reports, agencies are asked to outline policies or “codes of ethics” that seek to address identified concerns.

Vermont HB 410 (2022) created the Division of Artificial Intelligence within the State Agency of Digital Services. The division is responsible for conducting an inventory of all automated systems currently developed or deployed by the state, as well as identifying any potential adverse impacts on Vermont residents. As part of this effort, the division is required to propose a state code of ethics on the use of AI.

State Legislation: Data Privacy
Eleven states — California (AB 375, 2018), Colorado (SB 21-190, 2021), Connecticut (SB 6, 2022), Delaware (HB 154, 2023), Indiana (SB 5, 2023), Iowa (SF 262, 2023), Montana (SB 384, 2023), Oregon (SB 619, 2023), Tennessee (HB 1181, 2023), Texas (HB 4, 2023) and Virginia (SB 1392, 2021) — have enacted legislation to protect individuals from abusive data practices (i.e., the inappropriate, irrelevant or unauthorized use or reuse of consumer data) and ensure that they have agency over how an AI system collects and uses data about them.

State approaches to ensuring data privacy center on the ability of consumers to opt out of “profiling” when it is used in furtherance of automated decision-making that produces “legal or other similarly significant effects.” Such effects may include unfair or deceptive treatment of consumers; negative impacts on an individual’s physical or financial health; or effects on the provision of financial and lending services, housing, insurance or education, among other things.

State Legislation: Transparency
Three states — California (SB 1001, 2018), Illinois (HB 2557, 2019) and Maryland (HB 1202, 2020) — and New York City (2021/144, 2021) have enacted legislation to ensure that individuals know when and how an AI system is being used.

State approaches to ensuring transparency in the use of AI systems include requiring employers or businesses to disclose when and how an AI system is being used. In some instances, an employer may be required to receive consent from an employee prior to utilizing an AI system that collects data about them.

In Illinois, HB 2557 (2019) requires an employer to notify job applicants before a video interview that AI may be used to analyze the interview and determine their fitness for the position. Employers are also required to provide applicants with advance notice of how an AI system will be used and outline what types of characteristics the system will use to evaluate applicants. Under the bill, employers must obtain an applicant’s consent prior to using the system.

State Legislation: Protection from Discrimination
Three states — California (SB 36, 2019), Colorado (SB 21-169, 2021) and Illinois (HB 0053, 2021) — have enacted legislation to protect individuals from discrimination and ensure that AI systems are designed in an equitable way. This includes algorithmic discrimination, whereby an AI system contributes to the unjustified different treatment of people based on their race, color, ethnicity, sex, religion or disability, among other things.

State legislators have primarily sought to protect consumers from algorithmic discrimination by requiring those that develop or deploy AI systems to assess these systems for potential bias. California SB 36 (2019) requires criminal justice agencies that utilize AI-powered pretrial risk assessment tools to analyze whether they produce disparate effects or biases based on gender, race or ethnicity. Pretrial risk assessment tools are used to determine the likelihood that a defendant will fail to appear in court or carry out new criminal activity, thereby impacting whether they are released or detained.

Further, some states have expressly prohibited those deploying AI systems from using biased data generated by AI in various decision-making processes. Colorado SB 21-169 (2021) prohibits insurers from using consumer data and information gathered by AI systems in a way that discriminates based on race, color or sexual orientation, among other things.

State Legislation: Accountability
Twelve states — California (AB 375, 2018), Colorado (SB 21-190, 2021), Connecticut (SB 6, 2022), Delaware (HB 154, 2023), Indiana (SB 5, 2023), Iowa (SF 262, 2023), Montana (SB 384, 2023), Oregon (SB 619, 2023), Tennessee (HB 1181, 2023), Texas (HB 4, 2023), Virginia (SB 1392, 2021) and Washington (SB 5092, 2021) — have enacted legislation to ensure that those developing and deploying AI systems are complying with the rules and standards governing AI systems and are being held accountable if they do not meet them.

To ensure accountability, states are incorporating compliance and accountability measures into existing data privacy laws governing AI systems. Tennessee HB 1181 (2023) put forth various data privacy measures related to “profiling” and required those developing or deploying AI systems to conduct impact assessments. These assessments are intended to identify foreseeable negative or disparate impacts resulting from the collection and use of consumer data. The bill also gives the Tennessee Attorney General’s office authority to impose civil penalties on those who do not comply with the law.

Washington has yet to pass data privacy laws that explicitly govern AI systems. However, the state has established a working group to study how “automated decision-making systems can be reviewed and periodically audited to ensure they are fair, transparent and accountable.”

Such working groups can help state policymakers identify effective compliance and accountability frameworks prior to inclusion in AI data privacy laws.

Looking Forward: What Can We Expect to See from States?

Since 2018, the number of AI-related bills proposed and enacted in state legislatures has grown. In the coming years, state policymakers can expect to see this trend continue. States such as California, Colorado and Virginia have laid the groundwork for establishing AI-related data privacy laws as well as measures aimed at enforcing these laws. States can turn to policy recommendations from existing AI-focused task forces and working groups to better understand how to address AI-related issues in their local context.

This is the third article in a three-part series on artificial intelligence and its role in the states. The first article, “Artificial Intelligence in the States: Harnessing the Power of AI in the Public Sector” is available at https://csgovts.info/AI_in_the_Public-Sector. The second article, “Artificial Intelligence in the States: Challenges and Guiding Principles for State Leaders,” is available at https://csgovts.info/AI_Challenges_Guiding-Principles.
