By Rachel Wright, Policy Analyst

Artificial intelligence has been ingrained in the everyday lives of Americans for years now. iPhones rely on AI-powered biometrics to unlock our phones. Social media platforms employ complex algorithms to personalize and moderate the content we see. Even email services use AI powered by natural language processing to detect and filter spam.

Today, AI systems and their role in our daily lives continue to advance, with no signs of slowing down. According to an analysis published by OpenAI in 2018, the amount of computational power used in the largest AI training runs has doubled roughly every 3.4 months since 2012. Computing power is one of the main factors driving the advancement of AI, and thus the development of new AI systems.

Challenges of AI for State Governments: Governing the Design, Development and Use of AI
The rapid advancement of AI systems has posed significant regulatory challenges for the states. State policymakers must decide what role the state will play in governing the design, development and use of AI systems, as well as how the state will ensure compliance with these laws. This will require state officials to answer complex policy questions with limited precedent to follow.

For example, both public and private sector employers have begun to use automated systems powered by AI to assist them in evaluating, rating and making other significant decisions about job applicants and employees. These tools can save employers the time and staff effort such tasks would otherwise require; however, some of them have been shown to discriminate against certain candidates, including candidates with disabilities.

The use of automated employment tools alone has prompted the following questions:

  • How can states govern AI systems to protect citizens from algorithmic discrimination?
  • Should employers be required to inform applicants of when and how automated employment tools are being used?
  • How can algorithmic discrimination be prevented when the design, functionality and use of AI systems vary significantly across sectors?

After all, a “one-size-fits-all” regulation may appropriately govern the use of AI in one industry but fit poorly in others.

Guiding Principles for States: Governing the Design, Development and Use of AI
Policymakers can begin to answer these questions by identifying the principles with which the design, development and use of AI systems should align. To guide states in these efforts, the White House Office of Science and Technology Policy issued the Blueprint for an AI Bill of Rights in October 2022. The Blueprint states that AI systems should be designed, developed and deployed according to principles that are aligned with democratic values and protect civil rights, civil liberties and privacy.

To achieve this, legal scholars, AI technologists and the White House have identified the following principles for states to consider:

Interdisciplinary Collaboration — Policy governing the design, development and use of AI should be informed by collaborative dialogue with stakeholders from a variety of disciplines. This includes professional associations, labor unions, private companies and the academic community, among others. To ensure interdisciplinary collaboration lies at the core of AI governance, states can establish interdisciplinary task forces to study AI as it relates to data privacy, potential public sector uses and cybersecurity challenges, among other things.

Protection from Unsafe or Ineffective Systems — Policy governing the design, development and use of AI should seek to protect individuals from unsafe or ineffective systems. This includes proactive protections from unintended, yet foreseeable, impacts or uses of an AI system. To protect against unsafe or ineffective systems, states can establish reporting and independent evaluation requirements regarding systems safety and effectiveness. They can also outline minimum requirements for ongoing monitoring of AI systems, among other things.

Data Privacy — Policy governing the design, development and use of AI should seek to protect individuals from abusive data practices while also ensuring that individuals have agency over how an AI system collects and uses data about them. Abusive data practices include the inappropriate, irrelevant or unauthorized use of data in the design and development of AI systems as well as the reuse of this data. To ensure data privacy, states can require AI systems to be designed with data protections as a default. They can also require AI systems to provide plain-language user consent requests regarding the collection of individual-level data, among other things.

Transparency — Policy governing the design, development and use of AI should seek to ensure that individuals know when and how an AI system is being used. Using this information, an individual should be able to opt out of using the system in favor of a human alternative, if appropriate. To ensure transparency, states can require those who develop or deploy an AI system — such as an automated hiring tool — to notify individuals that the system is being used. They can also require these entities to inform individuals, in accessible, plain language, of how the AI system functions and how its use impacts them.

Protection from Discrimination — Policy governing the design, development and use of AI should seek to protect individuals from discrimination and ensure that AI systems are designed in an equitable way. This includes algorithmic discrimination, whereby an AI system contributes to the unjustified different treatment of people based on their race, color, ethnicity, sex, religion or disability, among other things. To protect individuals from discrimination, states can require AI systems to undergo equity assessments or require that developers use representative data sets when training AI algorithms.

Accountability — Policy governing the design, development and use of AI should seek to ensure that those developing and deploying an AI system comply with the rules and standards governing AI systems and are held accountable if they do not meet them. To ensure accountability, states can establish ongoing reporting and impact assessment requirements for developers, as well as outline clear oversight and enforcement mechanisms for those developing and deploying an AI system.

This is the second article in a three-part series on artificial intelligence and its role in the states. The third article in the series is “Artificial Intelligence in the States: Emerging Legislation.”
