Sovereign Snapshot - Tribal Nations and AI Governance: A Selected Overview of the AI Risk Regulation Landscape


Tana Fitzpatrick, J.D., Director, Native Nations Center for Tribal Policy Research

As of September 2025, no federally recognized tribe (referred to as tribe or Tribal Nation) is believed to have adopted laws regulating the potential risks and benefits posed by artificial intelligence. This Sovereign Snapshot provides a selected overview of laws developed and enacted by other governments, including international, federal, and state laws that regulate artificial intelligence and machine learning (referred to collectively as AI). It also identifies common themes among the selected AI governance laws, reviews the current discussion of AI use and its impacts on Tribal Nations, and concludes with tribal policy considerations.

Note: This article reviews selected statutory laws or legislation and does not discuss adopted policies, regulations, court cases, or executive actions. Information on AI basics, including generative AI, AI essentials, and AI prompting, may be found in Google AI’s “Learn Essential AI Skills.”

Landscape Review: AI Law and Legislation Regulating Risk

European Union

In June 2024, the European Union (EU), composed of 27 member countries, became the first governing body to regulate AI when it enacted the EU AI Act of 2024. The EU Act describes four levels of risk:

  • minimal risk (e.g., spam filters),
  • limited risk (e.g., chatbots),
  • high-risk (e.g., AI use in transportation, job application review, robot-assisted surgery, court ruling preparation), and
  • unacceptable risk (e.g., social scoring, expanding public facial recognition databases).

The EU’s law focuses primarily on regulating the high-risk category. The law strives to balance the potential usefulness of high-risk AI applications against the risks they may pose to society. To achieve this balance, the EU adopted a “regulatory sandbox” approach, providing a controlled environment that encourages innovation while maintaining transparency, regulation, and review processes.

United States

To date, the U.S. has not signed into law any comprehensive legislation that regulates AI risks. However, Congress has enacted laws involving AI. For example, in January 2021, Congress enacted the National Artificial Intelligence Initiative (P.L. 116-283). This law primarily focuses on AI research and workforce development. In August 2022, Congress enacted the CHIPS and Science Act (P.L. 117-167), which also focuses on AI research but with an emphasis on supporting higher education. Notably, tribes are listed throughout both laws as potential stakeholders.

Importantly, both laws amended the National Institute of Standards and Technology (NIST) Act by adding a section titled, “Standards for artificial intelligence.” These additions require the NIST director to support activities that promote the trustworthiness of AI, as well as to develop a voluntary framework for risk management (published by NIST in January 2023).

In the 119th Congress, the President signed the One Big Beautiful Bill Act (P.L. 119-21, July 4, 2025), which includes provisions primarily providing for AI investment but does not regulate risk. Additionally, as of September 12, 2025, Congress had introduced at least 65 other bills covering a range of AI topics: energy, deepfakes, foreign interference, public awareness and education, extreme weather, health, agriculture, economic development, and risk management, such as through the establishment of a federal regulatory sandbox program. However, none of the 65 bills has been enacted to date.

Selected State Laws

In the absence of federal AI regulation, commentators suggest that states are stepping in as the primary regulators of AI use. States are taking a range of governance approaches to regulating AI. (See Orrick’s U.S. AI Law Tracker for state-specific laws.) One approach is to enact targeted AI governance legislation, which focuses on sector- or issue-specific AI bills. For example, states have enacted laws to address AI risks for specific purposes, such as regulating AI-generated deepfakes in political advertisements, AI in healthcare services, and AI use in governmental services, among a wide range of other state governance priorities.

Another, less common, approach is to enact comprehensive AI governance legislation, which, much like the EU AI Act, establishes a broad, overarching framework that provides general oversight and may seek to regulate developers and deployers of AI. To date, at least two states have adopted comprehensive laws governing AI. In May 2024, Colorado became the first state to do so. The Colorado Artificial Intelligence Act (SB24-205):

  • Defines and regulates developers and deployers of high-risk AI systems;
  • Requires deployers to establish AI risk management policies, take reasonable measures to prevent algorithmic discrimination, and provide public notices to consumers, particularly about data usage; and
  • Grants the attorney general exclusive authority to enforce its provisions.

On June 22, 2025, the Texas governor signed into law the Texas Responsible Artificial Intelligence Governance Act (H.B. 149). The act:

  • Prohibits certain AI uses, such as social scoring and the manipulation of human behavior;
  • Establishes a “Regulatory Sandbox Program” to encourage the development and testing of innovative AI systems by providing exemptions for sandbox program participants focused on research, training, and testing; and
  • Grants the attorney general exclusive authority to enforce the act’s provisions.

Tribal Nations and AI

Although no Tribal Nation is known to have enacted laws regulating AI risks to their nation, a top discussion point for tribal consideration is data sovereignty. For example, tribal cultural or traditional information could be used without authorization to train AI systems. Likewise, government employees may input data about tribes into publicly available generative AI tools (e.g., OpenAI’s ChatGPT), such as for grant writing purposes, without specifying that the information provided may not be used for future model training. This practice is also a consideration for regulation. For more information on generative AI and tribal considerations, see the video titled “AI, Data Sovereignty and Tribal Issues.”

Despite the potential risks, AI presents significant opportunities for tribes in healthcare (such as medical charting), education, language and cultural preservation, and the democratization of knowledge, among many other areas. Generative AI holds the promise of opening access to knowledge-based opportunities that may previously have been limited by cultural, social, and financial barriers. For example, AI offers efficiency where tribes operate as commercial entities, such as in gaming or healthcare, and where tribes wish to improve workplace operations and services (e.g., the Cherokee Nation’s AI policy). Further, tribes and their citizens may see increased efficiencies as the intended beneficiaries of certain federal services.

Tribal Policy Considerations

Tribes interested in regulating AI could first consider foundational, nation-centered questions to guide their approach: What does AI mean to us? How do we want to engage with AI? What risks and benefits are we willing to accept?

Next, tribes could consider the common AI legislative approaches of other sovereigns, which have included:

  1. Defining risks to their citizenry,
  2. Adopting a legislative approach that is targeted, comprehensive, or both, and
  3. Encouraging a competitive market that balances innovation with enforcement and penalties.

Under this framework, and in light of the present concerns and opportunities AI presents, tribes may consider legislating on the potential risks AI poses to their communities and governments where AI applications affect culture, language, and data sovereignty. For commercial functions, tribes may consider adopting a “regulatory sandbox” approach that supports innovation while simultaneously monitoring for high-risk usage.


Keywords: Artificial Intelligence, Legislative Landscape, Governance, Sovereignty, AI Risk Management, Policy

Citation: Fitzpatrick, Tana. 2025. “Sovereign Snapshot - Tribal Nations and AI Governance: A Selected Overview of the AI Legislative Landscape.” Native Nations Center for Tribal Policy Research: The University of Oklahoma, September 15. https://www.ou.edu/nativenationscenter/research/sovereign-snapshot-tribal-nations-and-ai-governance.

Published: September 16, 2025

Externally Peer Reviewed by: 

John Hassell, Ph.D., MBA, Associate Professor of Software Development and Integration, Polytechnic Institute, The University of Oklahoma

Christina Kracher, Esq., 2025 Emergence Circle Fellow

M. Alexander Pearl, J.D., Chickasaw Nation Endowed Chair in Native American Law, Professor of Law, College of Law, The University of Oklahoma

Notice of Use Statement: Copyright © 2025 - This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial License (CC BY-NC 4.0). The use, distribution, or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication is cited, in accordance with accepted academic practice. No use, distribution, or reproduction is permitted which does not comply with these terms.

Edited by: See the NNCTPR Research Webpage for a general description of our editing process.


Correspondence: Tana Fitzpatrick, J.D., nnc@ou.edu

Disclaimer: This work has been created on behalf of the Native Nations Center for Tribal Policy Research (NNCTPR), which seeks to create high-quality, non-partisan, neutral research related to Tribal Nations and their citizens. The NNCTPR endeavors to ensure the information presented is authoritative and accurate but makes no claims, promises, or guarantees about the completeness or adequacy of the content contained within this document. Claims expressed in this article do not necessarily represent those of the NNCTPR, its affiliated organizations, or those of the publisher, the editors, and the reviewers.