Regulation of artificial intelligence
Regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence. The regulatory and policy landscape for AI is an emerging issue in jurisdictions worldwide, including for international organizations without direct enforcement power like the IEEE or the OECD.
Since 2016, numerous AI ethics guidelines have been published in order to maintain social control over the technology. Furthermore, organizations deploying AI have a central role to play in creating and implementing trustworthy AI, adhering to established principles, and taking accountability for mitigating risks. In 2024, the European Union adopted a common legal framework for AI, the AI Act.
Background
According to Stanford University's 2025 AI Index, legislative mentions of AI rose 21.3% across 75 countries since 2023, marking a ninefold increase since 2016. U.S. federal agencies introduced 59 AI-related regulations in 2024, more than double the number in 2023. In 2024, nearly 700 AI-related bills were introduced across 45 U.S. states, up from 191 in 2023.

There is currently no broad consensus on the degree or mechanics of AI regulation. Several prominent figures in the field, including Elon Musk, Sam Altman, Dario Amodei, and Demis Hassabis, have publicly called for immediate regulation of AI. In 2023, following the release of GPT-4, Elon Musk and others signed an open letter urging a moratorium on the training of more powerful AI systems. Others, such as Mark Zuckerberg and Marc Andreessen, have warned about the risk of preemptive regulation stifling innovation.
In a 2022 Ipsos survey, attitudes towards AI varied greatly by country: 78% of Chinese citizens, but only 35% of Americans, agreed that "products and services using AI have more benefits than drawbacks". A 2023 Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity. In a 2023 Fox News poll, 35% of Americans thought it "very important", and an additional 41% thought it "somewhat important", for the federal government to regulate AI, versus 13% responding "not very important" and 8% responding "not at all important".
In 2023 the United Kingdom started a series of international summits on AI with the AI Safety Summit. It was followed by the AI Seoul Summit in 2024, and the AI Action Summit in Paris in 2025.
Perspectives
Public administration and policy considerations generally focus on the technical and economic implications of AI and on trustworthy and human-centered AI systems, along with the regulation of artificial superintelligence, the risks and biases of machine-learning algorithms, the explainability of model outputs, and the tension between open source AI and unchecked AI use.

There have been both hard law and soft law proposals to regulate AI. Some legal scholars have noted that hard law approaches to AI regulation face substantial challenges. Among them, AI technology is rapidly evolving, leading to a "pacing problem" in which traditional laws and regulations often cannot keep up with emerging applications and their associated risks and benefits. Similarly, the diversity of AI applications challenges existing regulatory agencies, which often have limited jurisdictional scope. As an alternative, some legal scholars argue that soft law approaches to AI regulation are promising because they offer greater flexibility to adapt to emerging technologies and the evolving nature of AI applications. However, soft law approaches often lack substantial enforcement potential.
Cason Schmit, Megan Doerr, and Jennifer Wagner proposed the creation of a quasi-governmental regulator by leveraging intellectual property rights in certain AI objects and delegating enforcement rights to a designated enforcement entity. They argue that AI can be licensed under terms that require adherence to specified ethical practices and codes of conduct.
Prominent youth organizations focused on AI, such as Encode AI, have also issued comprehensive agendas calling for more stringent AI regulations and public-private partnerships.
AI regulation could derive from basic principles. A 2020 Berkman Klein Center for Internet & Society meta-review of existing sets of principles, such as the Asilomar Principles and the Beijing Principles, identified eight such basic principles: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and respect for human values.

AI law and regulations have been divided into three main topics, namely governance of autonomous intelligence systems, responsibility and accountability for the systems, and privacy and safety issues. A public administration approach sees a relationship between AI law and regulation, the ethics of AI, and 'AI society', defined as workforce substitution and transformation, social acceptance and trust in AI, and the transformation of human-to-machine interaction. The development of public sector strategies for the management and regulation of AI is deemed necessary at the local, national, and international levels and in a variety of fields, from public service management and accountability to law enforcement, healthcare, the financial sector, robotics, autonomous vehicles, the military and national security, and international law.
Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 entitled "Being Human in an Age of AI", calling for a government commission to regulate AI.
In 2025, the UK and US governments declined to sign an international agreement on AI at the AI Action Summit in Paris. The agreement was described as proposing an open, inclusive and ethical approach to AI development, including environmental protection measures. US Vice President JD Vance argued that the agreement would be detrimental to the growth of the AI industry. The UK government added that the agreement "didn't provide enough practical clarity on global governance, nor sufficiently address harder questions around national security".
As a response to the AI control problem
Regulation of AI can be seen as a positive social means to manage the AI control problem, with other social responses such as doing nothing or banning being seen as impractical, and approaches such as enhancing human capabilities through transhumanism techniques like brain-computer interfaces being seen as potentially complementary. Regulation of research into artificial general intelligence (AGI) focuses on the role of review boards, from the university or corporate level up to the international level, and on encouraging research into AI safety, together with the possibility of differential intellectual progress or conducting international mass surveillance to perform AGI arms control. For instance, the 'AGI Nanny' is a proposed strategy, potentially under the control of humanity, for preventing the creation of a dangerous superintelligence as well as for addressing other major threats to human well-being, such as subversion of the global financial system, until a true superintelligence can be safely created. It entails the creation of a smarter-than-human, but not superintelligent, AGI system connected to a large surveillance network, with the goal of monitoring humanity and protecting it from danger. Regulation of conscious, ethically aware AGIs focuses on how to integrate them with existing human society and can be divided into considerations of their legal standing and of their moral rights. Regulation of AI has been seen as restrictive, with a risk of preventing the development of AGI.

Global guidance
The development of a global governance board to regulate AI development was suggested at least as early as 2017. In December 2018, Canada and France announced plans for a G7-backed International Panel on Artificial Intelligence, modeled on the Intergovernmental Panel on Climate Change, to study the global effects of AI on people and economies and to steer AI development. In 2019, the Panel was renamed the Global Partnership on AI.

The Global Partnership on Artificial Intelligence (GPAI) was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology, as outlined in the OECD Principles on Artificial Intelligence. Its 15 founding members are Australia, Canada, the European Union, France, Germany, India, Italy, Japan, the Republic of Korea, Mexico, New Zealand, Singapore, Slovenia, the United States, and the United Kingdom; as of 2023, GPAI had 29 members. The GPAI Secretariat is hosted by the OECD in Paris, France. GPAI's mandate covers four themes, two of which, responsible AI and data governance, are supported by the International Centre of Expertise in Montréal for the Advancement of Artificial Intelligence. A corresponding centre of excellence in Paris will support the other two themes, the future of work, and innovation and commercialization. GPAI also investigated how AI can be leveraged to respond to the COVID-19 pandemic.
The OECD AI Principles were adopted in May 2019, and the G20 AI Principles in June 2019. In September 2019 the World Economic Forum issued ten 'AI Government Procurement Guidelines'. In February 2020, the European Union published its draft strategy paper for promoting and regulating AI.
At the United Nations, several entities have begun to promote and discuss aspects of AI regulation and policy, including the UNICRI Centre for AI and Robotics. In partnership with INTERPOL, UNICRI's Centre issued the report AI and Robotics for Law Enforcement in April 2019 and the follow-up report Towards Responsible AI Innovation in May 2020. At the 40th session of UNESCO's General Conference in November 2019, the organization commenced a two-year process to achieve a "global standard-setting instrument on ethics of artificial intelligence". In pursuit of this goal, UNESCO held forums and conferences on AI to gather stakeholder views. A draft text of a Recommendation on the Ethics of AI of the UNESCO Ad Hoc Expert Group was issued in September 2020 and included a call for legislative gaps to be filled. UNESCO tabled the international instrument on the ethics of AI for adoption at its General Conference in November 2021; it was subsequently adopted. While the UN is making progress with the global management of AI, its institutional and legal capability to manage AGI existential risk is more limited.
Recent research has indicated that countries will also begin to use artificial intelligence as a tool for national cyberdefense. AI is a new factor in the cyber arms industry, as it can be used for defense purposes. Therefore, academics urge nations to establish regulations for the use of AI, similar to the regulations that exist for other military industries.
In recent years, academic researchers have intensified efforts to promote multilateral dialogue and policy development, advocating for the adoption of international frameworks that govern the deployment of AI in military and cybersecurity contexts, with a strong emphasis on human rights and international humanitarian law. Initiatives such as the Munich Convention on AI, Data and Human Rights, which brought together scholars from various academic institutions, have called for a binding international agreement to protect human rights in the age of AI. A key element of such initiatives is identifying common ground between different regional approaches, such as those of the African Union and the Council of Europe.