🤖 Where are the rules for AI? - Legge Zero #111
AI developers gathered in New Delhi to ask governments for global rules. In this issue, we take stock of the state of play worldwide on AI laws.

🧭 TL;DR: here's what we cover in this issue
⏰ Move fast! AI CEOs are calling for international governance, modeled after the International Atomic Energy Agency.
🌐 Everyone agrees (or almost everyone). Guterres, Modi, and Macron agree on the need for international rules and standards, but the U.S. says no. Amnesty International criticizes cooperation between governments and providers, while India sets a Guinness World Record for AI literacy.
🗺️ The rules map. From the European AI Act to Vietnam, and from South Korea to China, we’ve mapped out four distinct regulatory models. You can find them all in the updated version of Legge Zero’s global map.
🇺🇸 The American paradox. There’s no federal AI law, yet nearly 80 proposals are pending in Congress, and 189 state laws have already been adopted across 47 states. Federal deregulation, local hyper-regulation.
😂 To end with a smile. The meme of Altman and Amodei refusing to shake hands.
⏰ “Hurry up!”
“Democratization of AI is the best way to ensure that humanity flourish. On the other hand, centralization of this technology in one company or country could lead to ruin. This is not to suggest that we won’t need any regulation or safeguards. We obviously do, urgently, like we have for other powerful technologies”.
These words were not uttered by a civil rights activist, the AI Act rapporteur in the European Parliament, or any of the authors of this newsletter. They come from a speech by Sam Altman, the CEO of OpenAI, the man who, more than anyone else, has put generative AI into the hands of hundreds of millions of people. In recent days Altman was in New Delhi, where the Indian government hosted India AI Impact 2026, the fourth global AI summit after Bletchley Park (2023), Seoul (2024), and Paris (2025): the first hosted by a country in the Global South, with delegations from over a hundred countries, twenty heads of state including French President Macron (Italy was represented by Minister of Enterprises Urso), and the participation of the CEOs of some of the most important providers and experts from around the world.
This time, Altman revived a long-standing proposal: creating an international body, similar to the IAEA, for global AI coordination. The IAEA, the International Atomic Energy Agency, is the United Nations body that has been overseeing the peaceful use of nuclear energy since 1957. It inspects facilities, verifies compliance with non-proliferation treaties, and can intervene quickly in the event of an emergency. Altman envisions something similar for AI: an authority capable of monitoring the most important labs, conducting safety audits, and responding promptly to emerging risks. This isn't the first time he's proposed it. In May 2023 (a few months after the launch of ChatGPT), in a post co-signed with two other OpenAI co-founders (Greg Brockman and Ilya Sutskever), he wrote that "we are likely to eventually need something like an IAEA for superintelligence efforts". He even suggested a threshold of computational capacity beyond which a system should be subject to international oversight.
However, Altman wasn't the only one discussing rules in New Delhi. In fact, the topic was at the center of many remarks. For example, Demis Hassabis, CEO of Google DeepMind, emphasized the urgent need for shared rules due to the global nature of AI: “this technology is going to affect everyone, it’s digital technology, so it can’t be contained by borders” and “is going to require international dialogue and maybe, ideally, a minimum set of standards that is agreed internationally”.
This is a politically significant fact: the call for regulation is coming from the very top of the major companies leading the development of AI models. This signal can be interpreted from multiple perspectives: marketing (not appearing hostile to rules that currently still seem far off), risk awareness (misinformation, malicious use, impacts on jobs, concentration of technological power), and an interest in preventing regulatory fragmentation that multiplies divergent obligations across jurisdictions. However, there is another possible option, well known in the economics of regulation: a regime of international rules and audits imposes costs that only large players can absorb, potentially solidifying the current competitive status quo. This is known as regulatory capture — the regulator being captured by the regulated — and it’s worth keeping in mind whenever the industry is the one asking for rules.
🌐 Everyone agrees (or almost)
In New Delhi, therefore, the main providers discussed the urgent need for global rules. But what was the response from the institutions?
UN Secretary-General António Guterres warned against concentrating governance in the hands of a few, stating that the future of artificial intelligence cannot be decided by “the whims of a few billionaires” or left to a handful of countries: “AI must belong to everyone”, he said.
For his part, the host – Prime Minister Narendra Modi – stated in his speech that “artificial intelligence must be democratized” so that humans do not become mere data points or raw material for AI. “It must become a tool for inclusion and empowerment, particularly for the Global South”, he later posted on X.

French President Emmanuel Macron echoed Modi’s shared-governance line: “We are determined to continue to shape the rules of the game with our allies, such as India”, he said, confirming Europe’s commitment to work with strategic partners to help define global rules for artificial intelligence.
So, is everyone in agreement? Not quite. From the stage came a voice that couldn’t have been more discordant. Michael Kratsios, director of the White House Office of Science and Technology Policy, stated that the United States “totally rejects global AI governance” because innovation cannot thrive “if it is subject to bureaucracies and centralized controls.” A statement that—for now—has effectively undercut the appeals of the American CEOs in the room, revealing an unexpected divide between Silicon Valley and the Washington administration.
It is therefore unsurprising that the Summit concluded with the adoption of a rather bland document, the New Delhi Declaration. The declaration is a statement of principles that lays out seven pillars, ranging from democratizing AI resources to developing human capital. It also includes a commitment to build a “Trusted AI Commons,” a collaborative global repository of tools and best practices. To be sure, it’s notable that 86 countries signed the Declaration, a significant increase from the approximately 60 that signed the final document of the Paris Summit a year ago. However, the regulatory approach remained very soft, relying solely on voluntary principles (dependent on the goodwill of states and providers) and lacking any binding mechanisms.
That’s also why Amnesty International criticized the Summit for failing to curb the destructive practices of governments and technology companies. Some have noted that the meeting kicked off a new season of multilateralism, where AI giants now stand on an equal footing with sovereign governments. However, civil society organizations lack an equivalent platform, leaving them in a subordinate position within this debate.
🗺️ Where AI rules stand
For now, global governance is still just an aspiration. But there’s an important contextual difference compared with the first Global Summit (the one held in the UK in 2023): several countries have already adopted their own national rules on artificial intelligence, and many others are discussing them right now.
Loyal readers will recall that, back in LeggeZero #28, we launched our project for a global map of AI laws — the AI Global Regulatory Map — which we keep constantly updated. The latest version shows a richer landscape than you might think, but also one that is increasingly fragmented across different models (a fragmentation that could become a hard obstacle to overcome in the effort to build a unified global governance framework).

You can use this image to argue with anyone who claims — during a conference or in a social media post — that “only in Europe” are we writing rules for artificial intelligence. Regulation (EU) 2024/1689 (the much-discussed AI Act) was the first structured regulatory framework in this area, but it’s no longer the only one.
At least 10 countries worldwide have adopted actual laws on artificial intelligence: South Korea, Kazakhstan, Taiwan, Vietnam, Italy, Japan, Peru, El Salvador, Colombia, and Uzbekistan. Add to these China, which so far has issued only sector-specific regulations (and where scholars are working on a proper framework law); Spain, which set up AESIA (the first AI oversight agency in the European Union) with Real Decreto 729/2023; and the United States, with the executive order signed by President Trump in December 2025.
📐 Not all rules are the same
Obviously, simply counting the laws is not enough. Upon closer examination, it becomes clear that not all countries are adopting the same approach. The models are different.
The first approach involves comprehensive and legally binding regulation: a dedicated AI law with obligations, penalties, and a supervisory authority. The European AI Act is the prototype, but it’s not the only example. In the wake of the EU AI Act, South Korea approved the AI Basic Act in January 2025. This act takes a different approach, classifying systems based on their social impact rather than their “risk” (as is the case in the EU). This implies a different perspective on the role of AI within society. While the European model recognizes the risks this new technology can pose and therefore adopts a restrictive and preventive approach, the Korean model maintains a neutral and analytical perspective. So-called “high-impact AI” must meet more stringent requirements regarding human oversight, risk management, and transparency. Like the AI Act, Korean law also places significant emphasis on generative AI, requiring visible labels or specific watermarks to prevent confusion or deception. In practice, the AI Basic Act extends beyond national companies. Its extraterritorial reach requires foreign companies offering AI products or services in the South Korean market, and exceeding certain revenue or user thresholds, to appoint a local representative responsible to authorities for compliance and oversight.
More recently, Taiwan also approved its AI Basic Act, inspired by the European model but without immediate obligations for the private sector (the implementing rules will arrive over the next two years). Similarly, Vietnam has adopted its own AI Law, which – according to some commentators – is the closest match to the European model.
The second legislative model involves regulations that govern AI through general principles, governance, and obligations, but without classifying systems or imposing requirements based on risk or impact levels. This category includes the laws of Peru (one of the first in Latin America), El Salvador, and Uzbekistan. In addition to transparency obligations when AI is used, these regulations generally affirm principles such as the rule that decisions affecting rights and freedoms cannot be based solely on outputs generated by AI. Japan can also be included in this group. In June 2025, the country adopted the AI Promotion Act, a law that outlines general principles without imposing specific obligations or penalties. Instead, it relies on a name-and-shame mechanism – a kind of media pillory – as its sole enforcement tool.
The third model is progressive sectoral regulation, with China being the most significant example. Beijing does not yet have a comprehensive framework law on artificial intelligence, but since 2022 – before the AI Act – it has developed a system of specific regulatory measures that, when combined, cover a large portion of the AI spectrum, including algorithmic recommendation, labeling, child protection, and human-machine interaction. This is a pragmatic and incremental approach: rather than attempting to regulate AI as a whole, it focuses on specific areas deemed worthy of intervention based on technological progress. However, it risks becoming an overly fragmented model, leading some Chinese academics to advocate for a more comprehensive regulatory framework.
Finally, the fourth model involves targeted amendments to existing legislation: rather than creating a standalone AI law, it involves modifying current regulations to account for the impact of artificial intelligence. Colombia, for example, has amended its criminal code with Ley 2502/2025 to introduce a specific aggravating circumstance for fraud committed using deepfakes and AI tools. A comprehensive bill on artificial intelligence is still under discussion in the South American country’s Congress.
In addition to this already complex landscape, at least 15 countries are actively discussing AI laws within their respective institutions – including Brazil, Turkey, Indonesia, Mexico, Argentina, and the United Kingdom. Furthermore, more than 30 other countries have adopted strategies, guidelines, or non-binding soft law frameworks, such as India, Singapore, Australia, Israel, and the United Arab Emirates.
🇺🇸 The American paradox
The United States deserves a separate discussion. Right now there is no federal AI law, as we have recently written. But this vacuum doesn't mean there's no initiative; if anything, the opposite is true.
On the one hand, between 2025 and 2026, nearly 80 AI-related bills—some quite different from one another—were introduced in the U.S. Congress. On the other hand, to date, at least 189 legislative measures have been adopted by 47 of the 50 U.S. states. State legislators have focused most on protecting minors (about 30 laws), protections against AI-generated nonconsensual images (29 laws), regulating electoral deepfakes (22 laws), and the right to opt-out of automated processing (18 laws).
California—and it could hardly be otherwise, given that it’s home to many AI providers—tops the list of the most active states with 29 regulatory measures, followed by New York (14) and Texas (13). So far, a sector-by-sector approach prevails, but some states have tried comprehensive AI governance: among them Colorado (with its AI Act), Connecticut, and Texas (with TRAIGA, the Texas Responsible AI Governance Act).
To give a sense of the scale of the phenomenon, in 2025 alone more than 210 state bills were introduced, covering not only the "usual themes" but also the next frontiers of regulation (such as AI agents and algorithmic pricing).
This is a regulatory environment that creates difficulties for businesses and, at the same time, uncertainty for individuals. An Ipsos survey is telling: among the 30 countries analyzed, the United States ranks last for citizens’ trust in their government’s ability to regulate AI responsibly.
The country that has so far chosen deregulation is also the one where citizens trust institutions the least. This confirms that AI regulation is not—nor should it become—just a bureaucratic exercise or a brake on innovation, but rather a social necessity: the way a country’s leadership confronts one of the greatest revolutions in human history.
While we await a global governance that seems far off, the path – as our map illustrates – is being built from the ground up: country by country, law by law, with varying approaches and at vastly different speeds. It's a messy and fragmented process that, sooner or later, will need to be consolidated.
😂 AI Meme
On the stage of the New Delhi summit, Indian Prime Minister Modi asked the tech leaders to join raised hands, like actors taking a bow at the end of a show: Altman (CEO of OpenAI) and Amodei (CEO of Anthropic and formerly at OpenAI), standing next to each other, raised their fists, stubbornly refusing to touch. It was the most awkward moment of the summit, and it went viral within hours.
The OpenAI social media team, however, tried to jump on the trend, posting a photomontage (made with AI?) of Altman with lobster claws instead of hands. This is a clear reference to the OpenClaw agentic AI project, initially created as a tribute to Claude (Anthropic’s AI) and then acquired by OpenAI just last week.
🙏 Thanks for reading
That’s all for now.
If you enjoyed our newsletter, support us: like, comment and share!