🤖 Our Appeal for the Future of AI (and Humans)
Some of the world's leading AI experts have drafted an appeal on the global governance of AI. In this issue, we explain how it came about, whom it addresses, and what it says.

✊ A special week
September 11, 2025 - San Francisco (California, USA)
Guido Reichstadter, a 45-year-old activist, is standing outside Anthropic's headquarters (the provider of the Claude chatbot), where he has arrived on the ninth day of a hunger strike. He has decided to subsist solely on vitamins and electrolytes, refusing any caloric intake, to protest what he sees as a catastrophic race toward artificial superintelligence. Reichstadter is demanding a meeting with Anthropic's CEO, Dario Amodei, to explain his reasons and to call for a global moratorium on AI development, similar to the 1967 Outer Space Treaty, which banned nuclear weapons in space. His position is simple: “Currently, there is no safe way to build systems capable of exceeding human capabilities. No one has a plan for that.”

Reichstadter's action is inspiring other activists. In London, for example, outside Google DeepMind's headquarters, Michael Trazzi - a 29-year-old French researcher working on AI safety - began a similar hunger strike a few days ago. He says he will end it only when Demis Hassabis (the company's CEO) commits to a moratorium on the development of frontier systems.

September 11, 2025 - Vatican City (Rome)
While these activists are protesting, I am in the Vatican, in the ancient rooms of the Fabric of Saint Peter. For several weeks, I have been part of a working group bringing together experts from around the world to meet in Rome on September 12-13, as part of the World Meeting on Human Fraternity organized by the Fratelli Tutti Foundation. Our task is far from simple: to reflect on “AI and fraternity,” focusing on what it means to remain human in the age of artificial intelligence. After several stimulating online meetings, we decided to meet to write - together - a shared document to be delivered to Pope Leo XIV, who is certainly sensitive to these issues, given that he chose his papal name precisely in relation to AI (as we discussed in LeggeZero #73).

Around a long table in the Vatican, with me, are Paolo Benanti (Luiss professor and former AI advisor to Pope Francis), Yoshua Bengio (professor at the University of Montreal, winner of the 2018 Turing Award, and one of the most cited researchers in the world), Stuart Russell (professor at the University of California, Berkeley, and a leading figure in AI safety), Max Tegmark (MIT professor and president of the Future of Life Institute), Abeba Birhane (cognitive scientist and researcher at the Mozilla Foundation and Trinity College Dublin), Nnenna Nwakanma (Nigerian activist for digital rights and open Internet access), Marco Trombetti (AI expert and entrepreneur, co-founder and CEO of Translated), Jimena Sofía Viveros Álvarez (lawyer and president of the HumAIne Foundation, committed to reducing inequalities), Alexander Waibel (scientist and professor at the Karlsruhe Institute of Technology and Carnegie Mellon University, a pioneer of machine translation), Lorena Jaume-Palasí (co-founder of The Ethical Tech Society), Cornelius Boersch (investor and entrepreneur active in emerging technologies), Valérie Pisano (president of Mila, the Quebec AI Institute), Antal Kuthy (tech entrepreneur), and Riccardo Luna (journalist and innovator - mentioned several times in this newsletter - who coordinates the working group). Geoffrey Hinton (the 2024 Nobel laureate, “the godfather of AI”) and Yuval Noah Harari (a professor and essayist very attentive to the risks of AI) also follow the work online, unable to be in Rome in person.
Line by line, a shared document took shape through intense - at times heated - discussions. Everyone brought their own perspective to the table - from technical progress to ethical reflection, from the regulatory framework to the economic dimension - creating a vibrant, multidisciplinary exchange. It was both a laboratory of inclusion and a valuable human experience: we pared down adjectives, debated the meaning of new terms (such as “superintelligence”), and even argued over commas to find a common lexicon.
At the time, we were unaware of the (peaceful and non-violent) protests by Reichstadter and Trazzi, but we all agreed that artificial intelligence can bring extraordinary opportunities - for scientific progress, medicine, education, and public administration - while also posing serious risks (from job loss to manipulation, from environmental impacts to threats to human well-being).
For this reason, our document ultimately takes the form of an appeal calling on everyone - institutions, companies, the scientific community, and civil society - to support these principles and to begin a global dialogue on how AI can truly “serve all of humanity.”

📜 The contents of Rome's appeal on AI
The document's title is symbolic: “Fraternity in the age of AI - Our Global Appeal for Peaceful Human Coexistence and Shared Responsibility.”
To seize the opportunities of AI while mitigating its costs and risks, we must define fundamental principles and non-negotiable boundaries.
Here are the ones we identified:
Human life and dignity: AI must never be developed or used in ways that threaten, diminish, or disqualify human life, dignity, or fundamental rights.
Human intelligence - our capacity for wisdom, moral reasoning, and orientation toward truth and beauty - must never be devalued by artificial processing, however sophisticated.
AI must be used as a tool, not an authority: AI must remain under human control. Building uncontrollable systems or over-delegating decisions is morally unacceptable and must be legally prohibited. Therefore, the development of superintelligent AI systems (as mentioned above) should not be allowed until there is broad scientific consensus that it can be done safely and controllably, and there is clear and broad public consent.
Accountability: only humans have moral and legal agency; AI systems are, and must remain, legal objects, never subjects. Responsibilities and obligations fall on developers, suppliers, companies, deployers, users, and administrations. Neither legal personality nor “rights” can be granted to AI.
Life and death decisions: Artificial intelligence systems must never be allowed to make decisions about the lives of human beings, especially in military applications during armed conflicts or - in peacetime - in the areas of public order, border control, health care or justice.
Safe and ethical development: Developers must design AI with safety, transparency, and ethics at its core, not as an afterthought. Deployers must consider the context of use and potential harms and are subject to the same safety and ethical principles as developers. Independent testing and adequate risk assessment must be required before deployment and throughout the entire lifecycle.
Responsible design: AI should be designed and independently evaluated to avoid unintentional and catastrophic effects on humans and society, for example through design giving rise to deception, delusion, addiction, or loss of autonomy.
Stewardship: Governments, corporations, and anyone else should not weaponize AI for any kind of domination, illegal wars of aggression, coercion, manipulation, social scoring, or unwarranted mass surveillance.
No AI monopoly: the benefits of AI - economic, medical, scientific, social - should not be monopolized.
No Human Devaluation: design and deployment of AI should make humans flourish in their chosen pursuits, not render humanity redundant, disenfranchised, devalued or replaceable.
Ecological responsibility: our use of AI must not endanger our planet and ecosystems. Its vast demands for energy, water, and rare minerals must be managed responsibly and sustainably across the whole supply chain.
No irresponsible global competition: We must avoid an irresponsible race between corporations and countries towards ever more powerful AI.
⚖️ The rules we need
These may seem like very general principles, but - if they were actually recognized - they would have practical and legal implications that are far from secondary. Here are a few examples.
Prohibition of legal personality for AI: The appeal states unequivocally that only human beings can hold rights and responsibilities (moral and legal). Consequently, no artificial intelligence system should be considered a subject of law; in other words, AI cannot be granted legal personality. This principle would prevent, for example, attributing rights (or duties) similar to those of a person or a company to an algorithm, avoiding dangerous gaps in responsibility. If an AI causes damage, those responsible must remain its designers, providers, or human users: the chain of responsibility cannot be broken by attributing the “blame” to a machine. Enshrining this prohibition means avoiding scenarios in which companies or politicians could use AI as a shield to deflect responsibility (by the way, have you read about the “AI Minister” stunt in Albania?). Furthermore, reiterating that an AI cannot have rights of its own protects human dignity: it avoids equating algorithms and robots with human beings and prevents future legal conflicts (the right to the preservation of an AI could conflict, for example, with some rights of the people with whom the AI comes into contact).
Limits in the military and the judiciary: The appeal calls for a ban on the use of AI to make decisions that affect the lives of human beings. In particular, we affirm that no AI system should be able to autonomously decide to kill or harm: neither in war (think of lethal autonomous weapons), nor in peacetime in law enforcement, border control, health care, or the judicial system. This principle, if recognized, would imply, for example, banning “killer robots,” autonomous weapon systems capable of opening fire without meaningful human control. Similarly, it would mean prohibiting an algorithm from single-handedly deciding the outcome of a trial or the medical treatment of a patient. Such decisions must remain in human hands, because they involve fundamental values and judgments of conscience that a machine must not make.
International treaty on AI: recognizing that AI is a global challenge, the appeal calls for a binding international treaty that goes beyond the current fragmentation of the global regulatory framework and sets non-negotiable limits on the development and use of dangerous AI systems. It also proposes establishing an independent supervisory authority with effective powers to monitor compliance with these limits. In practice, imagine a supranational body - modeled, for example, on the International Atomic Energy Agency (IAEA) in the nuclear field - that can inspect large AI laboratories and ensure that no one is crossing certain risk thresholds. The treaty should establish shared prohibitions and guidelines: for example, it could prohibit the creation of artificial superintelligences until there are guarantees of their controllability and safety. A global agreement of this kind would also promote a level playing field among nations and companies, preventing a reckless race toward ever more powerful AI for the sake of supremacy: indeed, one of the established principles is to avoid irresponsible global competition in advanced AI.
Right to live without AI: an innovative element among the listed principles is the explicit recognition of the right of human beings to live without artificial intelligence. We believe that the ultimate expression of freedom, in the age of algorithms, is not being forced to use or be subjected to AI systems against your will. In practical terms, this principle should translate into the protection of “human only” spaces and services: from the right to interact with a human being (for example, in public services) to the possibility of receiving an education or medical care free from algorithmic intermediation, if desired. By including this proposal, we wanted to affirm a concept of freedom of choice in the face of innovation: technological progress must not cancel out the option of a lifestyle not pervaded by AI. This principle completes the vision of ethical AI: not only developing the right technologies, but also making room for those who prefer not to use them. In legislative terms, it could result in provisions obliging public and private bodies to provide non-AI alternatives for essential services and not to penalize those who opt for them. It is a topic of growing importance in the debate on digital rights, and our appeal makes it a banner of civilization, linking it to the concept of the common good: the development of AI must go hand in hand with respect for minorities and for people's individual choices. This is how it is done in democracies.
✍️ So what? What's next?
This appeal is not the manifesto of a small group, nor a document addressed only to believers of a particular faith. It is the result of joint work by secular scientists, experts, entrepreneurs, activists, and lawyers from various nations. None of us, alone, could have expressed these concepts with such force and completeness; together, however, we articulated a vision that is neither technophobic nor naively enthusiastic about AI, but oriented toward governing change wisely.
In short: neither apocalyptic nor blindly optimistic - and certainly not naive. We are well aware that many of these principles will be difficult to implement and will face many obstacles. However, we are convinced of what we have written, and we know that the more these ideas are shared, the greater the chance they will become policy and regulation.
Immediately after publication, the appeal received notable support from Nobel laureates Maria Ressa and Giorgio Parisi, as well as from Professor Miguel Benasayag and will.i.am (best known as a founding member of the Black Eyed Peas, and also an entrepreneur in the AI sector).
That is obviously not enough. Soon, anyone will be able to join publicly: we are preparing a platform where scholars, professionals, and ordinary citizens in every country can sign the appeal, adding their voices to this global initiative.
The idea is to gather as many supporters as possible and then present their signatures to international institutions as evidence of a broad, cross-cutting movement of opinion.
In the face of enormous challenges like those posed by AI, no one suffices on their own. We need the determined activist fasting in the street to awaken CEOs' consciences; the Nobel-winning researcher lending scientific authority; the jurist translating values into norms; and the humanist reminding us of the indispensable dignity of the person. And, of course, we need leaders who can take up this appeal with concrete and urgent action.
🎙️ A voice message from…
In this issue's voice message, Riccardo Luna describes the journey toward the creation of the global appeal on fraternity in the age of artificial intelligence - the result of a 100-day process that brought together leading thinkers from around the world. The initiative, launched by the Vatican's Fratelli Tutti Foundation after the election of Pope Leo, followed the Pope's announcement of a forthcoming encyclical on AI, which he described as being as disruptive as the Industrial Revolution.
The Foundation invited a diverse group of experts to draft the appeal, ensuring inclusivity across gender, geography, and disciplines. Among those who joined were AI pioneers Geoffrey Hinton and Yoshua Bengio, along with the historian Yuval Noah Harari, who took part in the intensive debate and drafting sessions with dedication.
The declaration, presented on September 12, was carefully worded to reflect shared concerns over artificial intelligence, including the rise of advanced AGI systems, and to call for AI to be treated as a global common good. The following day, two Nobel laureates added their endorsement, amplifying the message.
The voice message closes with a call to action: artificial intelligence is a transformative force that must be harnessed responsibly as a common good for humanity, sparking a global conversation on ethics, inclusion, and fraternity. An act of faith in the democratic process.
😂 AI Meme
If the term “superintelligence” sounds futuristic and the existential risks of AI seem like sci-fi, remember the movie Her. In 2013, that too was “only” science fiction.
🙏 Thanks for reading!
That's all for now.
If you liked our newsletter, support us: give us a like, leave a comment and share it!



