It should have been the Paris AI Security Summit
(and shout-out to the UK AISI who changed their name while I was writing this)
PARIS, once the epicentre of genuine revolutionary fervour, now plays host to technocratic pageantry. The world descended on the Grand Palais for the AI Action Summit. Little action was taken.
Laodicea
A summit is a ritual. Like the Papal Mass or a royal coronation, its purpose is not to do but to signify, and so to anoint authority in public. Roles and routines are important to rituals; there are times and places, set pieces and movements, and a bustling of activity which should happen so smoothly and so gracefully that we forget ourselves. Rehearsed months in advance,1 the Paris AI Action Summit was to be such a ritual and a reaffirmation of the great mythos of our time: peace through process, prosperity through paperwork.
Days before the summit, a draft statement leaks, titled ‘Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet’. Disappointment abounds. Without reading it, you already know what it says; you could have written such thoughtless, boilerplate tripe yourself. A committee man — some stochastic suit — fills in the blanks: “inclusive multi-stakeholder dialogues”, “the need for global reflection”, “digital divides”. In the end, 60 countries signed the agreement. They swore solemn vows to talk more in the future and do nothing today.
Unless something changes, all 60 made the right call. They can do little more than speak for the sake of speaking. This coalition of runners-up possesses neither the burning belief of accelerationism nor the cold conviction of AI safetyism; they lack direction and so they cannot lead. Lukewarm is as good as it gets.
Britain and America refused to sign.2
Whale Fall
Ritualised inaction is no surprise to those who have seen recent European politics. Its grand institutions and ideals — once new and vibrant, even daring — are gutless, bloodless, and lifeless. Europe now nourishes an ecosystem of nodding policymakers, blind strategists, and feeble regulatory bodies.
What underpins this ritual is a lingering faith in a particularly EU take on soft power. The great myth of our time says that cultural and institutional dominance can be won by gentle persuasion, by laws, guidelines, and frameworks. Peace through process, prosperity through paperwork. This view likely had its fullest articulation in Francis Fukuyama’s “End of History”, which saw the European Union as the prototype of a post-sovereign order: a community bound by transnational rule of law, not national interest.
“The End of History was never linked to a specifically American model of social or political organization… I believe that the European Union more accurately reflects what the world will look like at the end of history than the contemporary United States. The EU’s attempt to transcend sovereignty and traditional power politics by establishing a transnational rule of law is much more in line with a ‘post-historical’ world than the Americans’ continuing belief in God, national sovereignty, and their military.”3
It was an age when progress was measured by glowing pride in the number of signatories to well‑intentioned agreements, when it seemed plausible that codes of conduct and universal values could sweep aside the messiness of realpolitik.
Europe’s faith remains strong. Its AI Act,4 now bolstered by a €200-billion InvestAI fund,5 aims to champion a moral and regulatory gold standard. Yet the very ethic of soft governance — consensus-building, ethics committees, and compliance reviews — has paralysed the innovation Europe desperately needs. For decades, Europe has been the first to regulate and the last to innovate. More funding (which is mostly just repurposed funding) won’t change that, especially when that funding amounts to less than half of America’s $500-billion commitment to frontier labs. What shining beacon of morality can lead from so far behind?
French President Emmanuel Macron’s dramatic pivot from regulation to AI boosterism signals his acknowledgement of hard power. He knows Europe could catch up6 — there is certainly no lack of brilliance in France7 — but it is constrained by the very bureaucratic mindsets that once embodied “progress.” The contradiction reveals a continent torn between ideals and the salient realities of a competitive global arena where principles yield to strategic imperatives.
Melos
Beneath the muscular ‘God, guns, and government’ approach, America plays a more elusive game. A game that, on its surface, appears similar to its heyday of government-led technological leadership, but in truth has turned that old formula on its head. Once, the US government marshalled vast resources through NASA, DARPA, the Pentagon, and what is now known as the Department of Energy, giving birth to Silicon Valley in the process. Back then, the state took the lead in public-private partnership, outpacing adversaries and pouring money into R&D and national security.
Now, however, Silicon Valley leads and Washington follows. Yes, the White House continues to issue statements and fund projects, but frontier innovation occurs in the labs and boardrooms largely beyond government reach. By and large, American tech giants follow their own orders, not national directives and certainly not the collective good.
This inversion poses a new kind of risk. Superficially, it resembles a renewed public–private partnership, except now the private sphere shapes the agenda. Rather than the government issuing grand challenges — like the space race or early internet research — it is the tech companies that define both the what and the how. Federal agencies hover on the periphery, hoping to corral breakthroughs into line with the ‘public interest’, and occasionally even the public’s interest. Too often, that alignment appears more rhetorical than real. The result can be what some label a feudal arrangement, in which corporate lords govern their own fiefdoms, and the nominal Sovereign — if it wants to remain relevant — must curry favour with them while managing distressed courtiers.
This subtler brand of American power is still formidable, not least because the capital, expertise, and entrepreneurial culture of Silicon Valley remain peerless, ultimately backed by the world’s deepest capital markets, largest economy, and strongest military. But the dislocation between the national interest and corporate agendas raises pressing questions: Whose sovereignty is exercised in the AI age? Who decides which risks to accept and where to draw ethical lines? How can the public shape the direction of technology if elected representatives struggle to regulate — let alone direct — the fluid and exponentially growing might of private innovators?
For now, Washington seems content. But America may eventually have to reckon with something Europe has yet to realise: a government that does not directly lead in strategic technology may one day find it can no longer protect its own interests, let alone define the rules of the game. The strong will do what they can and the weak will suffer what they must.
Perfidy
Britain joined America in refusing to sign the AI Summit statement. This was strategic pragmatism. Britain stands at a crossroads that encapsulates its own long and complicated history: once the largest empire, it spread its language, laws, and culture around the world. That empire was built and largely maintained on hard power, from the Royal Navy’s global reach to gunboat diplomacy that forced open trade routes and initiated the end of the slave trade. Yet Britain also exported culture, moral reforms, and legal traditions — an early form of soft power that was, ultimately, backed by ironclad force. It was an empire of myth and muscle.
After the Second World War, as empires dissolved, Britain leaned into its cultural clout: the Commonwealth, English as the lingua franca of commerce and science, the City of London’s financial heft. American military might acted as a convenient backstop. The UK could broadcast culture and champion noble principles — of democracy, human rights, and open societies — without bearing the full weight of global hard-power responsibilities.
But AI is changing the dynamics. For Britain, the old model of playing second fiddle to America’s hardware advantage or to Europe’s regulatory regime looks increasingly untenable. Languishing under the weight of poor economic conditions (and perhaps poorer governance8), Britain risks being squeezed between American market dominance and European bureaucratic inertia. Instead, the UK must ask whether it still has the will and capacity to regain a measure of comparative strategic advantage and influence.
This dilemma is particularly acute because British institutions continue to uphold an “ethical leadership” brand, including leading AI safety summits (Bletchley). But if the new global reality demands forging real capacities — owning the compute, funding high-risk R&D, and adopting robust security measures — then words and committees alone cannot suffice. Britain’s centuries-old memory of empire, and the hard edge that once sustained it, might now intersect uneasily with a post-imperial culture steeped in managerialism and self-effacement.
Leviathan
Regulation and security are not the same thing. Security is not consumer rights, market access, or commerce; it is power and control, or the lack of it. A sovereign that fails to protect its citizens loses its claim to power. Security is about ensuring that AI remains under the control of safe states — not of poorly incentivised private interests, not of hostile adversaries, and not, in the worst case, of the AI itself.
States do not dull their own blades for the sake of harmony. They will regulate AI only where they see a defensive advantage in doing so, where agreement makes them stronger rather than weaker. No nation will constrain itself out of goodwill. No great power will surrender its lead in AI development unless it is sure that its rivals will do the same or that restraint is in its strategic interest.9 A state will only come to the table when it knows that, without an agreement, the alternative is worse.
Crucially, states are not monoliths or living abstractions; they are people in power. Leaders stay in office by maintaining a winning coalition — the group that must be rewarded or placated for them to survive politically.10 Current policy and activism fundamentally fail to address that group and its strategic interests. If we continue down our current path, AI governance will remain a theatre of contradictions: governments smiling on stage at summits while private firms push AI to its limits. The world will continue to talk more and do less.
Furthermore, if AI safety fears are well-founded, then AI is not merely a tool to be wielded, but a force that, unchecked, may slip from human control altogether. This must be communicated differently. AI safety asks, ‘What harm might we do to others?’ AI security asks, ‘What harm might be done to us?’ The former is a moral argument, the latter is a strategic one, and in the age of realism, only the latter will drive cooperation. AI must be counted among the threats to a winning coalition and as a direct threat to states. AI is not just another contested technology — it is a sovereignty risk, a security event,11 and a potential adversary in its own right.
Cooperation will be built on recognising the calculus of self-interest, threats to power, and the fact that once an advantage is lost, it is rarely reclaimed.
It should have been the Paris AI Security Summit.
“A dedicated team has been working for almost a year to ensure the Summit’s success.” — AI Action Summit Press Kit
“In a brief statement, the UK government said it had not been able to add its name to it because of concerns about national security and ‘global governance.’” — BBC
Francis Fukuyama, speaking to The Guardian in 2007.
Von der Leyen appears to agree, too: “Global AI leadership is still up for grabs.”
France has the second-most Fields Medals and the most per capita. Mistral is a French company. Europe has some of the best universities for engineering, maths, and government. Really, there are plenty of strengths.
It remains to be seen how Labour performs, but the calamity of end-stage Conservative government was a sight to behold.
OpenAI pays lip service to safety while announcing updates on Twitter without warning and without change logs. Anthropic owns the ‘ethical AI’ space, but it’s not obvious to me this is for any other reason than differentiating an inferior product (if benchmarking is reliable, of course).
Bruce Bueno de Mesquita and Alastair Smith, The Dictator’s Handbook: Why Bad Behaviour is Almost Always Good Politics.
Dario Amodei of Anthropic (which just partnered with the UK AISI): “International conversations on AI must more fully address the technology’s growing security risks.”