A Coalition For The Future
If we can keep it
Here’s another guest post inspired by The Curve. This one is by Anton Leicht, a Visiting Scholar with the Carnegie Endowment’s Technology and International Affairs team. Anton writes about the political economy of AI progress on his excellent Substack, Threading the Needle. He writes about the possibilities of cooperation that come into view at an event like The Curve, and the need to extend that spirit beyond the boundaries of the conference. (Note that the Golden Gate Institute for AI is not endorsing specific policy proposals.)
Arriving in the Bay Area, it’s often hard not to feel like you’ve been missing out on something profound. It’s suggested to you in myriad ways that only here do you get to partake in some hidden conversation – about how technology, AI, and thereby the world will unfold. And that only here can the true deliberations about the how and the why and the what-for happen. Nowhere is that feeling stronger than at The Curve – a three-day conference bringing together leading figures in frontier AI: policy thinkers and lab researchers, safetyists and accelerationists, writers and readers, developers and power users. Everyone there has something interesting to say about AI, many carry real influence, some even outright power – and so it’s easy to feel the spark of “something happening”. This essay offers some reflections on how we could keep that spark.
The mood at The Curve has a way of drawing you in. Many of my fellow attendees have written of their own experiences at this conference. Kylie Robison describes checking into the conference venue – and Yoshua Bengio, godfather of AI, is next in line. Afra Wang mentions looking across the table, seeing Ben Buchanan, President Biden’s former Special Advisor on AI, and surmising that this is “the room where it happens”. To me, even more striking than the makeup of the conversations was the common ground they repeatedly managed to find – between all corners of the frontier AI policy conversation, thoughtful exchange and meaningful progress felt genuinely possible. It’s the repeated exposure to moments like these that makes me share many attendees’ appraisal: that The Curve catches lightning in a bottle, and it’s easy to walk away from it feeling optimistic about the state of AI policy.
The Curve and the World Outside
The AI world outside The Curve has gotten uglier since last year’s conference – since the state of play Dean Ball dubbed the AI Republic of Letters. As a result, I found myself noticing quite a contrast between The Curve and the world outside. My piece the week before had been on the politically thorny issue of child safety – where I expressed my worry that harms to children were prone to be instrumentalised in ways that make the AI conversation worse, leading to solve-nothing policies designed to capitalise on grief and public attention. On the one hand, The Curve was a welcome reprieve from that: attendees would of course disagree on whether the child safety issue was a big problem or which policy would be suitable to address it – but few were tempted to take cheap shots or score easy political wins rather than furthering the substantive questions underneath.
On the other hand, I found myself wondering how The Curve’s sophistication might endure as the world outside grows grittier. The discourse outside the walls of Lighthaven, The Curve’s secluded Berkeley venue, cares less and less for nuance. It increasingly pulls toward discussing what plays best in 30-second clips of Senate hearings or topics that intersect most obviously with voters’ anxieties come the midterms. Trends like these have often drowned out nuanced policy conversations on emerging technologies once they’ve become interesting to the mainstream and susceptible to political projection. In the face of that threat, The Curve – as a group of people united by a fundamental understanding of this technology and a desire to see its potential realized – faces a risk. It could get swept away by politics and break down entirely into the factions of the political fights to come: its safety advocates part of a broader pro-regulation coalition, its advocates for addressing current harms aligned with anti-tech sentiment, its pro-market and techno-optimist forces subscribed to naive accelerationism. That risk is real, but not insurmountable. We can still address it by consolidating around shared beliefs and commitments, and by negotiating eye-to-eye around our differences.
Deals, Defections and Distractions
This admittedly vague prescription becomes concrete in a policy question that dominated many backroom conversations at The Curve: Federal preemption. The issue on the table was a “grand bargain” across the breadth of The Curve’s attendees: the safetyist types get safety-focused federal regulation of frontier AI models, while the accelerationists get a broader federal preemption of the state-level laws they detest – in short, a deal setting up the necessary guardrails to let us speed ahead. In many ways, I feel like the debate around this proposal encapsulates the dilemma of The Curve.
On the one hand, I truly believe a deal like this is fundamentally good. It brings together the attendees’ best impulses: their shared faith in the prosperity and progress that new technology can bring, their shared allegiance to a better future. A deal like this would create a cross-cutting coalition opposed to the political grifters and growing technosceptical forces. And for all the political uncertainties and coalitional risks, I did not talk to many people at The Curve who would not put this deal on the books if they could. Facing the prospect of uglier AI politics from 2026 onward, a deal would let today’s incumbents establish a framework – while they still can.
But internal conflicts threaten this way forward. The Curve is not made up of one tribe or one clique, and its attendees frequently find themselves on opposite ends of important policy discussions. Distrust runs deep: can we really trust the other side with a deal? Do they really want this, or are they just out to get us? Or maybe we can get an even better outcome by pulling a fast one on the other camp in the process? It’s very hard to surmount instincts like these under ideal circumstances. And we’re far from ideal circumstances: The Curve’s attendees are a heterogeneous mix, many of whom were on opposite sides in last year’s battle over California’s SB-1047. People remember the sometimes-ugly past policy fights. As their broader influence wanes, they sometimes take solace in the prospect of a fight they can win, even if it is only within a small corner of the broader conversation.
Surmounting this, I believe, is a matter of recognizing the threat of mutual marginalization, and responding by joining forces wherever possible. Don’t mistake that advice for a sentimental sense of “everyone was so nice, and I wish they all worked together” – of course the world is more complicated than that. But the narcissism of small differences has a way of playing tricks on your mind, and getting ahead of it begins with naming it and recognizing its pervasive influence. Translated into policy terms, that means that “beating the accelerationists” or “beating the safetyists” should not qualify as a primary policy objective for either side in the political fights to come. If you take away anything at all from this conference, it should be this: think twice before unleashing your PAC onto someone almost as optimistic about technology as yourself, or calling up the spirits of populist backlash against the few political elements that share your sense of AI’s transformative potential.
Back to Reality
I finish this piece a few weeks after The Curve, after pieces of mine and others on the prospects of a deal, and many conversations and discussions. That allows me to end by asking: how is all that going? In the weeks since The Curve, we’ve seen the threat and promise of both paths ahead in equal part. A divisive public statement calling for the prohibition of superintelligence has confirmed many observers’ biggest concerns about the safety movement; vociferous social media posts about Anthropic and California’s SB-53 leave just as many observers doubting the accelerationist crowd’s inclination to compromise. But we’ve also seen rare signs of interest in agreement and movement towards common ground. The White House’s Sriram Krishnan reduced the heat through a thoughtful post on his disagreements with the safety movement and ways to reduce them. Many safety advocates have engaged in kind. It’s to The Curve’s extraordinary credit that one of Krishnan’s asks was more conversations about AI scenarios between advocates of fast AI timelines and a comparative skeptic like Sayash Kapoor – just what had happened at an in-depth three-hour session at the conference.
While I write these final lines, quite a lot is happening all at once. People from different camps are meeting and having conversations, in Berkeley and on Capitol Hill. Deals are being tested, options being scoped. And everyone is gearing up: coming up with favourable political wedges, funneling donations to candidates, arming and aiming their super-PACs. I am not certain which way all this will go. But I do know what I think we should do: try to find common ground rather than trying to win the fight within the walls of Lighthaven. I also know that The Curve is a force pulling us in the right direction: it encourages just the right conversations and often succeeds in getting just the right people in just the right room. If, by next year’s conference, we look back on this year as one of détente and rapprochement, The Curve will have played a vital part. I, for one, will spend that year working to keep the conversations intact, the channels open, and our worst instincts at bay. I look forward to meeting again next year, believing, still, that we can get this right.
Thanks again to Anton Leicht for helping to place The Curve in a larger context. For more of Anton’s writing, check out his newsletter, Threading the Needle.