Announcing the Golden Gate Institute for AI
And why "Am I Stronger Yet?" is now "Second Thoughts"
This is the newsletter formerly known as “Am I Stronger Yet?”. Read on for an explanation of the new name.
I once heard Jeff Bezos say that an entrepreneur is someone who’s never satisfied. That’s been the story of my career. It was dissatisfaction that led me to launch the startup that became Google Docs – my Writely co-founders Sam, Claudia, and I were fed up with constantly emailing Word files back and forth. And it’s dissatisfaction with the state of AI discourse that is motivating my latest venture (a nonprofit this time!), the Golden Gate Institute for AI.
What am I so dissatisfied about?
It’s Impossible To Make Sense of What’s Being Written About AI
Do you feel like you’re able to keep up with developments in AI? I don’t. The impossibility of keeping up is a running joke among insiders.
If you’re following events, you’ll have seen the launch of AI 2027, a detailed scenario of how AI may unfold. As you might guess from the title, the analysis projects the arrival of ASI – artificial superintelligence – in as little as three years. The highly qualified authors include Daniel Kokotajlo, an ex-OpenAI employee who wrote a prescient forecast of AI progress in 2021, and superforecaster Eli Lifland.
Meanwhile, Dwarkesh Patel’s always-excellent podcast just dropped an interview with two equally qualified researchers from Epoch AI, titled “AGI is Still 30 Years Away”.
These are two of the best sources of information available regarding AI timelines, and they’re starkly contradictory. Is transformative AI 3 years away, or 30? We’re left to our own devices to decide what to believe.
Pick any relevant topic, and you’ll find an equally confusing barrage of contradictory takes. There is an enormous amount of good work going into analysis of AI capabilities, impacts, and policy solutions. But these questions are so complex, evolving so rapidly, and tied into so many subjects of expertise, that it’s impossible to keep up.
I have a 40-year background in tech. I spend a big part of every day following developments. And I don’t understand how AI 2027’s projection of superintelligence in 3 years relates to the Epoch researchers’ projection of human-level AGI in 30. That’s why I’m frustrated.
When I think about the potential consequences of all this confusion, my frustration escalates to fear.
This Impacts Everything
AI sits at an unfortunate intersection. It’s moving too quickly for expert consensus to emerge or laypeople to keep up, and it’s simultaneously very high stakes.
The potential applications of AI are so numerous they’re hard to even summarize. It could revolutionize health care, turbocharge the economy, and provide a personalized full-time tutor to every child… if we don’t cripple it with unnecessary restrictions. It could also disrupt labor markets, unleash a wave of bioterrorism, and enable surveillance states the likes of which Orwell could never have imagined… if we don’t find ways to head that off.
Already, we’re beginning to see positive effects, such as startups developing software much more quickly; negative effects, like automation of cyberattacks; and policy actions, such as the ban on exports of high-end chips to China.
Policymakers, corporate leaders, civil society, and others need to understand what’s going on. When the experts are talking past one another, decision-makers can’t navigate. I’m worried that we’ll miss opportunities, stumble into disasters, and generally fumble this enormously impactful technological transformation. Confusion is leading to unnecessary fights and bad decisions, such as a failure to act on basic transparency measures that would help inform future action.
I’ve been searching for a way to do something about the problem. It turns out that we can make a big contribution just by getting the right people talking to one another – in the right environment.
The Curve Showed The Way
One of our core activities will be an annual conference, The Curve. This was held for the first time last November (predating the Golden Gate Institute), bringing together several hundred people from AI labs, DC think tanks, Silicon Valley startups, AI safety organizations, academia, and elsewhere – groups that don’t often mix.
It was a smash success. Topics ran the gamut from applications to zero-days (cybersecurity exploits), from timelines to risk management frameworks, from economic benefits to national security. New York Times writer Kevin Roose said “It felt like an event where history was happening.” Senior White House policy advisor Dean Ball participated in a debate on liability for AI models, summarizing his experience: “I had a great time at [The Curve]. Excellent group of people, and much high quality conversation/debate.” AI policy consultant Dave Kasten wrote:
The Curve led to some of the best conversations on AI policy I've ever had. I feel like I understand in much more detail the inside view perspectives of folks ranging from Jack Clark at Anthropic to Daniel Kokotajlo (formerly of OpenAI) to Dean Ball of Mercatus. Just overall one heck of a useful and fun conference.
Eric Gastfriend, head of Americans for Responsible Innovation:
One of my favorite conferences I've been to. The mix of people from different "tribes" / ideologies made it much more fascinating than the usual conferences of people reinforcing each others' ideas. The quality of the participants was extremely high and it was surprisingly easy to talk to major players in the field.
I could go on, but I’ll finish by linking to Nathan Young’s list of things he changed his mind about, as a result of talking to people from tribes he doesn’t normally get to mingle with. The Curve showed that there is an enormous opportunity to advance discussions of AI by making connections between people and groups that don’t normally talk.
To my delight, the woman who founded and ran The Curve, Rachel Weinberg, is now my co-founder for the new Golden Gate Institute for AI. I’m equally delighted to be joined by Taren Stinebrickner-Kauffman, who has spent her career at the intersection of technology and social impact, from running a social impact VC fund to building an AI management consulting practice for nonprofits. Together, we’re going to be making The Curve into an annual event, hosting other events in the Bay Area, DC, and elsewhere, and publishing accessible analysis of developments in AI.
Wait, So Your Solution is “Get People Talking To Each Other”?
Sometimes no one is doing a thing because it’s too obvious. We keep finding evidence that this is one of those times; there’s a lot of room to add value to the AI discourse just by putting people in a room together. The Curve was highly productive. We’ve already held several smaller events that were also very successful. Enthusiasm has been high.
We don’t just put people in a room. We select a topic, curate a high-quality group, and do our homework so we can facilitate a productive, goal-oriented discussion. As often as possible, we’ll publish the results of these conversations, such as this post on the range of human activities that AIs aren’t yet much good at.
That gets to our other focus: accessible analysis – taking ideas that are circulating within the AI community, and presenting them for a broad audience without dumbing them down. This is what I’ve been trying to do in this newsletter, and now we’ll have the resources to do more of it. We’ll draw on the discussions that take place at our events, as well as synthesizing other published work. For example, we’re working on a post that will highlight the contrasting ideas in AI 2027, Situational Awareness, AGI is Still 30 Years Away, and AI as Normal Technology.
This is the motivation for the new newsletter title: Second Thoughts. When there’s a new development in AI, we won’t be the first to write about it. Our role is to synthesize the myriad takes that inevitably emerge, and present a coherent picture, finding the common threads and identifying fundamental questions that underlie disagreements. In other words, our job is to help you make sense of the tumult. I say “our” – you’ll be seeing Taren and Rachel writing here as well.
These two activities – expert convenings, and accessible analysis – go together. The discussions we facilitate will help us publish nuanced, accurate overviews of confusing topics. The people best positioned to explain what’s happening in and around AI are the people doing the work. When we reach out to them, they’re eager to participate.
(For more about the confusion that surrounds AI, and the power of cooperation to combat it, see Grounding the Conversation About AI.)
We’ll be focusing on four broad topics:
Timelines & Capabilities – how rapidly will AI development advance?
Economic Impacts – how quickly will AI be adopted, and what impact will this have on the economy? How can we ensure AI creates broad-based economic benefits?
Democracy and Governance – how must democratic and other key institutions adapt to the challenges and opportunities that AI brings?
Realizing Benefits – what can we do to unlock and facilitate adoption of beneficial uses of AI?
Meeting the Challenges of the AI Transition
We spend most of our time in bubbles, surrounded by people who share our viewpoint, our incentives, our field of expertise. AI touches many different bubbles, and not enough information flows between them. If we are to make sense of the changes AI is bringing, if we are to meet the challenges of the AI age, we will need to collaborate with people we don’t normally come into contact with. It won’t happen by default.
That’s where we come in. The Golden Gate Institute for AI will be bridging disciplines, convening experts, hosting events, and publishing overviews and reconciliations. We’ll be standing on the shoulders, not of giants, but of the myriad individuals and organizations that are leading the charge to develop AI and make sense of how it will fit into the world. In this way, we can help the world meet the challenges of the AI transition, promoting security, democracy, and shared prosperity.
When there are seven conflicting takes on a key question, we want to be the organization you can turn to to make sense of it. Because, goddamn it, I want to make sense of it too.
Thanks to Taren and Rachel for being my collaborators in writing this post… and in everything else we’re doing.
Help Us Out!
You can support our mission by spreading the word. I always end these posts with Subscribe and Share buttons, because Substack is very disappointed in me if I don’t. This time, though, I really mean it. The quality and quantity of information in this newsletter are going to increase, and we’d like to reach as many people as possible. “Please like, share, and subscribe” is trite, but there’s a reason people say it. If you think what we’re doing is worthwhile, please recommend that people subscribe to Second Thoughts. You can also follow me on Twitter or LinkedIn for Golden Gate updates, or sign up here to join our events and announcements list.
Finally, we’d love to hear from you! Drop us a line at info@goldengateinstitute.org.