FOR OUR POSTERITY

Hi, I'm Leopold Aschenbrenner. I recently founded an investment firm focused on AGI, with anchor investments from Patrick Collison, John Collison, Nat Friedman, and Daniel Gross.

Before that, I worked on the Superalignment team at OpenAI.

In a past life, I did research on economic growth at Oxford's Global Priorities Institute. I graduated as valedictorian from Columbia at age 19. I originally hail from Germany and now live in the great city of San Francisco, California.

My aspiration is to secure the blessings of liberty for our posterity. I'm interested in a pretty eclectic mix of things, from First Amendment law to German history to topology, though I'm pretty focused on AI these days.

Follow me on Twitter. You can email me here.

Featured Posts

SITUATIONAL AWARENESS: The Decade Ahead

Virtually nobody is pricing in what's coming in AI. I wrote an essay series on the AGI strategic picture: from the trendlines in deep learning and counting the OOMs, to the international situation and The Project.

Dwarkesh podcast on SITUATIONAL AWARENESS

My 4.5-hour conversation with Dwarkesh. I had a blast!

Weak-to-strong generalization

A new research direction for superalignment: can we leverage the generalization properties of deep learning to control strong models with weak supervisors?

Nobody’s on the ball on AGI alignment

Far fewer people are working on it than you might think, and even the alignment research that is happening is very much not on track. (But it’s a solvable problem, if we get our act together.)

Burkean Longtermism

People will not look forward to posterity, who never look backward to their ancestors.

My Favorite Chad Jones Papers

Some of the very best, and most beautiful, economic theory on long-run growth.

Europe’s Political Stupor

On the European obsession with America, the dearth of the political on the Continent, and the downsides of homogeneity.

The Risks of Stagnation (Article for Works in Progress)

Human activity and new technologies can be dangerous, threatening the very survival of humanity. Does that mean economic growth is inherently risky?

Recent Posts

Superalignment Fast Grants

We’re launching $10M in grants to support technical research towards the alignment and safety of superhuman AI systems, including weak-to-strong generalization, interpretability, scalable oversight, and more.

Response to Tyler Cowen on AI risk

AGI will effectively be the most powerful weapon man has ever created. Neither “lockdown forever” nor “let ‘er rip” is a productive response; we can chart a smarter path.

Want to win the AGI race? Solve alignment.

Society really cares about safety. Practically speaking, the binding constraint on deploying your AGI could well be your ability to align your AGI. Solving (scalable) alignment might be worth lots of $$$ and key to beating China.

What I've Been Reading (June 2021)

Religion, faith and the future, level vs. growth effects, the Cuban Missile Crisis, science fiction, and more.

Benjamin Yeoh Interviews Me (Podcast)

Covering what Tyler gets wrong about existential risk, economic growth, declining fertility rates, Germany's "tall poppy syndrome," and more.

The Economics of Decoupling

America’s economic dependence on China creates a security vulnerability. But tariffs are royally ineffective at mitigating this vulnerability. I consider the underlying informational problem.

Against Netflix

Too many great minds waste away their time watching Netflix. Worse, we have made that culturally acceptable. For a TV-temperance movement.