
Response to Tyler Cowen on AI risk

AGI will effectively be the most powerful weapon man has ever created. Neither “lockdown forever” nor “let ‘er rip” is a productive response; we can chart a smarter path.

Want to win the AGI race? Solve alignment.

Society cares deeply about safety. Practically speaking, the binding constraint on deploying your AGI may well be your ability to align it. Solving (scalable) alignment could be worth a great deal of money, and could be the key to beating China.

Nobody’s on the ball on AGI alignment

Far fewer people are working on it than you might think, and even the alignment research that is happening is very much not on track. (But it's a solvable problem, if we get our act together.)