Regulating the existential dangers of AI in light of Molochian game theory

Posted by admin
Some think the misaligned incentives of Moloch-style multipolar traps mean it will be extremely hard to avoid a race to build superintelligent AIs that might pose existential threats to humanity. Indeed, this was a key theme of a recent podcast by Liv Boeree with Daniel Schmachtenberger. The recent blog post by Sam Altman and OpenAI on the governance of superintelligence likewise suggests they believe this technological race means we should assume someone is going to build a superintelligence, and so we need to prepare to live in such a world.

In this video I argue for a different perspective: regulating the most dangerous existential threats might not suffer from a multipolar trap, because the incentives of companies and countries line up with the societal-level goal of avoiding an intelligence explosion. That would still leave many other dangers from AI, but we might at least be able to delay the intelligence explosion, and thereby the advent of superintelligence.

Chapters:
00:00 - Intro
01:12 - Moloch and multipolar traps
04:52 - Moloch and the dangers of AI
10:04 - Preventing an intelligence explosion
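As a rough illustration of the game-theoretic point (the payoff numbers below are my own illustrative assumptions, not taken from the video), the sketch encodes the race to superintelligence as a symmetric two-player game. Without regulation, racing strictly dominates pausing and the only equilibrium is mutual racing, even though mutual pausing is better for both players: the classic multipolar trap. With an assumed regulatory penalty on racing, pausing becomes the dominant strategy and the equilibrium flips.

```python
# Minimal sketch of a two-player "race to superintelligence" game.
# Payoffs are illustrative assumptions chosen to make the structure clear.

ACTIONS = ["pause", "race"]

# payoffs[row_action][col_action] = (row player's payoff, column player's payoff)
# Multipolar trap: racing dominates, yet (race, race) is worse than (pause, pause).
TRAP = {
    "pause": {"pause": (3, 3), "race": (0, 4)},
    "race":  {"pause": (4, 0), "race": (1, 1)},
}

# With an assumed regulatory penalty of 3 applied to racing,
# pausing becomes the best response whatever the other player does.
REGULATED = {
    "pause": {"pause": (3, 3), "race": (0, 1)},
    "race":  {"pause": (1, 0), "race": (-2, -2)},
}

def best_response(game, opponent_action):
    """Row player's payoff-maximizing action against opponent_action."""
    return max(ACTIONS, key=lambda a: game[a][opponent_action][0])

def pure_nash_equilibria(game):
    """All action pairs where each player best-responds to the other.

    Exploits the symmetry of the games above, so the column player's
    best response can be computed from the row player's payoffs.
    """
    return [
        (r, c)
        for r in ACTIONS
        for c in ACTIONS
        if r == best_response(game, c) and c == best_response(game, r)
    ]

if __name__ == "__main__":
    print("Multipolar trap equilibria:", pure_nash_equilibria(TRAP))  # [('race', 'race')]
    print("With regulation:", pure_nash_equilibria(REGULATED))       # [('pause', 'pause')]
```

The second matrix is the video's thesis in miniature: if regulation changes payoffs enough, avoiding the race stops being a multipolar trap at all.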
Posted July 2, 2023
