Well that’s one way to get attention. According to a statement issued today by a bunch of leading names in AI, “mitigating the risk of extinction from AI should be a global priority.” Right. It’s hard to imagine anyone out there disagrees—who but the most extreme misanthrope wants to extinguish the human race? The question, then, is why the very people responsible for AI’s development are suggesting it should be a priority to stop the new technology from killing off every human. Aren’t they the best people to ensure that doesn’t happen?
Yes and no. AI scientists want governments’ help, including via an “ambitious global coordination,” according to Dan Hendrycks, the director of the Center for AI Safety, which published the statement. Otherwise there’s a danger of “unchecked competitive pressures” leading to “bad outcomes,” he told me. That makes sense. But let’s face it: Given the difficulty climate activists have had in getting nations to coordinate a response to global warming, and the current chill in relations among the U.S., China, and Russia, it will take more than this statement to bring about a globally coordinated regulatory response to AI.