It’s no secret politicians are worried about artificial intelligence. And while AI executives suggest they’re just as concerned, they may have another agenda in creating that perception.
Recently proposed federal legislation with cheery names like the “Block Nuclear Launch by Autonomous Artificial Intelligence Act of 2023” or the “Artificial Intelligence and Biosecurity Risk Assessment Act” suggests a possible future in which criminals use AI to launch nuclear bombs or create chemical weapons. And AI leaders including OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei are fanning the flames by warning of such dangers.
But there may be more to it. Consider, for instance, the more than 1,400 submissions that have poured in from companies, academic groups and individuals in recent months in response to the Biden Administration’s proposals for regulating AI. The responses, analyzed by AI research company Generally Intelligent and shared with The Information, confirmed a truth as old as time: Companies weighing in on AI are looking out for themselves (not that there’s anything wrong with that).