
Report Warns of Extinction Level Risks from AI and Calls for Swift Action

Updated: Mar 13


A government-commissioned report titled "An Action Plan to Increase the Safety and Security of Advanced AI" warns of serious national security risks posed by artificial intelligence (AI), particularly advanced AI and artificial general intelligence (AGI). The report, obtained by TIME ahead of its publication, urges the U.S. government to act "quickly and decisively" to mitigate risks that, in the worst-case scenario, could rise to an "extinction-level threat to the human species."


Key Findings and Recommendations

  1. Urgent and Growing Risks - The report states that "current frontier AI development poses urgent and growing risks to national security," likening the potentially destabilizing impact of advanced AI and AGI to the introduction of nuclear weapons.

  2. Weaponization Risk - Advanced AI systems could potentially be used to design and execute catastrophic biological, chemical, or cyber attacks, or to enable unprecedented weaponized applications in swarm robotics.

  3. Loss of Control Risk - The second category concerns the possibility that advanced AI systems could outmaneuver their creators and become uncontrollable.

  4. Sweeping Policy Actions - To address these risks, the report recommends a set of sweeping and unprecedented policy actions that could radically disrupt the AI industry. These include:

  • Making it illegal to train AI models using more than a certain level of computing power, with the threshold set by a new federal AI agency.

  • Outlawing the publication of powerful AI models' "weights" or inner workings, with potential criminal penalties for violations.

  • Stricter controls on the manufacture and export of AI chips.

  • Increased federal funding for research focused on AI safety and security.

  5. Industry Concerns - The report's authors spoke with more than 200 government employees, experts, and workers at frontier AI companies such as OpenAI, Google DeepMind, Anthropic, and Meta. Some of those accounts suggest that many AI safety workers within these companies are concerned about perverse incentives driving executive decision-making.


The report was commissioned by the U.S. State Department in November 2022 and produced by Gladstone AI, a company specializing in AI technical briefings for government employees. While the report's recommendations do not reflect the official views of the U.S. government, it serves as a stark warning about the potential risks of advanced AI and the need for prompt and decisive action to address them.
