The US AI Safety Institute is on shaky ground

One of the only U.S. government offices dedicated to evaluating the safety of artificial intelligence is in danger of being dismantled if Congress does not choose to authorize it.

The U.S. AI Safety Institute (AISI), a federal government body that studies risks in AI systems, was created in November 2023 as part of President Joe Biden's AI Executive Order. AISI operates within NIST, the National Institute of Standards and Technology, an agency of the Department of Commerce that develops guidance for the deployment of various categories of technologies.

But while AISI has a budget, a director, and a research partnership with its counterpart, the U.K. AI Safety Institute, it could be wound down with a simple repeal of Biden's executive order.

"If another president comes into office and rescinds the AI executive order, they would dismantle AISI," Chris McKenzie, senior director of communications at Americans for Responsible Innovation, an AI lobbying group, told TechCrunch. "(Donald) Trump has promised to repeal the AI executive order. So Congress formally authorizing the AI Safety Institute would ensure its continued existence regardless of who's in the White House."

Beyond securing AISI's future, formally authorizing the office could also lead to more stable, long-term funding for its initiatives from Congress. AISI's budget currently stands at around $10 million, a relatively small amount given the concentration of major AI labs in Silicon Valley.

"Congressional appropriators tend to give higher budget priority to entities formally authorized by Congress, on the basis that those entities have broad buy-in and are around for the long term, rather than being the priority of a single administration," McKenzie said.

In a letter sent today, a coalition of more than 60 companies, nonprofits, and universities called on Congress to enact legislation codifying AISI before the end of the year. The signatories include OpenAI and Anthropic, both of which have signed agreements with AISI to collaborate on AI research, testing, and evaluation.

Both the Senate and the House have introduced bipartisan bills to authorize AISI's activities. But the proposals have faced some opposition from conservative lawmakers, including Sen. Ted Cruz (R-Texas), who has called for the Senate version of the AISI bill to roll back diversity programs.

To be sure, AISI is a relatively weak organization from an enforcement perspective; its standards are voluntary. But think tanks and industry coalitions, as well as tech giants like Microsoft, Google, Amazon, and IBM, all of which signed the aforementioned letter, see AISI as the most promising avenue toward AI benchmarks that could form the basis of future policy.

There is also concern among some interest groups that letting AISI lapse would risk ceding AI leadership to foreign nations. At the AI Summit in Seoul in May 2024, international leaders agreed to form a network of AI safety institutes comprising agencies from Japan, France, Germany, Italy, Singapore, South Korea, Australia, Canada, and the European Union, in addition to the U.K. and the U.S.

"While other governments are quickly moving ahead, members of Congress can ensure the U.S. does not fall behind in the global AI race by permanently authorizing the AI Safety Institute and providing certainty for its critical role in advancing AI innovation and adoption in the United States," said Jason Oxman, president and CEO of the Information Technology Industry Council, a trade association for the IT industry, in a statement. "We urge Congress to heed today's call to action from industry, civil society, and academia to pass the necessary bipartisan legislation before the end of the year."

TechCrunch has an AI-focused newsletter! Sign up here to get it in your inbox every Wednesday.
