US laws regulating artificial intelligence have proven elusive, but there may be hope

Can the United States meaningfully regulate AI? It’s not at all clear yet. Policymakers have made progress in recent months, but they have also faced setbacks, illustrating how difficult it is to pass laws that impose guardrails on technology.

In March, Tennessee became the first state to protect voice artists from unauthorized AI cloning. This summer, Colorado adopted a tiered, risk-based approach to AI policy. And in September, California Governor Gavin Newsom signed dozens of AI safety bills, a few of which require companies to disclose details about their AI training.

But the United States still lacks a federal AI policy comparable to the EU’s AI Act. Even at the state level, regulation continues to face significant obstacles.

After a long battle with special interests, Governor Newsom vetoed SB 1047, a bill that would have imposed broad safety and transparency requirements on companies developing AI. Another California bill, targeting the spread of AI deepfakes on social media, was put on hold this fall pending the outcome of a lawsuit.

However, there is reason for optimism, according to Jessica Newman, co-director of the AI Policy Hub at UC Berkeley. Speaking on a panel about AI governance at TechCrunch Disrupt 2024, Newman noted that many federal laws may not have been written with AI in mind, yet still apply to it, such as anti-discrimination and consumer protection legislation.

“We often hear about the United States being this kind of ‘Wild West’ compared to what’s happening in the European Union, but I think that’s exaggerated, and the reality is more nuanced than that,” Newman said.

Newman pointed out that the FTC has forced companies that surreptitiously collected data to delete their AI models, and is now investigating whether AI startups’ deals with big tech companies violate antitrust regulations. Meanwhile, the Federal Communications Commission has declared AI-generated robocalls illegal and has floated a rule requiring that AI-generated content in political ads be disclosed.

President Joe Biden has also tried to put certain AI rules on the books. Nearly a year ago, Biden signed an AI executive order that promoted the voluntary reporting and benchmarking practices many AI companies were already choosing to implement.

One result of the executive order was the creation of the U.S. AI Safety Institute (AISI), a federal body that studies risks in AI systems. Housed within the National Institute of Standards and Technology, AISI has research partnerships with major AI labs such as OpenAI and Anthropic.

However, AISI could be wound down by a simple rescission of Biden’s executive order. In October, a coalition of more than 60 organizations called on Congress to enact legislation codifying AISI before the end of the year.

“I think we all, as Americans, share an interest in making sure that we mitigate the potential downsides of technology,” said AISI Director Elizabeth Kelly, who also sat on the panel.

Is there hope for comprehensive AI regulation in the United States? The failure of SB 1047, which Newman described as a “light touch” bill shaped with input from industry, is not exactly encouraging. Authored by California State Senator Scott Wiener, SB 1047 was opposed by many in Silicon Valley, including prominent technologists such as Meta’s chief AI scientist, Yann LeCun.

For his part, Wiener, another Disrupt panelist, said he couldn’t have crafted the bill differently, and he’s confident that broad AI regulation will ultimately prevail.

“I think this paves the way for future efforts,” he added. “We hope we can do something that brings more people together, because the reality that all the big labs have already acknowledged is that the risks [of AI] are real and we want to test for them.”

Indeed, Anthropic warned last week of AI catastrophe if governments do not implement regulation within the next 18 months.

Opponents, meanwhile, have doubled down on their rhetoric. Last Monday, Khosla Ventures founder Vinod Khosla called Wiener “completely ignorant” and “unqualified” to regulate the true risks of AI. And Microsoft and Andreessen Horowitz issued a joint statement rallying against AI regulations that might affect their financial interests.

However, Newman posits that pressure to standardize the growing patchwork of state-by-state AI rules will eventually produce a stronger legislative solution. Absent consensus on a model of regulation, state policymakers have introduced nearly 700 pieces of AI legislation this year alone.

“My feeling is that companies don’t want a patchwork regulatory environment where every state is different, and I think there’s going to be increasing pressure to get something at the federal level that provides more clarity and reduces some of that uncertainty,” she said.
