a16z VC Martin Casado explains why many AI regulations are so wrong

The problem with most attempts to regulate AI so far is that lawmakers have focused on some mythical future version of AI instead of truly understanding the new risks that AI actually introduces.

So argued Martin Casado, a general partner at venture firm Andreessen Horowitz, to a crowd at TechCrunch Disrupt 2024 last week. Casado, who leads a16z’s $1.25 billion infrastructure practice, has invested in AI startups such as World Labs, Cursor, Ideogram, and Braintrust.

“Transformative technologies and regulation have gone hand in hand for decades, right? So the odd thing about all the AI discourse is that it seems to have come out of nowhere,” he said. “They’re kind of trying to conjure up new regulations without drawing on those lessons.”

For example, he said, “Have you actually seen the definitions of AI in these policies? Like, we can’t even define that.”

Casado was among a sea of voices in Silicon Valley that rejoiced when California Gov. Gavin Newsom vetoed the state’s proposed AI governance law, SB 1047. Among other things, the bill called for a so-called kill switch for very large AI models, that is, a way to shut them down. Those who opposed the bill said it was so poorly worded that, instead of saving us from some fictional future AI monster, it would simply confuse and hamstring California’s hot AI development scene.

“I routinely hear founders balk at moving here because of what it signals about California’s attitude toward AI: that we prefer bad legislation based on sci-fi concerns rather than tangible risks,” he posted on X two weeks before the bill was vetoed.

Although that state law is dead, the fact that it existed at all still bothers Casado. He worries that more bills, constructed the same way, could materialize if politicians decide to pander to the general population’s fears of AI rather than govern what the technology is actually doing.

He understands AI technology better than most. Before joining the famed venture capital firm, Casado founded two other companies, including the networking infrastructure company Nicira, which he sold to VMware for $1.26 billion a little over a decade ago. Before that, Casado was a computer security expert at Lawrence Livermore National Laboratory.

He says many of the proposed AI regulations did not come from, and were not championed by, the people who understand AI technology best, including the academics and companies building AI products.

“You have to have a notion of marginal risk that’s different,” he said. “Like, how is AI today different from someone using Google? How is AI today different from someone just using the internet? If we have a model of how it’s different, then you have some notion of the marginal risk, and then you can apply policies that address that marginal risk.”

“I think we’re still a bit early to be grabbing for a set of regulations before we really understand what we’re going to regulate,” he said.

The counterargument, raised by many in the audience, was that the world didn’t really see the kinds of harms the internet or social media could do until those harms were upon us. When Google and Facebook launched, no one knew they would come to dominate online advertising or collect so much data on individuals. No one understood things like cyberbullying or echo chambers when social media was young.

Advocates of AI regulation now often point to these past circumstances and argue that these technologies should have been regulated sooner.

Casado’s response?

“There is a robust regulatory regime in place today that has been developed over 30 years,” he said, one that is well equipped to craft new policies for AI and other technologies. Indeed, at the federal level alone, regulatory bodies include everything from the Federal Communications Commission to the House Committee on Science, Space, and Technology. When TechCrunch asked Casado on Wednesday, after the election, whether he stood by that view, that AI regulation should follow the path already carved out by existing regulatory bodies, he said he did.

But he also believes that AI should not be targeted because of issues with other technologies. The technologies causing the problems should be targeted instead.

“If we got it wrong with social media, you can’t fix it by putting it on AI,” he said. “The people pushing for AI regulation say, ‘Oh, we got it wrong with social media, so we’ll get it right with AI,’ which is a nonsensical statement. Let’s go fix it in social media.”


