Many feared that the 2024 election would be affected, and perhaps even decided, by AI-generated misinformation. While some was found, there was far less of it than anticipated. But don’t let that fool you: the threat of disinformation is real; you’re just not the target.
At least, that’s according to Oren Etzioni, a longtime AI researcher whose nonprofit, TrueMedia, has its finger on the pulse of generated disinformation.
“There is, for lack of a better word, a diversity of deepfakes,” he told TechCrunch in a recent interview. “Each one serves its own purpose, and some we’re more aware of than others. Let me put it this way: for every one you actually hear about, there are a hundred that aren’t aimed at you. Maybe a thousand. It’s really only the tip of the iceberg that makes it to the mainstream press.”
The truth is that most people, and Americans more than most, tend to assume that what they experience is what everyone else experiences. That isn’t true, for a lot of reasons. But in the case of disinformation campaigns, America is actually a hard target, given its relatively well-informed population, readily available factual information, and journalism that is trusted at least most of the time (despite all the noise to the contrary).
We tend to think of deepfakes as something like a fake video of Taylor Swift doing or saying something she never did. But the really dangerous deepfakes aren’t the ones of celebrities or politicians; they’re the ones depicting situations and people that can’t be so easily identified and countered.
“The biggest thing people don’t get is the variety. I saw one today of an Iranian plane over Israel,” he said, describing something that never happened but that can’t easily be refuted by anyone who isn’t on the ground there. “You don’t see it because you’re not on the Telegram channel, or in certain WhatsApp groups, but there are millions out there.”
TrueMedia offers a free service (via web and API) for identifying images, video, audio, and other media as fake or real. It’s not a simple task, and it can’t be fully automated, but the organization is slowly building a foundation of ground-truth material that feeds back into the process.
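To make the API side of that concrete, here is a minimal sketch in Python of what querying such a media-verification service might look like. The endpoint URL, request fields, and response shape are illustrative assumptions, not TrueMedia’s documented interface:

```python
# Hypothetical sketch of calling a TrueMedia-like verification API.
# The URL, auth scheme, and JSON fields below are assumed for illustration.
import requests

API_URL = "https://api.example-verifier.com/v1/analyze"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"  # placeholder credential


def check_media(media_url: str) -> dict:
    """Submit a publicly reachable media URL and return the service's verdict."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"url": media_url},
        timeout=30,
    )
    resp.raise_for_status()
    # Assumed response shape, e.g. {"verdict": "likely_fake", "confidence": 0.92}
    return resp.json()


print(check_media("https://example.com/suspicious-clip.mp4"))
```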
“Our primary mission is detection. The academic benchmarks for evaluating fake media were surpassed long ago,” Etzioni explained. “We train on things uploaded by people all over the world; we see what the different vendors say about it and what our models say about it, and we come to a conclusion. As a follow-up, we have a forensics team that does a deeper, more thorough, and slower investigation, not on all the items but on a significant fraction of them, so we have a ground truth. We don’t assign a truth value unless we’re quite sure of it. We can still be wrong, but we’re substantially better than any other single solution.”
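The pipeline Etzioni describes, combining vendor scores with in-house models and withholding a verdict unless confidence is high, might look conceptually like the following simplified sketch. The detector names, weights-by-averaging rule, and thresholds are assumptions for illustration, not TrueMedia’s actual logic:

```python
# Conceptual sketch of an ensemble-plus-abstain verdict, as described above.
# Detector names and thresholds are illustrative, not TrueMedia's.

def aggregate_verdict(scores: dict[str, float],
                      fake_threshold: float = 0.85,
                      real_threshold: float = 0.15) -> str:
    """Combine per-detector fake probabilities (0.0-1.0) into one conclusion.

    Items the ensemble is unsure about are routed to human forensics
    rather than being assigned a truth value.
    """
    mean_score = sum(scores.values()) / len(scores)
    if mean_score >= fake_threshold:
        return "fake"
    if mean_score <= real_threshold:
        return "real"
    return "escalate_to_forensics"  # no verdict unless we're quite sure


print(aggregate_verdict({"vendor_a": 0.97, "vendor_b": 0.91, "in_house": 0.88}))
# -> "fake"
print(aggregate_verdict({"vendor_a": 0.60, "vendor_b": 0.35, "in_house": 0.55}))
# -> "escalate_to_forensics"
```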
That primary task is in service of measuring the problem, in three key ways Etzioni identified:
- How much is there? “We don’t know. There’s no Google for this. You see various indications that it’s pervasive, but it’s extremely difficult, perhaps impossible, to measure accurately.”
- How many people see it? “That’s easier, because when Elon Musk shares something, you see that 10 million people have viewed it. So the number of eyeballs is easily in the hundreds of millions. I see items every week that have been viewed millions of times.”
- How effective is it? “This is probably the most important one. How many voters didn’t go to the polls because of the fake Biden calls? We’re just not set up to measure that. The Slovakia campaign (a disinformation effort that targeted a presidential candidate there in February) came at the last minute, and then he lost. That one may well have tipped that election.”
All of this work is in progress, he stressed, and some of it has only just begun. But you have to start somewhere.
“Let me make a bold prediction: Over the next four years, we will become much more adept at measuring this,” he said. “Because we have to. Now we’re just trying to cope.”
As for industry and technology efforts to make generated media more identifiable, such as watermarking images and text, they’re harmless and perhaps helpful, but they don’t even begin to solve the problem, he said.
“The way I’d put it is, don’t bring a watermark to a gunfight.” Voluntary standards like these are useful in collaborative ecosystems where everyone has a reason to use them, but they offer little protection against malicious actors who want to avoid detection.
It all sounds pretty dire, and it is. But the most important election in modern history just took place without much in the way of AI shenanigans. That isn’t because generative misinformation isn’t commonplace, but because its purveyors didn’t feel it was necessary to deploy it. Whether that scares you more or less than the alternative is entirely up to you.