The Biden Deepfake Robocall Is Only the Beginning

“In American politics, disinformation has unfortunately become commonplace. But now, misinformation and disinformation coupled with new generative AI tools are creating an unprecedented threat that we are ill-prepared for,” Clarke said in a statement to WIRED on Monday. “This is a problem both Democrats and Republicans should be able to address together. Congress needs to get a handle on this before things get out of hand.”

Advocacy groups like Public Citizen have petitioned the Federal Election Commission to issue new rules requiring political ad disclosures similar to what Clarke and Klobuchar have proposed, but the agency has yet to make a formal decision. Earlier this month, FEC Chairman Sean Cooksey, a Republican, told The Washington Post that the commission plans to make a decision by early summer. By then, the GOP will likely have already chosen Trump as its nominee, and the general election will be well under way.

“Whether you are a Democrat or a Republican, no one wants to see fake ads or robocalls where you cannot even tell if it’s your candidate or not,” Klobuchar told WIRED on Monday. “We need federal action to ensure this powerful technology is not used to deceive voters and spread disinformation.”

Audio fakes are especially pernicious because, unlike faked photos or videos, they lack many of the visual signals that might help someone identify that they’ve been altered, says Hany Farid, a professor at the UC Berkeley School of Information. “With robocalls, the audio quality on a phone is not great, and so it is easier to trick people with fake audio.”
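Farid’s point is easy to demonstrate: a traditional phone line resamples a voice to 8 kHz and squeezes it into a roughly 300–3,400 Hz passband, discarding much of the high-frequency detail where synthetic speech tends to betray itself. Below is a minimal sketch of that degradation, assuming Python with numpy and scipy; the file names are hypothetical and the code is an illustration, not a forensic tool.

```python
# Toy illustration: pass studio-quality audio through the narrowband
# filtering of a legacy phone line. Assumes a mono 16-bit WAV input;
# "cloned_voice.wav" is a hypothetical file name.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, resample_poly, sosfilt

rate, audio = wavfile.read("cloned_voice.wav")
audio = audio.astype(np.float64)

# Downsample to the 8 kHz sampling rate used by traditional telephony.
phone = resample_poly(audio, 8000, rate)

# Keep only the ~300-3400 Hz passband of the public switched telephone network.
sos = butter(4, [300, 3400], btype="bandpass", fs=8000, output="sos")
phone = sosfilt(sos, phone)

wavfile.write("robocall_quality.wav", 8000,
              np.clip(phone, -32768, 32767).astype(np.int16))
```

By the last line, everything above roughly 3.4 kHz is gone, and with it many of the subtle artifacts a careful listener might otherwise use to flag a cloned voice.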

Farid also worries that phone calls, unlike fake posts on social media, are more likely to reach an older demographic that’s already susceptible to scams.

“One might argue that many people figured out that this audio was fake, but the issue in a state primary is that even a few thousand votes could have an impact on the results,” he says. “Of course, this type of election interference could be carried out without deepfakes, but the concern is that AI-powered deepfakes make these campaigns more effective and easier to carry out.”

Concrete regulation has largely lagged behind, even as deepfakes like the one used in the robocall become cheaper and easier to produce than ever before, says Sam Gregory, program director at Witness, a nonprofit that helps people use technology to promote human rights. “It doesn’t sound like a robot anymore,” he says.

“Folks in this area have really wrestled with how you mark audio to show that its provenance is synthetic,” he says. “For example, you can oblige people to put a disclaimer at the start of a piece of audio that says it was made with AI. If you’re a bad actor or someone who is doing a deceptive robocall, you obviously don’t do that.”

Even if a piece of audio content is watermarked, the watermark may be evident to a machine but not necessarily to a regular person, says Claire Leibowicz, head of media integrity at the Partnership on AI. And watermarking still relies on the goodwill of the platforms used to generate the deepfake audio. “We haven’t figured out what it means to have these tools be open source for those who want to break the law,” she adds.
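The distinction Leibowicz draws, detectable by a machine but imperceptible to a listener, is easiest to see in a toy example. The sketch below, in Python with numpy, hides a faint pseudorandom pattern in a waveform and recovers it by correlating against a secret key; it illustrates the general idea of spread-spectrum watermarking only, and is not any production scheme. Every name and parameter in it is hypothetical.

```python
# Toy spread-spectrum audio watermark: a faint pseudorandom pattern is added
# to the signal and later detected by correlating against the same secret key.
# Illustrative only; real watermarking schemes are far more robust.
import numpy as np

def embed_watermark(audio: np.ndarray, key: int, strength: float = 0.01) -> np.ndarray:
    rng = np.random.default_rng(key)            # the shared key seeds the pattern
    pattern = rng.standard_normal(audio.shape[0])
    return audio + strength * np.max(np.abs(audio)) * pattern

def detect_watermark(audio: np.ndarray, key: int, threshold: float = 5.0) -> bool:
    rng = np.random.default_rng(key)
    pattern = rng.standard_normal(audio.shape[0])
    # Normalized correlation, scaled so unmarked audio scores roughly N(0, 1).
    score = audio @ pattern / (np.linalg.norm(audio) * np.linalg.norm(pattern))
    return score * np.sqrt(audio.shape[0]) > threshold

# Hypothetical usage: only the holder of the key can verify the mark.
voice = np.random.default_rng(0).standard_normal(80_000)  # stand-in for real audio
marked = embed_watermark(voice, key=42)
print(detect_watermark(marked, key=42))   # True: the pattern is present
print(detect_watermark(voice, key=42))    # False: unmarked audio fails
```

As Gregory and Leibowicz both point out, the weakness is structural: the mark exists only if whoever generates the audio chooses to embed it, which is exactly what a bad actor running an open source model will not do.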
