Misinformation And Disinformation Coupled With New Generative AI Tools Creating Unprecedented Threat We’re Ill-Prepared For

Fake Joe Biden robocall tells New Hampshire Democrats not to vote Tuesday
File photo: G0d4ather, ShutterStock.com, licensed.

NEW YORK, NY – The New Hampshire primary is being shaken up by a deepfake robocall impersonating President Biden, and its origins remain a mystery. Over the weekend, voters in New Hampshire and New England received a call from a voice that closely resembled Biden's, urging them not to vote in the Tuesday primary and emphasizing the importance of saving their vote for the November election.

The call, which was first reported by NBC News, appeared to come from Kathy Sullivan, a former New Hampshire Democratic Party chair who leads a super PAC advocating for writing in Biden's name on the ballot. Sullivan, the Biden campaign, and former President Trump have all denied any involvement in the robocalls.

The incident has raised concerns about the use of deepfake technology in political campaigns and the need for regulations to address AI accountability and transparency. Deepfakes have the potential to sow confusion and perpetrate fraud, as evidenced by their use in spreading disinformation during the conflict between Israel and Hamas. While some progress has been made in regulating AI-generated content, concrete regulations are still lacking.

Tech companies such as Microsoft, OpenAI, and Google have made voluntary commitments to watermark manipulated videos and photos to differentiate them from organic content. Additionally, the Biden administration issued an executive order providing guidance on AI technology development, and discussions on AI regulation have taken place in AI Insight Forums led by Senate Majority Leader Chuck Schumer. However, only a few bills have emerged from these discussions.

Senator Amy Klobuchar and Representative Yvette Clarke have advocated for legislation that would ban the use of AI to create deceptive content and require political ads to disclose the use of AI. Advocacy groups like Public Citizen have petitioned the Federal Election Commission for new rules on political ad disclosures, but no formal decision has been made yet.

One of the challenges in regulating deepfakes is audio, since faked audio lacks the visual cues that help reveal manipulation. Hany Farid, a professor at UC Berkeley, emphasizes that audio deepfakes can be particularly deceptive, especially when targeting an older demographic that may be more susceptible to scams.

Regulating audio deepfakes poses unique challenges due to the lack of clear markers to indicate synthetic provenance. Marking audio with disclaimers is one possible approach, but bad actors are unlikely to comply with such measures. Additionally, watermarking techniques may not be easily detectable by the average person.

The impersonation of Joe Biden demonstrates how easily deepfake audio can be created with readily available voice-cloning tools. The lack of any legal obligation for platforms and content distributors to verify or block such content further exacerbates the problem.

The New Hampshire Attorney General’s Office is currently investigating the robocalls, recognizing them as an unlawful attempt to disrupt the primary process. The Biden campaign is actively considering further action against the dissemination of disinformation and voter suppression.

