US President Joe Biden [File] | Photo Credit: Reuters
A company that sent deceptive robocalls to New Hampshire voters using artificial intelligence to mimic President Joe Biden’s voice agreed to pay a $1 million fine on Wednesday, federal regulators said.
Lingo Telecom, the voice service provider that transmitted the robocalls, agreed to a settlement to resolve an enforcement action taken by the Federal Communications Commission, which initially sought a $2 million fine.
Many see the case as a disturbing early example of how AI can be used to influence voters and undermine democracy overall.
Meanwhile, Steve Kramer, the political consultant who planned these calls, is still facing a $6 million FCC fine and criminal charges at the state level.
These phone messages were sent to thousands of New Hampshire voters on January 21. They contained a voice identical to Biden’s, falsely suggesting that voting in the state’s presidential primary would disqualify them from voting in the November general election.
Kramer, who paid a magician and self-described “digital nomad” to make the recordings, told The Associated Press earlier this year that he wasn’t trying to influence the outcome of the primary election but rather wanted to highlight the potential dangers of AI and spur lawmakers to take action.
If convicted, Kramer could face up to seven years in prison on the voter suppression charge, and up to one year in prison on the charge of impersonating a candidate.
In addition to agreeing to the civil penalty, Lingo Telecom also agreed to stricter rules and requirements for caller ID authentication and to more rigorous scrutiny of the accuracy of information provided by its customers and upstream providers, the FCC said.
“Every one of us deserves to know that the voices on the line are exactly as they claim,” FCC Chairwoman Jessica Rosenworcel said in a statement. “If AI is being used, that should be clearly communicated to every consumer, citizen, and voter who encounters it. The FCC will take action when trust in our communications networks is on the line.”
Lingo Telecom did not immediately respond to a request for comment. The company previously said it strongly disagreed with the FCC’s action, calling it an attempt to impose new rules retroactively.
The nonprofit consumer advocacy group Public Citizen applauded the FCC’s action. Co-chair Robert Weissman said Rosenworcel was “absolutely right” in saying that consumers have a right to know when they’re getting authentic content and when they’re getting AI-generated deepfakes. Weissman said the case shows how such deepfakes pose “an existential threat to our democracy.”
FCC Enforcement Bureau Chief Loyan Egal said the combination of caller ID spoofing and generative AI voice-cloning technology poses a significant threat, “whether at the hands of domestic operatives seeking political advantage or sophisticated foreign adversaries seeking malign influence or election interference activities.”