The US Federal Communications Commission (FCC) on Thursday banned robocalls with voices generated by artificial intelligence, a decision that sends a clear message that the exploitation of technology to deceive consumers will not be tolerated.
The unanimous decision targets robocalls made with AI voice-cloning tools under the Telephone Consumer Protection Act, a 1991 law that prohibits unsolicited calls that use artificial or prerecorded voice messages.
The announcement comes as authorities in New Hampshire advance their investigation into AI-generated robocalls, made last month, that imitated President Joe Biden’s voice to discourage people from voting in the state’s primary.
The rule, which takes effect immediately, gives the FCC the power to fine companies that use AI voices in their calls or to block service providers from transmitting them. It also opens the door for recipients of such calls to file lawsuits and gives state attorneys general a new mechanism to take strict action against violators, according to the FCC.
Jessica Rosenworcel, the commission’s chairwoman, said malicious actors have used AI-generated voices in robocalls to misinform voters, impersonate celebrities and extort family members.
“It seems like something from the distant future, but this threat is here,” Rosenworcel told The Associated Press on Thursday, as the Commission weighed the measures. “Any of us can get these scam calls, and that’s why we think we have to act now.”
Consumer protection laws generally state that telemarketing companies cannot use automated dialers or prerecorded voice messages to call cell phones, and cannot call landlines without the written consent of the person being called.
The new ruling classifies AI-generated voices in robocalls as “artificial,” meaning they fall under the same standards, the FCC explained.
Those who violate the law are subject to hefty fines, with maximum amounts exceeding $23,000 per call, the FCC said. The agency has used consumer protection laws in the past to crack down on robocalls that interfere with elections, including fining two conservative hoaxers $5 million for robocalls that falsely warned residents of majority-Black areas that voting by mail could lead to arrest, debt collection or forced vaccination.
The law also gives recipients of these calls the right to take legal action, with the possibility of collecting up to $1,500 in damages for each unwanted call.
Josh Lawson, director of AI and democracy at the Aspen Institute, said that even with the FCC’s ruling, voters should be prepared for the possibility of receiving unwanted calls, messages and social media posts.
“The really bad actors usually ignore the warnings and know that what they’re doing is wrong,” he said. “We have to understand that these people will keep stirring things up and pushing the limits.”
Kathleen Carley, a Carnegie Mellon professor who specializes in computational disinformation, said that detecting the abuse of AI voice technology comes down to being able to clearly identify audio as AI-generated.
That is possible now, she said, “because the technology to generate these calls has been around for a while. It is well known and makes common mistakes. But that technology will improve.”
Sophisticated generative AI tools, from voice cloning software to image generators, have been used in elections in the United States and around the world.
Last year, as the US presidential race got underway, many campaign ads used AI-generated audio or images, and some candidates experimented with using chatbots to communicate with voters.
Bipartisan efforts are underway in Congress to regulate the use of AI in political campaigns. But nine months before the general election, no federal law has been approved.
Rep. Yvette Clarke, who introduced a bill to regulate the use of AI in politics, praised the FCC’s decision but said it was still up to Congress to act.
“I think Democrats and Republicans can agree that AI-generated content to mislead people is a bad thing, and we need to work together to help give people the tools they need to help them distinguish what is real from what is fake.” said Clarke, a Democrat from New York.
The AI-generated robocalls that tried to influence New Hampshire’s January 23 primary used a voice that sounded like Biden’s, employed his oft-used phrase “What a bunch of malarkey,” and falsely suggested that voting in the primary would prevent voters from casting a ballot in the November general election.
“New Hampshire got a taste of how AI can be inappropriately used in the election process,” said New Hampshire Secretary of State David Scanlan. “We should really try to understand the use and application, so as not to mislead the voters in a way that could harm our elections.”
State Attorney General John Formella said Tuesday that investigators had identified Life Corp, a Texas-based company, and its owner, Walter Monk, as the source of the calls, which were made to thousands of residents. Investigators also identified another Texas-based company, Lingo Telecom, as having transmitted the calls.
According to the FCC, Lingo Telecom and Life Corp. have both previously been investigated for illegal robocalls.
Lingo Telecom issued a statement Tuesday saying it “acted immediately” to assist the investigation into the robocalls impersonating Biden. The company said it “had nothing to do with the development of the content of the calls.”
A person who answered the phone at Life Corp. declined to comment Thursday.