Scams that exploit new technologies, from WhatsApp to TikTok, have become commonplace. And everything indicates that many cybercriminals are also using artificial intelligence to commit all kinds of fraud. Some countries have already taken the first steps to fight back against these practices.
In particular, the United States has declared war on so-called deepfakes and banned calls that use AI voice cloning.
War on deepfakes
We have warned about this on several occasions: voice cloning has many uses, but some of them can be very harmful. This technology, which can copy or imitate anyone's voice from just a few seconds of recorded audio, is even used commercially. You have surely heard it in many advertisements.
What does this really mean? From now on, everyone should be more skeptical about what they hear, especially if they do not want to fall for scams that, unfortunately, are increasingly common and have artificial intelligence as their main protagonist.
This is not just our opinion, but the position of the United States Federal Communications Commission (FCC), which is deeply concerned about the spread of this type of crime across many states. According to New Hampshire authorities, around 20,000 residents there have already been targeted by one such scam.
The FCC's reaction was not long in coming. The agency has declared all robocalls that use AI voice cloning illegal; in other words, calls designed to deceive people and take their money.
The risk of AI voice cloning
You have probably seen television commercials that use the voices (and, in many cases, the likenesses) of long-dead celebrities as a marketing hook. Cybercriminals do the same, often impersonating the voice of a loved one.
Remember that, thanks to artificial intelligence, almost anyone can imitate a voice without being a computer genius. In many cases, scammers pose as a child or family member in trouble, almost always asking for a sum of money to deal with a fictitious emergency. It is a practice that has become all too common.
Will other countries follow suit and take similar action? What is clear is that artificial intelligence creates new possibilities, both professional and personal, but also threats that the law must confront. The authorities have long warned about this, and the EU itself has already taken steps in this regard. From what we have seen, it will certainly not be the only one.