Can fraudsters ‘hack’ Russians’ accounts using deepfakes?

Experts are certain that in the coming years, audio and video deepfakes will be impossible to distinguish from genuine recordings

Photo: Dinar Fatykhov

This year, the number of deepfakes — fabricated audio and video materials — in Russia has tripled. In the future, this type of fraud is expected to grow even further. Already today, AI-generated videos are not always easy to tell apart from real ones. Automatic detection of deepfakes is unlikely to be possible, experts warn, as generation technologies will constantly outpace recognition tools. How to protect yourself from deception under these circumstances — in the report by Realnoe Vremya.

The number of deepfakes in Russia has tripled — and this is not the limit

This year, the number of deepfakes in Russia has tripled. “These are not just memes with voiceovers, not just pictures or collages — they are genuine deepfakes. The figure is three times higher than for the entire year of 2024, and 2025 is not even over yet — this is only data for its first half,” said Maria Zakharova, the official spokesperson of the Ministry of Foreign Affairs, in September.

According to the ministry, since the beginning of the year, 231 unique deepfakes and 29,000 of their copies have been detected. The overwhelming majority — 88% — concern the political sphere and national security.

Last week, residents of Kazan were warned about a deepfake allegedly featuring the city’s mayor, Ilsur Metshin.

“The mayor is a public figure, so there are many photos and videos of him online that can be used to create such clips. The fake gives itself away through speech — the audio has a noticeable accent,” the city administration’s press service warned.

As the Ministry of Internal Affairs reported, scammers have been “impersonating” Tobey Maguire, Tom Holland, Robert Downey Jr., Fyodor Bondarchuk, and other film stars to deceive residents of Tatarstan. Using social engineering and deepfakes, fraudsters gain complete control over their chosen victims.

“They use face-swapping technology. They can appear as Tom Cruise or Keanu Reeves, and pensioners who have never seen these actors may completely trust such people,” said Radis Yusupov, an operative officer of the Criminal Investigation Department of the Ministry of Internal Affairs.

“Such cases will only become more frequent”

Every second Russian, according to a study by MTS Link, does not know what a deepfake is. However, experts are certain that we will soon face an epidemic of fake videos.

“A great deal of content on social media today is generated by artificial intelligence (AI). But when creating a deepfake, AI does not merely generate content — it substitutes, for example, someone else’s face. Let’s say I decide to ‘show off’ and post a video in which Jennifer Lopez supposedly congratulates me on my birthday. Of course, she never actually did this and doesn’t even know me. Such cases will only become more common. The tools will continue to develop, and people will try to use them for their own purposes — often criminal ones. That’s what we should expect,” warned Azamat Sirazhitdinov, CEO of GO Digital, in an interview with Realnoe Vremya.

Experts advise recognising deepfakes through visual cues — but soon, this may no longer help.

Saradasish Pradhan on Unsplash

“For now, deepfakes can still be recognised by visual signs. You should pay attention to unnatural facial expressions and possible artefacts such as blurred contours or image jitter. Lip movements may not fully synchronise with the sound. You should also remember the so-called ‘watermarks’ that may be present in some generated videos if they haven’t been removed. The outlook for these technologies is clear: in just a few years, audio and video deepfakes will become so realistic that it will be impossible to distinguish them visually from genuine recordings,” said Mikhail Seryogin, Head of the Information Security Centre at Innopolis University.

Moreover, under low-resolution conditions — for example, when imitating footage from intercoms or surveillance cameras — distinguishing a deepfake from a real video is already nearly impossible.

A similar opinion was voiced by Azamat Sirazhitdinov. According to him, AI-generated videos have significantly evolved just over the past year.

“On some websites, especially those of banks, you can log in using biometrics. What’s to stop someone from showing a large screen with a highly realistic deepfake? For instance, displaying on a tablet a face resembling mine and authenticating with it,” the speaker illustrated.

“Generation technologies will constantly outpace recognition tools”

Reliable and widespread automatic detection of deepfakes is unlikely to be achievable, believes Mikhail Seryogin.

“Even analysing the authenticity of a signature or detecting photo editing still requires detailed expert examination with specialised tools — and even then, mistakes are possible. Generation technologies will constantly outpace recognition tools. The situation in Russia mirrors the global one: companies are actively working on deepfake detection technologies but face the same fundamental challenges as their international counterparts,” explained the head of the Information Security Centre.

How, then, can citizens and businesses protect themselves from increasingly sophisticated deepfakes? “Only through legislation,” says Azamat Sirazhitdinov. The term “deepfake” has not yet been legally defined. But even if it appears in law, existing articles — such as those on fraud or defamation — will likely be applied.

Zac Wolff on Unsplash

Major AI platforms could play a key role in limiting the spread of deepfakes.

“For example, they could implement mechanisms to verify the origin of content and determine whether a video was generated using their services. In most cases where the video has not undergone heavy post-processing, such methods could be an effective tool and help users independently verify the authenticity of content,” explained Seryogin.

The most important factor, he said, is the development of digital literacy and healthy scepticism. The main rule: avoid impulsive actions, even if a call or message seems entirely credible.

“In the past, people believed everything printed in newspapers; now they believe everything they see online. I think that as deepfakes spread, we’ll go through a kind of civilisational learning phase — and by 2027–2028, we’ll look back on this period as mere ‘child’s play’,” concluded Sirazhitdinov.

Galia Garifullina
