Liability for deepfakes: ‘The proposal is certainly interesting, but rather premature’

Experts assessed the proposal to introduce criminal liability for the use of deepfake technologies for criminal purposes

A bill introducing criminal liability for the use of deepfake technologies for criminal purposes is to be submitted to the State Duma this week. Offenders would face a fine of up to 1.5 million rubles or imprisonment for up to seven years, depending on the article of the Criminal Code. According to experts, the initiative is premature, and its implementation may entail a large number of criminal cases over “absolutely humorous things”. What else market participants say about the initiative is covered in this Realnoe Vremya report.

There are many examples of people losing everything because of fakes of their voice

The initiator of the bill is Yaroslav Nilov, head of the Committee on Labour, Social Policy and Veterans' Affairs. He proposed adding to a number of articles of the Criminal Code of the Russian Federation, such as “Slander”, “Theft”, “Fraud”, “Extortion” and “Causing property damage by deception or abuse of trust”, an additional qualifying feature: “committing a crime using an image or voice (including falsified or artificially created ones) and biometric data”. This covers the use of deepfake technology, “digital masks” and the like.

Depending on the article under which a crime committed with deepfakes falls, the perpetrator would face a fine of up to 1.5 million rubles, or a fine in the amount of their income for a period of up to two years, or imprisonment for up to seven years.

“There are many examples of people losing everything because of fakes of their voice. Therefore, it is proposed to bring the Criminal Code in line with modern realities," the author of the initiative explained, recalling the incident during Vladimir Putin's Direct Line, when a deepfake of the president, introducing itself as a “student of St. Petersburg State University”, asked him a question.

“A prankster may face criminal punishment, but this is not exactly what is needed for the development of modern technologies”

As Alexey Lukatsky, a business consultant on information security at Positive Technologies, told Realnoe Vremya, it is too early to introduce punishment for deepfakes.

“This proposal is certainly interesting but, in my opinion, rather premature. The thing is that deepfake technologies are not yet very developed, and in general it is very difficult to draw a line between a toy use of artificial intelligence for comic purposes and its use with some malicious intent," he said.

According to him, the initiative may have unpleasant consequences:

“There is a possibility that an imprecise definition in the Criminal Code may entail an unjustifiably large number of criminal cases over seemingly absolutely humorous stories, which are quite common on the Internet, when the face or voice of an ordinary or well-known person is swapped, and so on. If these people consider that their rights have been infringed, the joker may face criminal punishment, and this is probably not exactly what is needed for the development of modern technologies."


Since the beginning of 2024, cases of fraudulent schemes using deepfake technologies have been recorded in Russia

Nevertheless, as the Angara Security press service told the publication, regulating the use of deepfake creation technologies is necessary from an information security standpoint. In February, the company's analysts recorded a significant increase, by 45%, in messages offering to voice an “advertisement” or a “film”. They are distributed via Telegram, Habr or spam calls. The authors offer to make money on a “big project” and then ask participants to state their names or require that the recorded audio sound like a phone call. A fee of 300 to 5,000 rubles is offered for participation in such projects.

“As a result of collecting voice data, cybercriminals are able to improve the tactics of phishing attacks on individuals and businesses in which audio and video fakes are used," the company's analysts concluded.

In general, since the beginning of 2024, cases of fraudulent schemes have been recorded in Russia, in which social engineering and deepfake techniques are used together.

“The purpose of such an attack is to obtain funds from employees of a company who receive messages from a fake Telegram account of their manager. For example, in January, a similar technique was used against one of the companies. First, several Telegram user accounts were stolen, then audio files (voice messages) were extracted from them. This data was used to generate fake recordings in which scammers, on behalf of the account owner, extorted funds from users who were in various chats and workgroups with him," Angara Security said.


Deepfakes can also be used in video calls: for example, criminals have used audio and video fakes to convince an employee to transfer a large sum of money on behalf of “trusted” persons.

“We expect that the trend for such attacks will only gain momentum with the development of AI technologies. Therefore, it is extremely important to develop methods for recognising fake materials and resolve the issue at the legislative level in order to reduce the risks to cybersecurity of ordinary users of digital services and businesses," the experts concluded.

Static deepfakes can be detected, but dynamic ones are much more difficult

It is currently impossible to detect deepfakes used in crimes with one hundred percent certainty, Alexey Lukatsky explained.

“There are quite expensive technologies that can partially detect and block deepfakes; as a rule, static ones. It is much more difficult to identify dynamic deepfakes. And it becomes even harder as artificial intelligence technologies develop and begin to combine various modalities: voice, video, lip movement and facial expressions. In that case it will simply be impossible to detect them with one hundred percent certainty. As a result, there may also be distortions and excesses in terms of criminal prosecution," the expert said.
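Lukatsky's point about unreliable single signals can be illustrated with a toy example. The Python sketch below is purely illustrative: the spectral-flatness heuristic, the 16-bit mono WAV assumption and the file name sample.wav are assumptions for demonstration, not a production detection method or anything used by the experts quoted here. It measures how much the spectral character of speech varies over time; an unnaturally uniform profile is one weak hint of synthesis, and the weakness of any single such feature is exactly why detection is never “one hundred percent”.

```python
# Toy illustration of one weak audio-deepfake signal: per-frame spectral
# flatness. NOT a real detector; the heuristic and threshold-free output
# are assumptions for demonstration only.
import wave

import numpy as np


def spectral_flatness(frame: np.ndarray) -> float:
    """Geometric mean / arithmetic mean of the power spectrum, in (0, 1]."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12
    return float(np.exp(np.mean(np.log(spectrum))) / np.mean(spectrum))


def flatness_profile(path: str, frame_len: int = 2048) -> np.ndarray:
    """Per-frame flatness for a 16-bit mono PCM WAV file (assumption)."""
    with wave.open(path, "rb") as wav:
        raw = wav.readframes(wav.getnframes())
    samples = np.frombuffer(raw, dtype=np.int16).astype(np.float64)
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len, frame_len)]
    return np.array([spectral_flatness(f) for f in frames])


if __name__ == "__main__":
    profile = flatness_profile("sample.wav")  # hypothetical input file
    # Natural speech alternates between voiced (low flatness) and unvoiced
    # (high flatness) segments; an unusually uniform profile is one weak
    # hint of synthesis. One feature alone proves nothing, which is the point.
    print(f"flatness mean: {profile.mean():.4f}, std: {profile.std():.4f}")
```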

Angara Security clarified that tools have begun to be developed to identify traces of AI-generated content, including audio and video fakes:

“For example, the Russian project Zephyr, presented last summer, is capable of detecting artificially created content (audio and video) with high probability. New tools and developments will make it easier to identify the distribution of such materials in the near future."

There will be restrictions, but the market will survive

According to the company, regulating deepfakes in the production of advertising and marketing materials will require solving the issue of labelling AI-created content. Let us recall that such a bill is currently being developed in the State Duma. According to Anton Nemkin, a member of the State Duma Committee on Information Policy, deputies are studying other countries' legislative experience with labelling and consulting experts. In the parliamentarian's view, it is becoming difficult for a person to distinguish the real from the unreal.

“If we consider the long-term perspective, the development of AI services without the necessary control carries the danger of huge arrays not only of unverified texts, but even of completely invented facts, figures and data," he said.
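What a label might look like in practice is an open question; the Python sketch below is a purely hypothetical illustration, and the JSON field names, file names and sidecar-file approach are assumptions of this article, not anything from the bill under development. The idea it shows is simple: a machine-readable record that marks a file as AI-generated and ties the mark to that exact file via a content hash.

```python
# Hypothetical sketch of a machine-readable "AI-generated" content label,
# written as a JSON sidecar file. Field names are illustrative assumptions,
# not a proposed or existing standard.
import hashlib
import json
from datetime import datetime, timezone


def make_ai_label(media_path: str, generator: str) -> dict:
    """Build a label record bound to the file's exact contents."""
    with open(media_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "content_sha256": digest,   # ties the label to one exact file
        "ai_generated": True,
        "generator": generator,     # e.g. the model or service used
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }


if __name__ == "__main__":
    # "clip.mp4" and the generator name are placeholder examples.
    label = make_ai_label("clip.mp4", generator="example-voice-model")
    with open("clip.mp4.ai-label.json", "w") as f:
        json.dump(label, f, indent=2)
```

A hash-bound sidecar is only one possible design; embedding the label in the media container's own metadata, or in a watermark, would be alternatives with different tamper-resistance trade-offs.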


At the same time, Angara Security sees prospects for prosecuting criminals who use deepfakes for scams in messengers:

“For example, the Central Bank notes an increase in the theft of citizens' funds from banks through the Faster Payments System (SBP), which fraudsters have begun to exploit, including with deepfake technologies. In the first quarter of 2024, 1.13 billion rubles were stolen through the SBP, while a year earlier it was just over 550 million rubles. Therefore, we expect that regulation of the industry will have a positive impact on the legal AI content market and the fight against cybercrime."

As for the information technology market, it will survive “under absolutely any conditions," Alexey Lukatsky believes.

“Certainly, there will be some restrictions, including appropriate disclaimers in various licence agreements. And, of course, the cost of this kind of technology, both for generating video images and for detecting such deepfakes, will increase," concluded the business consultant on information security at Positive Technologies.

Daria Pinegina

