AI content labelling — visual noise, copyright protection, or safety measure?

Experts claim that it is impossible to fully label such content. Is there a solution?

As early as last year, the State Duma discussed developing a bill that would require the labelling of content created with the help of neural networks. No results followed, but half of Russians now believe such measures are necessary. All experts interviewed by Realnoe Vremya are convinced that it is impossible to label all content generated by artificial intelligence (AI). Nevertheless, a number of countries are discussing mandatory labelling or have already introduced it. Whether new requirements would hinder technological progress, and whether the market can regulate this issue without state intervention, Realnoe Vremya examines in this article.

Half of Russians support AI content labelling

Every second Russian — 56% — believes that the country’s authorities should adopt a law mandating the labelling of content generated by neural networks. This applies to video, audio, images, and text materials. These are the findings of a study conducted by SuperJob.

“There is a legal conflict in copyright law. Essentially, the author today is the person who uses artificial intelligence in their work,” respondents believe.

Only 15% of respondents opposed the initiative, while another 29% found it difficult to answer. “Any labelling will increase the cost of the product,” they fear.

Artem Dergunov / realnoevremya.ru

Young people are the most cautious about the proposal: 17% of users under 34 opposed it, compared with only 10% of those over 45. Moreover, older generations were more likely to find it difficult to give an answer — every third respondent in this age group chose this option. The number of supporters of the initiative increases with income level. For instance, 60% of Russians earning more than 100,000 rubles per month expressed support.

Labelling is needed, but not yet

It should be recalled that the issue of introducing labelling has already been discussed within the government, but no results followed. In May last year, Anton Nemkin, a member of the State Duma Committee on Information Policy, stated that work was under way to develop the concept of the corresponding law.

“In particular, the legislative experience of other countries is being studied, consultations with experts are under way, and work continues on the wording of basic definitions. <...> In my view, labelling should be implemented through graphic or watermarks. The key is that such labelling should be unobtrusive, yet clear and noticeable to any user, so that they understand what kind of content they are viewing and can analyse it more critically,” RIA Novosti quoted him as saying.

As early as 2023, the Russian Technological University asked the Ministry of Digital Development to introduce mandatory labelling for content created with the help of neural networks. A few months ago, Andrey Shurikov, Director of the Institute for Socio-Economic Analysis and Development Programmes, made a similar request to Russian Prime Minister Mikhail Mishustin. He warned that by 2026, around 90% of internet content would be produced using AI.

Maksim Platonov / realnoevremya.ru

In response, the Ministry of Digital Development stated that the top priority at this stage is to establish in law a “legal framework for the legitimate creation, distribution, and use” of AI-generated content. As Izvestia reported, there are still no legal definitions for the terms “deepfake,” “artificial intelligence,” or “synthetic content.”

“At present, the Ministry of Digital Development of Russia, together with Roskomnadzor and other relevant federal executive authorities, is working on proposals to establish requirements for the labelling of content that is legally produced and distributed through information technologies, including content created using artificial intelligence technologies and digital substitution of human photo and video images,” the letter stated.

“The idea is good, of course, but there’s no need to label absolutely everything”

“The idea is good, of course, but there’s no need to label absolutely everything,” said GO Digital CEO Azamat Sirazhdinov in a conversation with Realnoe Vremya. “In the near future, AI will be used to create content almost everywhere. Accordingly, if we start labelling absolutely everything, every website will simply be covered with these labels. It will [create] unnecessary noise.”

In his view, content generation by neural networks will become as commonplace as creating content with a pen or a laptop. “We don’t write every time that we created this content using such-and-such a pen,” the expert noted.

“We’ll need to find some balance here. I hope our legislators will manage this task. Moreover, this issue will most likely have to be addressed at the international level,” Azamat Sirazhdinov concluded.

The world is already seeing the first attempts to legislate the publication of content created using neural networks. In China, the first mandatory national standard for its labelling came into effect on 1 September this year. The EU approved a comprehensive AI law back in 2024, with full implementation of all provisions scheduled for 2026. South Korea has adopted a similar document. Kazakhstan is also working on an AI law, which, among other measures, plans to introduce mandatory labelling of products developed using such technologies.

The hardest part is drawing the line between creative use of AI and automated content production

Ivan Nikanov, Deputy Director of the AI Institute at Innopolis University, agrees that it is impossible to label all generative content due to its sheer volume.

“Labelling AI-generated content is necessary, but it should be applied selectively. The key is to ensure transparency in areas where the authenticity of information is critically important: in news, politics, and the use of real people’s images. At the same time, in creativity and education, AI has become just another tool, like Photoshop or a video editor, and excessive control here would only be harmful,” the expert emphasized.

The solution lies in creating a sensible system. Large platforms already exist that can automatically label such materials. “Our task is to adapt these technologies, making the labelling understandable for users and not hindering the development of the digital industry,” Ivan Nikanov said.
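To illustrate what such automatic labelling can look like under the hood, here is a minimal sketch of a signed provenance record attached to a generated file, loosely in the spirit of open provenance standards such as C2PA. Everything in it — the key, the field names, the generator tag — is hypothetical, not the implementation any particular platform uses:

```python
import hashlib
import hmac
import json

# Hypothetical signing key; a real platform would use an asymmetric key pair.
SECRET = b"platform-signing-key"

def make_label(content: bytes, generator: str) -> dict:
    """Build a signed provenance record for a piece of generated content."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "ai_generated": True,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return record

def verify_label(content: bytes, record: dict) -> bool:
    """Check that the record matches the content and carries a valid signature."""
    body = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and body["sha256"] == hashlib.sha256(content).hexdigest())

label = make_label(b"generated image bytes", generator="some-model")
print(verify_label(b"generated image bytes", label))  # True
print(verify_label(b"tampered bytes", label))         # False
```

The point of the sketch is that labelling need not be a visible watermark: a cryptographic record travelling with the file lets platforms verify origin automatically while showing the user only a small, unobtrusive badge.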

At the same time, the hardest part is drawing the line between creative use of AI as a tool and fully automated content production, the deputy director of the AI Institute warned. For some, a neural network is a digital brush that expands capabilities; for others, it is a content-generating assembly line. Applying a single set of rules to both is not only unfair but counterproductive, so regulation must be flexible and sensible.

“I don’t see the need for legal regulations. This will be a natural motivation for the social networks themselves”

Bulat Ganiyev, co-founder of the Technocracy Group, expressed scepticism about the idea of legally mandating the labelling of AI content. In his view, the market will regulate this issue on its own.

“Large platforms like YouTube and Instagram* are already handling labelling. So, overall, labelling generated content is not that difficult. I don’t see why we need legal regulations. It seems to me that this will rather be a natural motivation for the social networks themselves. If AI content is not labelled, content factories could emerge that, so to speak, exploit the lower biological behavioural patterns of people online, manipulating algorithm growth by publishing trash videos that children will watch or adults will open. YouTube is currently combating this. Substantively, such videos convey nothing; they are a dopamine trap,” said the co-founder of Technocracy.

Such a development is not in the platforms’ interest: if they become flooded with “junk” content, they will lose users.

According to Bulat Ganiyev, labelling may make sense when it comes to AI assistants interacting with people in banks or multifunctional service centres. Incidentally, as SuperJob analysts found, 60% of Russians support this idea, while only 12% of respondents viewed it negatively.

Galia Garifullina


Reference

* Owned by Meta, which has been designated an extremist organisation, making its activities banned in the territory of the Russian Federation.