‘Made using AI’: do we need a label for artificial intelligence?
Russia's State Duma is thinking about how to protect the country from malicious content created by neural networks
Deputies of the State Duma are pondering whether artificial intelligence threatens to spawn a huge array of unverified information, and how content created by AI can be filtered. There is no remedy against the potential threat yet. Russia may follow the path of Western countries and oblige technology giants to label content created by neural networks themselves. In general, this should be done, experts agree, but carefully, so as not to slow down the development of advanced technologies. What the minister of digital development and his virtual assistant AiratAI, built on the basis of ChatGPT, think about the legislative initiative — in this report by Realnoe Vremya.
Artificial intelligence will be slightly limited
The State Duma has begun drafting a bill on labelling content created by neural networks. The idea was announced a year ago at the Russian University of Technology. According to Anton Nemkin, a member of the State Duma Committee on Information Policy, deputies are studying other countries' legislative experience with labelling and consulting with experts. The parliamentarian noted that it is already becoming difficult for a person to distinguish the real from the unreal.
“If we consider the long-term perspective, the development of AI services without the necessary control carries the danger of huge arrays not only of unverified texts but even of completely invented facts, figures and data,” RIA Novosti quotes him as saying.
The deputy said that, to begin with, Russian AI services should be required to automatically label any generated texts. This is because it is not yet clear how to determine the degree to which machines and artificial intelligence participated in a given piece of material.
Following the path of technology leaders
The USA has already followed this path. In July 2023, the White House announced an agreement with major American technology companies to implement additional security measures in the field of AI. Among the key measures is the development of a watermark system to identify AI-generated content, which is part of the fight against misinformation and other risks associated with this rapidly evolving technology.
Seven of the largest AI companies — Amazon, Anthropic, Google, Inflection, Meta (recognised as extremist and banned in the Russian Federation), Microsoft and OpenAI — voluntarily took on additional obligations. As part of the agreement, the American tech giants must develop and implement a watermark system for visual and audio content.
Meta has already announced that AI content will be labelled in its Facebook*, Threads and Instagram* apps. Both users and the company itself will be able to apply the “Made using AI” mark if they detect signs of neural networks at work. Previously, Meta banned videos featuring generated people doing things they had never done in real life.
Since March 18, YouTube has required content authors to tag videos created using artificial intelligence. The requirement applies only to realistic content that could mislead viewers. The platform does not require authors to tag obviously unrealistic AI-generated content, such as animations, or the use of colour adjustments, lighting filters, special effects, beauty filters and other visual enhancements.
“It is important to work out the details carefully”
“On the other hand, it is also important to take into account the potential challenges and risks associated with labelling,” adds Airat Khairullin, the Minister of Digital Development of Tatarstan. “For example, what will happen to content created jointly by a person and a neural network? How will it be determined what constitutes a sufficient human contribution for the content not to be considered created by a neural network? Will the labelling apply to content created for personal purposes, or only to content for commercial purposes?”
The head of Tatarstan's Ministry of Digital Development believes that the draft law on labelling content created by neural networks can be useful, but that it is important to work out the details of its implementation carefully so as not to erect unnecessary barriers to innovation and scientific progress.
Airat Khairullin himself considers artificial intelligence technology an assistant rather than a threat to people. Last February, the minister introduced AiratAI, a free chatbot based on artificial intelligence. According to the minister, it was created to help students and schoolchildren get tips or find theoretical information; journalists and content creators draw up a content plan, come up with ideas for articles or write press releases; programmers search for errors in their code; and anyone who wants advice on finding an interesting book, film or exhibition.
“I don't have a personal opinion, but I can say”
We asked AiratAI what it thinks about labelling content created by neural networks.
Today, the labelling of content created by neural networks is mainly carried out in countries that have adopted laws and regulations on artificial intelligence and automated content, AiratAI said. For example, such measures may be taken in the United States, the European Union, Japan, South Korea and other countries that have a developed technological infrastructure and are working to regulate the use of artificial intelligence in various fields.
AiratAI assesses the penetration of neural networks in Russia as fast and successful. “Every year, more and more companies in Russia are starting to use neural networks in various fields, such as medicine, finance, technology and marketing. This allows them to automate processes and improve the efficiency and quality of their work,” said the bot.
Artificial intelligence can potentially pose a threat to people if it is used improperly or unethically, the minister's virtual assistant agreed. “For example, autonomous military AI systems can lead to an escalation of military conflicts and loss of life. There is also a danger of AI being used for mass control and surveillance, which can threaten the privacy and freedom of citizens,” it noted by way of example.
“However, when used correctly, artificial intelligence can bring huge benefits to people by helping them solve complex problems in medicine, science, business and other fields. It is important to develop and use AI in compliance with ethical norms and laws in order to minimise potential threats and maximise benefits for society,” it concluded.
Bulat Ganiev, a technology entrepreneur, investor and co-founder of Technocracy, believes that such labelling is the right decision.
“Tempting, but hard to implement”
“Labelling content created by a neural network looks tempting but is hard to implement in practice,” says Vasil Zakiev, co-founder of Sputnik and former project manager of the business incubator at the Kazan and Chelyabinsk IT parks. “Even now, professionally created AI content is indistinguishable from real content, and we are at a very early stage of the technology's development.”
“Instead of expensive, across-the-board restrictions, it would be better for the government to focus on raising public awareness of the opportunities and risks of AI and on promoting responsibility among AI technology developers. This will help achieve a balanced approach to innovation and security, avoiding a negative impact on technological development,” the entrepreneur suggests.
His comment, in particular, was created using AI, Zakiev added. “Did it make it worse? I don't think so,” he noted. “An illustration on Instagram* does not get worse from the use of filters.”
*The activities of the resources are recognised as extremist and banned in Russia.