Google is actively using content from its vast YouTube library to train its artificial intelligence systems, including Gemini and the new video and audio generator Veo 3, CNBC reports, citing its sources.

One of the publication's sources said a selection of 20 billion videos is being used for training. Google confirmed this but clarified that it involves only a portion of the content, under agreements with creators and media companies.

A YouTube representative explained that the company has always used its own content to improve its services, and the emergence of generative AI has not changed this practice. "We recognize the importance of guarantees, so we have developed reliable protection mechanisms for creators," the company stated.

Experts, however, are concerned about the copyright implications. They believe that using creators' videos to train AI without their consent could trigger an intellectual property crisis. Although YouTube says it has disclosed this practice before, most creators were unaware that their content was being used for training.

Google does not disclose how many videos have been used to train its models. But even 1% of the library would amount to over 2.3 billion minutes of content, 40 times more than its competitors have.

By uploading videos, creators grant YouTube broad permission to use their content. However, there is no way to opt out of having their videos used for Google's models.

Representatives of digital rights organizations argue that years of creators' hard work are being used to develop AI without compensation or even notification. The company Vermillio, for instance, has built a service called Trace ID that measures how closely AI-generated videos resemble original content; in some cases the match exceeded 90%.

Some creators are not opposed to their content being used for training, viewing new tools as opportunities for experimentation. However, most believe that the situation lacks transparency and requires clearer rules.

YouTube has even entered into an agreement with Creative Artists Agency to develop a system for managing AI-generated content that mimics famous individuals. However, the mechanisms for removing or tracking such content remain imperfect.

Meanwhile, there are already calls in the US to provide authors with legal protections that would allow them to control the use of their creativity in the realm of generative AI.

Recently, Google also amended its internal content moderation policies on YouTube: videos that partially violate the rules may now remain online if deemed socially important.
