A platform built on creators is testing the limits of trust
The central issue in YouTube’s latest controversy is not simply whether artificial intelligence can improve video or accelerate product development. It is whether a platform built on creator labour can quietly repurpose that labour for new commercial and technological ends without obtaining clear, informed consent. The backlash facing Google and YouTube stems from that deeper tension. What began as criticism over the use of vast quantities of uploaded material to train AI models has widened into a larger question: who really controls creative work once it enters the platform’s ecosystem?
That question became harder to dismiss after reports emerged in June that Google had allegedly used more than 20 billion YouTube videos to help train its Veo 3 AI model, relying on existing terms and agreements as its justification. A month later, research published by Proof News added sharper detail to those concerns, finding that subtitles from 173,536 YouTube videos, drawn from more than 48,000 channels, had been used in a training dataset accessed by major technology groups including Anthropic, Nvidia, Apple and Salesforce. The scale alone transformed what might have looked like a technical policy dispute into a structural conflict between platforms and creators.
Training data has become a new battleground
The significance of the Proof News findings lies not only in the number of videos involved, but in what they reveal about the asymmetry between creators and the systems built around their work. The dataset reportedly included material from some of YouTube’s most recognisable personalities, among them MrBeast, Marques Brownlee, Jacksepticeye and PewDiePie. It also drew from political commentary and other forms of online media, including videos from David Pakman, who said no one had approached him to request permission. In that context, the concern is not abstract. Creators are confronting the possibility that years of accumulated output may now function as raw material for tools they neither approved nor control.
That is why the language surrounding the issue has become so severe. Dave Wiskus, chief executive of Nebula, described the practice as theft and framed it as a direct sign of disrespect toward the people whose work sustains the online video economy. His criticism goes beyond the immediate question of consent. It points to a larger fear shared by many creators: that their own output is being used to train systems that could ultimately reduce the value of human creative labour and make it easier for companies to automate parts of the production chain. In that reading, the dispute is not only about compensation or credit. It is about whether the next generation of media tools is being built by extracting value from the very people those tools may later displace.
The editing controversy makes the problem more visible
If AI training exposed one layer of unease, the separate revelation that YouTube has been altering some videos without clearly notifying creators has made the issue more tangible. According to a BBC report from August 24, creators began noticing subtle visual changes in their content, particularly on YouTube Shorts. Music commentator Rick Beato said he first spotted something unusual in his own appearance, describing a polished, artificial look that seemed to alter how he was presented on screen. Rhett Shull reached a similar conclusion after examining his own uploads and objected not only to the visual quality of the edits, but to what they implied. For creators whose relationship with audiences depends on recognisability and authenticity, even minor unseen alterations can feel like a direct intrusion into their public identity.
That reaction matters because the objection is not merely aesthetic. A creator can tolerate compression, formatting or technical optimisation when those processes are understood as part of distribution. The concern here is different. Shull’s complaint was that the result looked AI-generated and therefore misrepresented both his work and his voice online. Once a platform begins modifying the appearance of a video in ways that resemble synthetic intervention, it moves into far more sensitive territory. Viewers may not know whether what they are seeing reflects the creator’s choices, YouTube’s processing, or some emerging hybrid of both. That ambiguity is corrosive precisely because trust on creator platforms is built on the idea that the person on screen remains accountable for what the audience sees.
YouTube’s response leaves the core concern unresolved
YouTube has since confirmed that it is running an experiment on a limited number of Shorts using what it described as traditional machine learning to unblur, denoise and improve clarity during processing, comparing the approach to enhancements commonly applied by modern smartphones. Framed that way, the company’s explanation sounds procedural rather than transformative. But the comparison only partially answers the criticism. A smartphone applies enhancement at the moment of capture, under the user’s control. A platform that changes videos after upload, without an explicit opt-in, introduces a different power relationship altogether.
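YouTube has not published the details of that pipeline, so any concrete example is necessarily speculative. As a rough illustration of the kind of non-generative, per-frame enhancement the company’s description evokes, the Python sketch below applies a classical denoise-and-sharpen pass to a single frame using OpenCV; the file names and parameter values are hypothetical, not YouTube’s.

    # Illustrative only: a classical denoise-and-sharpen pass on one video
    # frame, using OpenCV. YouTube's actual Shorts pipeline is not public;
    # this merely shows the kind of non-generative enhancement its
    # description evokes. File names and parameters are hypothetical.
    import cv2

    def enhance_frame(frame):
        # Reduce noise with non-local means filtering (h and hColor
        # control filter strength for luminance and colour channels).
        denoised = cv2.fastNlMeansDenoisingColored(
            frame, None, h=3, hColor=3,
            templateWindowSize=7, searchWindowSize=21)
        # "Unblur" via an unsharp mask: blend the frame against a
        # Gaussian-blurred copy of itself to accentuate edges.
        blurred = cv2.GaussianBlur(denoised, (0, 0), sigmaX=2.0)
        return cv2.addWeighted(denoised, 1.5, blurred, -0.5, 0)

    if __name__ == "__main__":
        frame = cv2.imread("frame.png")  # one extracted frame (hypothetical path)
        cv2.imwrite("frame_enhanced.png", enhance_frame(frame))

The relevant contrast is that filters like these operate deterministically on pixels that already exist; they do not synthesise new content. That is precisely the line YouTube’s defence depends on, and the line creators say has become difficult to verify from the outside.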
That is why the most consequential part of this story is not the existence of machine learning inside YouTube’s workflow, but the absence of a clearly communicated boundary around its use. From AI training to post-processing experiments, the pattern described by creators is one in which decisions affecting their work appear to be made first and explained later, if at all. For a platform whose value depends on creator participation, that is a risky posture. The more YouTube treats uploaded content as infrastructure for its own AI ambitions, the more it invites a basic but destabilising conclusion: that the platform may see creators not as partners, but as inputs.
Author:
Jan Bielik
CEO & Founder of Webiano Digital & Marketing Agency

Source: YouTube is editing billions of users’ videos and training AI without their consent



