The boom of nonsense AI videos is unlikely to end in one dramatic crash. My reading of the market is that it starts losing visible force in late 2026, looks materially weaker on major platforms in 2027, and by 2028 no longer feels like a cultural surge. That is not an official forecast from any company or regulator. It is a synthesis of what the market already shows: video models are getting better fast, but the surrounding system is turning against low-value synthetic clutter. Google’s Veo line now pushes native audio, stronger realism and tighter prompt control. OpenAI’s Sora 2 is pitched around more physical accuracy, synchronized dialogue and sound effects. Runway’s Gen-4 is built around scene consistency and repeatable characters. At the same time, YouTube has clarified that repetitive, mass-produced “inauthentic content” is not monetizable, TikTok and Meta keep expanding labels for AI-generated media, and the EU’s transparency rules for AI-generated content are due to apply from 2 August 2026.
That tension matters more than any single model release. The boom was never only about technology. It was about cheap production meeting weak friction. A nonsense AI clip does not need sharp writing, real reporting, or careful editing. It only needs to stop a thumb for a few seconds, look odd enough to be shareable, and fit the logic of feeds built around endless novelty. That is why so much early synthetic video has felt empty, repetitive, and strangely hypnotic. The threshold for “worth watching” on a feed is much lower than the threshold for “worth making.” Yet that gap narrows once money, trust, policy, and verification start pressing on the same weak spots at once.
The better question is not “When will AI video end?” It will not. The right question is when the mass reward system for pointless AI video breaks down. That breakdown has already started in pieces. Audiences are getting more suspicious. Platforms are building disclosure and provenance layers. Regulators are turning transparency into a compliance issue. Advertisers still care about credibility. Scams, election abuse, and child-directed sludge are bringing harsher scrutiny than meme culture alone ever would. The boom ends when nonsense clips stop being an easy way to buy reach, harvest watch time, and pass as harmless filler.
The boom exists because the tools stopped looking broken
For a while, AI video was a joke machine because it was visibly bad. Fingers melted, faces slipped, motion collapsed, and every clip carried the unmistakable smell of a demo. That phase is over. Today’s mainstream video models are not perfect, but they are well past the point where obvious technical failure protects the audience. Google pitches Veo 3 and 3.1 around realism, prompt adherence, creative control, extended video, and native audio. OpenAI describes Sora 2 as more physically accurate, more realistic, and more controllable, with synchronized dialogue and sound effects. Runway frames Gen-4 around consistent characters, locations, objects, and narrative continuity across scenes. That combination is enough to produce clips that look coherent on a phone screen, especially in short bursts where nobody inspects every frame.
That does not mean the tools suddenly became wise, tasteful, or narratively rich. It means they became good enough for feed-native spectacle. A surreal fake news clip, a cursed animal short, a fake celebrity reaction, a pseudo-documentary fragment, or a toddlers’ channel packed with synthetic motion does not need to hold up under close reading. It only needs surface consistency. The newest generation of tools is built to supply exactly that: enough visual and sonic plausibility to keep the illusion alive for a few seconds at a time. The result is a huge production asymmetry. One person can make, remix, and schedule a volume of strange moving images that used to require a team.
That is why so much “AI slop” arrives in forms that do not ask much of the model. It tends to cluster around content types where coherence is optional and repetition is invisible. Children’s loops, uncanny motivational shorts, fake wildlife, celebrity nonsense, disaster simulations, pseudo-history, hyperbolic finance clips, and emotional bait all benefit from the same thing: viewers do not stay long enough to audit the logic. This is not a side effect. It is the economic shape of the medium at this stage. Low-effort synthetic video spread first because it matched the strengths of the tools and the weaknesses of the feed at the same time.
That first advantage will not vanish just because models improve further. Better models can just as easily make bad content scale harder. In fact, that is part of the current trap. Quality gains do not automatically produce meaning; they often produce more convincing emptiness. A sharper uncanny clip can be worse for the information environment than a clumsy one because it removes the visual cues that once warned people off. UNESCO’s description of deepfakes as a broader “crisis of knowing” gets at the heart of the problem. The issue is not only that synthetic media can fool people. It is that it weakens basic confidence in what counts as evidence at all.
That is why the present boom feels bigger than a normal meme cycle. It is not just viral trash. It is viral trash backed by industrial-grade generation tools. The first phase of the boom was driven by novelty. The second phase is driven by capability. The third phase, which is starting now, will be determined by whether the rest of the ecosystem still finds it profitable to distribute, monetize, and half-trust that output at scale. That is where the real end of the boom begins.
Platforms still reward volume long after novelty fades
A feed does not need to “like” nonsense for nonsense to thrive. It only needs to fail to penalize it early enough. That distinction matters. Platforms rarely kill a new content form because it is shallow. They kill it when it starts creating reputational, legal, or monetization problems. Until then, a giant volume of low-value material can keep flowing because the cost of publication is tiny and the system still treats engagement signals as evidence of relevance. That is why absurd AI video can remain common even after users start mocking it.
TikTok’s own help pages show the limit of labels as a cure. The platform requires creators to label realistic AI-generated content and can automatically apply AI labels when it detects qualifying content or reads C2PA metadata. It also bans certain harmful uses, such as fake authoritative sources or crisis events, and the unauthorized use of minors’ likenesses. Yet TikTok also says that turning on the AI-generated content setting does not affect the distribution of the video so long as it complies with the rules. That is a revealing line. It tells you the platform’s basic position: disclosure is important, but disclosure alone is not a throttle. A labeled nonsense clip can still spread if viewers stop for it.
YouTube draws a similar line, though in a different way. Its disclosure tool is aimed at realistic altered or synthetic content that a viewer could mistake for a real person, place, scene, or event. It explicitly says creators do not need to disclose content that is clearly unrealistic, animated, built with ordinary special effects, or made with generative AI for production assistance such as scripts or captions. That leaves a large safe zone for absurdity. If a clip is obviously fake, grotesque, fantastical, or stylistically cartoonish, it may sit outside the stricter disclosure demand even while still being mass-produced sludge.
That is one reason the boom will fade unevenly. The middle category gets hit first: content that wants the authority of reality without the burden of proof. Obviously fictional weirdness can keep circulating for longer because platforms treat it as less deceptive. The system is far more alarmed by fake evidence than by fake nonsense. That is sensible from a harm-reduction view, but it also means a large zone of worthless synthetic material can persist even while tougher rules appear everywhere else.
Still, the revenue model is starting to change. YouTube’s monetization policy update from July 2025 renamed its “repetitious content” policy to “inauthentic content” and clarified that repetitive or mass-produced material is ineligible for monetization. YouTube did not suddenly discover the problem; it said this material had always been ineligible under its originality rules. The important part is the clarification. Once platforms start naming a pattern more directly, creators and content farms get a signal that the old arbitrage is closing. Even before removals rise, the cheap-money logic weakens.
That shift does not shut the faucet immediately. Some producers will still use affiliate schemes, off-platform funnels, fake virality, or simple brute-force volume. Some will move to niches where demonetization matters less. Some will stay just ahead of enforcement by changing format faster than policy language. Yet the long arc is clear. A content form can survive audience ridicule. It struggles once reach remains uncertain and payment gets harder. That is the point where a boom turns into residue.
Audience trust is fraying faster than synthetic video is improving
A boom can outlive criticism for a surprisingly long time. It cannot outlive indifference and distrust at scale. That is where the current AI video cycle looks weaker than it did a year ago. Ofcom’s 2026 report found that 57% of adults aware of AI would trust an AI-generated news story less than one written by a person, while only 7% would trust it more. The same report found that confidence in spotting AI-generated content remains mixed: 44% felt confident, while a large share remained unsure or neutral. Ofcom also found that 56% of social media users said they had seen false or misleading news in the past year. That is not a climate of relaxed curiosity. It is a climate of ambient suspicion.
Pew Research’s 2025 work on AI attitudes points in the same direction. Its survey material highlights strong concern among both the public and AI experts around deepfakes and inaccurate information. That broad alignment matters. Usually, hype cycles survive because elite enthusiasm outruns public caution. Here, the caution is spreading across both groups. People may still watch synthetic clips, but they are watching them inside an environment that increasingly assumes deception is normal. That raises the reputational cost of publishing anything that looks synthetic but asks to be believed.
The Reuters Institute’s 2025 report adds another useful clue. It found that using generative AI for getting information had overtaken using it for creating media such as text, images, and video. That suggests the center of gravity is shifting. When a technology leaves the novelty phase, the market starts sorting by utility. People stop being impressed that something exists and start asking what it is for. That is bad news for nonsense AI video. Empty spectacle thrives while the medium itself is the story. It fades once the medium becomes ordinary and users begin judging outputs by usefulness, trust, and clarity rather than sheer weirdness.
Advertisers are part of this correction, even if they move quietly. Deloitte’s 2025 Digital Media Trends report says social platforms are extending generative AI tools to creators, but it also notes that creators offer more credibility and authenticity to brands and advertisers. That sentence carries more weight than it appears to. It means the commercial market still places a premium on a recognizable author, a trusted niche voice, and accountable human presence. A feed can be flooded with synthetic junk and still send brand money toward people who feel real. That split matters because money tends to discipline formats long before culture does.
This is where a lot of predictions about endless “AI slop” miss the mood of the market. They assume that better generation automatically creates stronger demand. The evidence points somewhere messier. Yes, synthetic video is easier to make. No, that does not mean audiences will keep rewarding the flimsiest version of it forever. Repetition kills surprise. Suspicion kills authority. Once surprise and authority both weaken, only genuine entertainment, clear utility, or strong authorship remain. The nonsense middle starts to sag.
The moderation turn has already started
It is easy to treat platform policy as theater. Sometimes it is. Even so, the direction of travel matters because platforms rarely build this much disclosure machinery unless they believe a category is becoming a long-term risk. YouTube now requires creators to disclose realistic altered or synthetic content, and its “How this content was made” system can show viewers whether a video includes altered or synthetic material. In some cases, YouTube may proactively apply such a label when content is undisclosed, and the creator cannot remove it. Neal Mohan’s 2026 letter goes further: labels are not enough on their own, and YouTube says it removes harmful synthetic media that violates guidelines.
That sounds modest until you place it next to YouTube’s newer trust signals. The company now has a “Captured with a camera” disclosure for some videos, indicating that specific technology verified origin and confirmed the audio and visuals had not been altered. It also introduced Likeness Detection, which helps creators find content where their face appears altered or generated by AI and then decide whether to seek removal. Put those pieces together and the pattern is obvious: YouTube is building a ladder of trust, from synthetic disclosure on one side to verified-origin capture on the other. That does not ban slop. It does make slop easier to mark off from material with stronger provenance.
TikTok is moving along the same track. It requires labels for realistic AI-generated content, can auto-label AI media, reads C2PA Content Credentials to identify content from other platforms, and has begun adding invisible watermarks to AI-generated content made with TikTok tools and to content uploaded with C2PA credentials. The platform also says some harmful AI-generated content is prohibited even if labeled, including fake authoritative sources, fake crisis events, and certain uses of public figures or minors’ likenesses. That is not a permissive novelty stance anymore. It is a layered integrity system.
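To make that layering concrete, here is a minimal sketch of how such a stacked decision might be ordered, written in Python. It illustrates the policy logic described above, not any platform’s actual pipeline: the `Post` fields, the classifier signals, and the four-way outcome are all hypothetical simplifications.

```python
# Minimal sketch of a layered integrity decision, loosely modeled on the
# policies described above. All fields and outcomes are hypothetical;
# real platform pipelines are far more involved.

from dataclasses import dataclass

@dataclass
class Post:
    has_c2pa_ai_credential: bool  # provenance metadata says "AI-generated"
    creator_disclosed_ai: bool    # creator toggled the AI-content setting
    looks_realistic: bool         # could a viewer mistake this for real footage?
    harmful_category: bool        # fake crisis event, fake authority, minor's likeness, etc.

def moderate(post: Post) -> str:
    # Layer 1: some harmful synthetic content is removed even if labeled.
    if post.harmful_category:
        return "remove"
    # Layer 2: trusted provenance metadata triggers an automatic label.
    if post.has_c2pa_ai_credential:
        return "auto-label"
    # Layer 3: realistic AI content must carry a creator disclosure.
    if post.looks_realistic and not post.creator_disclosed_ai:
        return "require-label"
    # Layer 4: compliant content distributes normally; a label alone
    # does not throttle reach.
    return "allow"

print(moderate(Post(
    has_c2pa_ai_credential=False,
    creator_disclosed_ai=False,
    looks_realistic=True,
    harmful_category=False,
)))  # "require-label"
```

The ordering is the point: removal for prohibited categories sits above labeling, and a compliant label by itself leaves distribution untouched, which matches the platform statements quoted earlier.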
Meta’s position has also become more structured. It said it wanted people to know when they were seeing content made with AI and has shifted its cross-app labeling toward “AI Info.” Meta also says its misinformation standards require people to disclose organic content with photorealistic video or realistic-sounding audio that was digitally created or altered by AI. At the same time, Meta has admitted that industry-standard indicators can lead to edge cases, such as minor AI retouching being swept into labels that felt too broad, which is why it adjusted its labeling language. That admission matters because it shows the industry is still refining the mechanics, not retreating from the goal.
This is the part many people underestimate. The future does not require a sweeping ban on absurd AI video. It only requires enough friction to make the low-value version less attractive to produce. Labels reduce ambiguity. Provenance tools create a premium on verified origin. Likeness complaints raise the risk of impersonation. Harm-based moderation carves away the most dangerous uses. Monetization rules squeeze repetitive output. Once all of those pressures stack up, the market no longer looks like a playground. It starts to look like compliance.
Law, labels and provenance will squeeze the middle
The next major pressure point is not a new model. It is governance. The European Commission says the AI Act entered into force on 1 August 2024 and will be fully applicable two years later, on 2 August 2026, with staggered exceptions. In December 2025, the Commission also published the first draft of its Code of Practice on marking and labelling AI-generated content and stated plainly that the transparency rules for AI-generated content would become applicable on 2 August 2026. Article 50 is central here because it deals with marking AI-generated or manipulated content and with labeling deepfakes and certain AI-generated publications of public interest.
That matters because disclosure stops being a platform preference and starts becoming part of a wider compliance environment. The law is not trying to outlaw synthetic media as such. It is trying to make deceptive synthetic media harder to pass off as ordinary reality. That distinction is why the biggest pressure will land on pseudo-real clips, fake authority, fake evidence, and unlabeled manipulations that want the social advantage of authenticity without paying the cost of transparency. The broad swath of bizarre, obviously fictional AI junk may remain visible longer because it sits outside the highest-harm zone. The legal squeeze hits the deceptive middle first.
The technical layer under this is just as important. C2PA describes itself as an open standard for establishing the origin and edits of digital content. Adobe’s Content Credentials system presents those credentials as a kind of digital nutrition label, carrying information about whether something was captured by a camera, generated by AI, or edited in specific ways. TikTok already reads C2PA credentials for auto-labeling. YouTube has its own verified-origin disclosure. This is the shape of the coming environment: a messy but growing ecosystem where platforms and tools exchange provenance signals rather than relying only on a creator’s honesty.
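For readers curious what those provenance signals look like in practice, here is a minimal sketch of checking a parsed C2PA manifest for an AI-generation marker. The IPTC `digitalSourceType` value for trained algorithmic media is a real signal defined in the C2PA specification, but the manifest dict below is a simplified, hand-built fragment: in practice it would come from a C2PA SDK after cryptographic verification, a step this sketch assumes away.

```python
# Minimal sketch: deciding whether a parsed C2PA manifest signals AI generation.
# Assumes the manifest has already been extracted and verified by a C2PA SDK
# and handed to us as a plain dict; the extraction step is not shown.

# IPTC digital source type that C2PA uses to mark generative-AI output.
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def manifest_signals_ai(manifest: dict) -> bool:
    """Return True if any c2pa.actions assertion declares AI generation."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") == TRAINED_ALGORITHMIC_MEDIA:
                return True
    return False

# Hand-built fragment shaped like a Content Credentials record:
example = {
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        "digitalSourceType": TRAINED_ALGORITHMIC_MEDIA,
                    }
                ]
            },
        }
    ]
}

print(manifest_signals_ai(example))  # True -> a platform could auto-apply an AI label
```

This is roughly the mechanism behind TikTok’s auto-labeling of C2PA-credentialed uploads: the platform does not have to trust the uploader’s word, because the claim travels with the file.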
The pressures most likely to end the boom
| Pressure | What it changes | Earliest visible effect |
|---|---|---|
| Better disclosure and auto-labeling | Synthetic content loses some of its ambiguity | Already underway |
| Demonetization of mass-produced uploads | Low-effort volume farms lose easy upside | Already underway |
| Provenance systems such as C2PA and Content Credentials | Verified origin becomes easier to signal | 2026 onward |
| EU transparency rules | Disclosure becomes a compliance issue, not a courtesy | From 2 August 2026 |
| Advertiser preference for credible creators | Brand budgets drift toward authored, accountable media | Already visible |
These are not official milestones. They are a reasoned forecast built from platform rules, trust signals, provenance standards, advertiser incentives, and the EU timeline now on the table.
The big consequence is a three-way split. Clearly fictional AI video survives. Clearly verified real footage gains value. The hardest place to stay is the murky middle. That middle powered much of the current boom because it borrowed the social force of reality while retaining the cheapness of fabrication. Once labels, watermarking, provenance, and legal duties converge, that advantage shrinks. Not overnight. Not perfectly. Enough to change the economics. That is usually how booms end.
The child-content and scam problem will force harder enforcement
If the story were only about spammy entertainment, the response would stay slow. It will not stay slow because the ugliest uses of synthetic video are already pulling institutions into the fight. Fairplay said in April 2026 that its letter to Google and YouTube about “AI slop” for children was backed by 230-plus organizations and experts. The group argued that such material harms children’s development by distorting their sense of reality, overwhelming learning processes, and hijacking attention. Advocacy campaigns do not prove harm by themselves, and their numbers should be read as advocacy claims, not neutral audits. Even so, the political signal is unmistakable: child-directed synthetic junk is no longer being treated as a funny side effect of AI tools. It is becoming a platform liability.
The scam angle is even more direct. The FTC proposed stronger protections against AI impersonation of individuals in 2024, explicitly connecting the move to the harms posed by AI-generated deepfakes. A year later, the FTC said that in the first year since its Impersonation Rule took effect, it had already brought five cases and shut down 13 websites illegally impersonating the agency online. Those actions were not all about video, but the message is broader than any single format: once AI-generated impersonation becomes a consumer-protection issue, the regulatory mood changes. Fraud pushes policy faster than aesthetics ever will.
Election integrity adds another layer. EDMO’s review of the 2024 election cycle noted that rapid advances in generative AI created new challenges for democratic processes at a moment when almost 2 billion people across more than 70 countries were eligible to vote. Platforms read that landscape clearly. TikTok’s election-integrity posts keep repeating the same architecture: labels for realistic AI content, C2PA-based identification, and stricter treatment of harmful synthetic media. During election-sensitive periods, the tolerance for ambiguous pseudo-real content drops. That kind of high-alert moderation does not stay neatly confined to politics. It spills outward into the broader content ecosystem.
UNESCO’s framing helps explain why this pressure keeps widening. It describes deepfakes and synthetic disinformation not as a narrow fact-checking nuisance but as a crisis of knowing. That phrase matters because it captures the cultural cost of endless fake media: people do not simply fail to recognize truth; they stop trusting the possibility of shared evidence. A culture that feels permanently duped starts demanding stronger signals of origin, authorship, context, and accountability. That mood is poison for the middle tier of nonsense AI video, which lives off ambiguity more than craft.
This is where the boom runs into a wall that technology alone cannot smooth over. Better generation does nothing to solve the fact that children, voters, and scam victims are the groups policymakers care about most. Once synthetic video touches those zones often enough, platforms stop seeing it as a creator feature and start seeing it as a governance problem. That shift has already begun.
The market is heading toward a split, not a collapse
The cleanest answer to the original question is this: the boom probably peaks as a mass feed phenomenon around 2026, weakens clearly across 2027, and by 2028 feels less like a boom than a background pollutant. That is an inference, not a measured forecast. I am reading the market through four forces happening at once: rapid model improvement, rising distrust, tougher monetization and moderation, and a formal transparency regime arriving in Europe on 2 August 2026. None of those forces alone would be enough. Together, they are usually what kills a low-quality arbitrage.
That does not mean nonsense AI videos disappear. They will not. Some of them will stay because they are openly absurd and sit outside the strictest deception rules. Some will migrate into low-status zones of the internet where brand safety and institutional trust matter less. Some will persist as disposable comedy, ironic meme stock, scam bait, or children’s sludge. Platforms themselves leave room for that persistence. YouTube does not require disclosure for clearly unrealistic or animated content, and TikTok says AI labels do not by themselves reduce distribution for compliant posts. Those are not loopholes in the accidental sense. They are signs that the system is trying to manage deception, not erase synthetic media from public life.
The deeper change is that high-distribution surfaces will become choosier. Recommendation systems are not moral actors, but the companies behind them do respond when a format stops being commercially comfortable. If mass-produced content becomes harder to monetize, if suspicious media carries clearer labels, if verified-origin capture becomes easier to signal, and if audiences keep telling researchers they trust AI-made information less, then the prime inventory of the feed starts favoring content with stronger authorship and clearer provenance. That does not require a cultural revolution. It just requires enough friction to make the lowest tier less worth flooding.
A lot of people expect the end of a boom to look cinematic. Usually it looks administrative. Views soften. Monetization gets harder. Labels multiply. Trust drops. Serious creators separate themselves from the sludge. The weird stuff still exists, but it loses the center of the room. That is the most likely future for nonsense AI video. Not extinction. Demotion. It moves from “the next big thing” to “the cheap junk you scroll past.”
The post-slop internet will still use AI video
The strongest reason not to mistake this cooling for a total collapse is simple: AI video does have real uses. The same systems that make low-effort junk easier also make previsualization, concept testing, rapid prototyping, background generation, stylized sequences, localization, and hybrid editing easier. Runway’s emphasis on consistent characters and controllable scenes is not meaningless. Veo’s native audio and stronger prompt adherence are not meaningless. Sora 2’s synchronized dialogue and sound effects are not meaningless. These are genuine production capabilities. They will stay.
What changes is the threshold for respectability. The market will keep rewarding AI video that is authored, edited, contextualized, and accountable. A filmmaker using synthetic shots for preproduction is different from a content farm pushing pseudo-real nonsense at industrial scale. A teacher using generated visuals with clear disclosure is different from a channel flooding children with mesmerizing loops. A branded campaign with a known creative team is different from a faceless feed mill trying to borrow reality without owning it. Provenance tools and disclosure systems are not perfect, but they point toward a media environment where those distinctions matter more.
That is also why the phrase “AI slop” has a limited shelf life. It describes a phase of market behavior, not the destiny of the medium. Right now, too much synthetic video is still being judged by whether it can be made at all. Later it will be judged by more ordinary standards: Who made it? What is it for? Can I trust the context? Does it add anything I could not get from a human with a camera, an editor, or an animator? Those are healthier questions. They are also harder questions for junk producers to answer.
So, when does the boom end? Not when the models stop improving. Not when the internet gets bored for a week. It ends when distribution, money, and trust stop rewarding emptiness at industrial scale. The signs of that change are already visible. Late 2026 through 2027 is the likeliest window when the shift becomes obvious on the big platforms. By 2028, the nonsense will still be around, but the boom will feel spent. The feed will still contain synthetic video. It just will not be able to bluff its way into cultural centrality so easily anymore.
FAQ
When will nonsense AI videos actually peak?
The strongest reading of current signals is around 2026 as a visible mass-feed phenomenon, with a clearer cooling across 2027 as monetization rules, disclosure systems, provenance signals, and legal transparency duties start biting harder. That timeline is an inference from current platform policy, trust data, and the EU’s 2 August 2026 transparency milestone, not a formal industry forecast.
Will AI slop disappear completely?
No. It is more likely to be demoted than erased. Obviously fictional, bizarre, or meme-like clips can keep circulating, especially where platforms see little deception risk. YouTube does not require disclosure for clearly unrealistic or animated content, and TikTok says AI labels do not by themselves reduce distribution for compliant posts.
Why do platforms still let a lot of this content spread?
Because the main policy target is usually harmful deception, not generic bad taste. Platforms are building labels, watermarking, provenance and complaint systems, but many of them still treat compliant synthetic media as allowed content. The clampdown becomes sharper when the material looks real, misleads viewers, imitates people, targets children, or enters politics and scams.
Will labels and watermarks be enough to stop the boom?
Not by themselves. Labels explain; they do not automatically suppress. TikTok says the AI-generated setting does not affect distribution on its own, and YouTube has said labels are not always enough, which is why harmful synthetic media may be removed under broader policy rules. Labels matter most when combined with monetization pressure, provenance systems, advertiser caution, and legal duties.
What role will the EU AI Act play?
Its biggest effect here is normalizing transparency as a requirement rather than a courtesy. The Commission says the transparency rules for AI-generated content become applicable on 2 August 2026, and the Code of Practice work around marking and labeling is meant to prepare providers and deployers for that environment. That will not ban synthetic media, but it should make unlabeled pseudo-real content harder to sustain.
Why is child-directed AI video under heavier pressure than random meme content?
Because children create a far stronger political and regulatory trigger. Fairplay’s 2026 campaign against YouTube’s “AI slop” for kids shows how quickly child-directed synthetic content becomes a platform-liability story rather than a creator-tools story. Once policymakers and advocacy groups frame a format as harmful to child development, tolerance drops fast.
What kind of AI video is most likely to survive after the boom?
The strongest survivors are likely to be hybrid forms with clear authorship and purpose: previsualization, stylized creative work, production support, branded content with accountable teams, educational visuals with disclosure, and creator-led work that uses AI as a tool rather than as a substitute for judgment. The platforms and provenance systems now being built reward that direction much more than faceless synthetic volume.
Author:
Jan Bielik
CEO & Founder of Webiano Digital & Marketing Agency

This article is an original analysis supported by the sources cited below.
Sora 2 is here
OpenAI’s announcement of Sora 2 and its emphasis on realism, control, dialogue and sound.
Veo
Google DeepMind’s product page for Veo, outlining current video-generation capabilities and audio support.
Introducing Runway Gen-4
Runway’s overview of Gen-4 and its focus on scene consistency, controllable media, and repeatable characters.
YouTube channel monetization policies
YouTube’s monetization rules, including the clarification around repetitive, mass-produced, and “inauthentic” content.
How we’re helping creators disclose altered or synthetic content
YouTube’s explanation of when creators must disclose realistic altered or synthetic content.
Understanding ‘How this content was made’ disclosures on YouTube
YouTube’s documentation on synthetic-content disclosures and proactive labeling.
Building trust on YouTube: ‘Captured with a camera’ disclosure
YouTube’s description of verified-origin disclosure for footage confirmed as unaltered.
Likeness detection on YouTube
YouTube’s tool for finding AI-altered or AI-generated uses of a creator’s face.
About AI-generated content
TikTok’s rules for labeling realistic AI-generated content and its limits on harmful AIGC.
Partnering with our industry to advance AI transparency and literacy
TikTok’s announcement that it can read C2PA Content Credentials for auto-labeling.
More ways to spot, shape and understand AI-generated content
TikTok’s update on invisible watermarks and ongoing AI transparency work.
Our Approach to Labeling AI-Generated Content and Manipulated Media
Meta’s explanation of its labeling approach and the shift toward “AI Info.”
Labeling AI Content
Meta’s transparency page describing how its AI labels are applied across apps.
Misinformation
Meta’s policy page covering disclosure duties for photorealistic AI video and realistic-sounding audio.
AI Act
The European Commission’s overview of the AI Act timeline and staged application dates.
Commission publishes first draft of Code of Practice on marking and labelling of AI-generated content
The Commission’s update on Article 50 implementation and the 2 August 2026 transparency deadline.
C2PA | Verifying Media Content Sources
The standard-setting body behind Content Credentials and media provenance metadata.
Content Credentials overview
Adobe’s explanation of Content Credentials as a durable record of how content was made.
AI risks, opportunities, regulation: Views of US public and AI experts
Pew Research material on public and expert concern around deepfakes and inaccurate AI-generated information.
Adults’ Media Use and Attitudes 2026 Report
Ofcom’s 2026 trust and media-literacy findings on AI-generated information and misleading online content.
2025 Digital Media Trends
Deloitte’s research on creator economies, platform shifts and the commercial value of authenticity.
Generative AI and news report 2025: How people think about AI’s role in journalism and society
Reuters Institute reporting on how public use of generative AI is shifting from media creation toward information use.
Deepfakes and the crisis of knowing
UNESCO’s analysis of synthetic media as a broader challenge to trust, evidence and social knowledge.
Generative AI and Disinformation in 2024 Elections: Implications for Democracy Going Forward
EDMO’s review of election-period AI risks and the strain on democratic information systems.
FTC Proposes New Protections to Combat AI Impersonation of Individuals
The FTC’s proposal linking AI-generated deepfakes to consumer fraud and impersonation harms.
FTC Highlights Actions to Protect Consumers from Impersonation Scams
The FTC’s enforcement update showing early action under its impersonation rule.
YouTube: Stop ‘AI Slop’ for Kids, Says Letter from Fairplay, Over 200 Experts, Including Jonathan Haidt
A recent advocacy campaign that shows how child-directed AI video is becoming a policy flashpoint.