The claim sounds neat: AI makes images and video on demand, so stock libraries must be finished. It is a clean headline. It is also the wrong one.
The companies sitting closest to this market are not acting like businesses waiting for extinction. Getty Images said its full-year 2025 revenue reached $981.3 million, the highest in the company’s 30-year history. Shutterstock reported $989.9 million in 2025 revenue. Those are not the numbers of an industry that has already been wiped out. They are the numbers of an industry being rebuilt while it is still very much alive.
What is actually happening is sharper and more interesting. AI is attacking the generic middle of the market: the predictable stock shot, the fast filler visual, the background illustration, the anonymous “business people in meeting room” file that never needed much specificity in the first place. But the same wave is also making licensed archives, rights management, provenance, real-world capture, and authentic visual evidence more valuable. Getty, Shutterstock, and Adobe are all adding AI products, training-data businesses, contributor payment models, and authenticity tooling rather than abandoning the field.
That is why the better answer to the question is simple: no, photo and video libraries will not disappear because of AI. They will split, adapt, and become more selective. And authentic photography and video will remain, not as a nostalgic leftover, but as a more valuable class of media inside a world flooded with synthetic supply.
The obituary was written too early
The mistake starts with a narrow idea of what stock media companies actually sell. People imagine a stock library as a giant shelf of still images and clips. Pick a file, pay a fee, move on. That model was always incomplete. A serious stock business has never been only about pixels. It has been about search, metadata, releases, indemnification, curation, licensing terms, archive maintenance, contributor networks, editorial access, distribution, and trust.
That fuller picture matters now because AI does not remove those layers. In several cases, it makes them harder and more valuable. A generated image may look fine in a mockup, yet a real campaign still needs answers to practical questions. Where did the source material come from? Who owns what? Was a person’s likeness lawfully used? Is the asset commercial or editorial? Is there a release? Is the file tied to a licensed workflow? Those are not decorative questions. They decide whether a buyer can actually use the image without inviting legal trouble or brand risk.
The financial picture backs that up. Getty’s 2025 results showed growth in total revenue, growth in editorial revenue, and more than half of revenue coming from annual subscriptions. Shutterstock also reported nearly $1 billion in 2025 revenue. Even without pretending that every corner of the market is equally healthy, that is enough to reject the lazy version of the “AI killed stock” thesis. The market is under pressure, but pressure is not disappearance.
The product moves tell the same story. Getty launched a generative AI offering trained on licensed content and presented it as commercially safer for customers. Adobe built Firefly on Adobe Stock, openly licensed content, and public-domain material, while excluding Adobe Stock editorial content from training. Shutterstock expanded its licensed AI training datasets and keeps a contributor fund tied to that business. Those are not defensive gestures from companies waiting for the lights to go out. They are strategic moves by firms that understand where money is shifting.
There is another reason the obituary lands too early. Stock is not one market. It is several markets squeezed into one interface. Editorial photography is not the same thing as generic commercial stock. Sports footage is not the same thing as loopable background video. Licensed archive material is not the same thing as an AI-generated illustration uploaded last week. Product marketers, publishers, agencies, broadcasters, documentary producers, app builders, and model trainers are not buying the same thing for the same reason. Once you stop flattening all of that into one vague category, the future becomes easier to read.
The low-value end of generic imagery will take the hardest hit. That much is real. Yet the larger structure around stock media still serves needs that synthetic generation does not erase. It only changes the mix.
Stock libraries are changing shape, not disappearing
The strongest evidence sits in plain sight: the major agencies are turning into hybrid rights-and-data companies.
Getty’s AI generator is marketed as trained exclusively on licensed content, with legal protection attached to the offering. Shutterstock has been openly expanding its business around licensed training datasets, curation, and AI services. Adobe has gone even further in making the shift visible: Firefly is tied directly to the company’s creative software stack, its stock contributor system, and a compensation model for contributors whose work was used in Firefly training.
That changes what a stock library is. It is no longer just a marketplace where buyers license ready-made images and clips. It is also a source of:
- licensed training data for model builders,
- safe generation workflows for enterprise customers,
- metadata-rich archives that help with search and filtering,
- content provenance infrastructure,
- and creator networks that can produce custom or semi-custom media at scale.
Shutterstock’s contributor documentation makes the pivot especially clear. Its data licensing materials describe datasets that include images, videos, 3D models, and music, with rules around exclusions and contributor payment. The company’s March 2026 announcement about expanded licensed training datasets points in the same direction: the archive is not dying; it is being repurposed into something broader and more industrial.
Adobe shows a different version of the same move. Firefly is trained on Adobe Stock, openly licensed content, and public-domain material. Adobe says it does not train Firefly on Creative Cloud subscribers’ personal content, and it does not use Adobe Stock editorial content for training. It also pays eligible contributors through a Firefly bonus system and maintains standard contributor royalty structures. That combination matters. It says the library is still a revenue source in the old sense, but it is also now part of the machine that powers new AI products.
Getty adds another useful detail. Its developer documentation for “refine” workflows says that using a Getty creative image as part of an AI refinement flow still requires a traditional license product that grants download access to the source asset. That is a small line with big implications. Even inside an AI workflow, the old logic of licensed source material has not vanished. The file, the contract, and the rights still matter.
This is what many outside observers miss. They imagine AI as a substitute for the archive. In reality, the archive often becomes the feedstock, the legal wrapper, the ranking layer, the provenance layer, or the commercial safety layer. That does not save every contributor from disruption. It does show why the libraries themselves are far more likely to mutate than disappear.
The generic middle will feel the hardest pressure
The part of stock media most exposed to AI is not the whole market. It is the broad strip of visuals that were already drifting toward sameness before generative models arrived.
Think about the files that used to fill pitch decks, blog headers, app splash screens, banner ads, and presentation slides: a generic customer-service rep with a headset, a smiling team around a laptop, an abstract city skyline, a drone-like sunset over “somewhere nice,” a faceless close-up of hands typing, a lifestyle clip built from mood rather than real specificity. Those assets were useful because they were fast, not because they were irreplaceable. AI is very good at making things that are merely useful in that thin way.
That is why catalog inflation is about to get worse, not better. Adobe Stock already accepts generative AI content if it meets submission standards. Shutterstock’s contributor materials explain how its AI-generated content tool works and tie it back to datasets licensed from Shutterstock. The practical result is obvious: many more files will exist for many more common prompts, and a lot of those files will compete in visual territory that was generic to begin with.
For buyers, that can be convenient. Need a quick header image for an internal deck? Need ten concept variations before lunch? Need a rough ad visual for a pitch no client may ever approve? AI will do a large share of that work. Plenty of existing stock content in those use cases will lose pricing power.
For creators, the squeeze is harsher. The comfortable middle ground of mass-producible “good enough” stock becomes unstable. A photographer or videographer who built a business around repeatable generic scenes now faces competition from both sides: a huge archive of old stock still online and an even bigger flood of synthetic substitutes.
Yet this does not mean creators have no place left. It means the weak parts of the old stock strategy get weaker. Specificity becomes the defense. Real access, rare locations, unusual professions, credible documentary texture, niche subject knowledge, local language context, identifiable seasons, region-specific architecture, real manufacturing settings, real healthcare environments, real logistics operations, real agriculture, real labs, real classrooms, real communities, real faces with releases, real motion shot with craft — those things remain harder to fake convincingly and much harder to license safely at scale.
Video adds another layer of resistance. A still image can hide its fakery more easily than a moving scene. Motion introduces timing, continuity, reflections, body mechanics, lens behavior, physical causality, sound sync, environment consistency, and editability across multiple shots. AI video is improving quickly, but the gap between a plausible short clip and a trustworthy usable sequence is still meaningful. Buyers notice that gap when the footage needs to survive real scrutiny, not just a scroll.
So yes, AI will eat a big part of the generic stock middle. That part of the warning is real. It still does not follow that stock media disappears. It follows that commodity visual filler becomes cheaper, while harder-to-fake captured work becomes more prized.
Rights, releases, and legal safety still decide real budgets
A generated image that looks polished is not automatically a usable commercial asset. This is where a lot of casual conversation about AI and stock media breaks down. People compare appearance to appearance. Buyers with real budgets compare risk to risk.
Getty’s AI product is explicitly framed around licensed training material and legal protection. Adobe describes Firefly as trained on Adobe Stock, openly licensed content, and public-domain content, and says it is designed to be safe for commercial use. Those claims are not marketing fluff floating above the business. They point directly at the reason major customers still pay intermediaries. Commercial media lives inside contracts.
The ugly version of the problem is familiar: unlicensed training disputes, trademark contamination, likeness issues, accidental copying, uncertain ownership, unclear chain of title, or confusion between editorial and commercial use. A file can be cheap and still be expensive if it drags a brand into a fight later.
That is why stock libraries do not become irrelevant once generation gets easier. They remain useful because they package media together with terms, warranties, restrictions, and search filters that matter to lawyers and procurement teams as much as they matter to creatives. Getty’s license agreement still lays out how photos, illustrations, vectors, and video clips can be used. The form of media may be changing, but the old licensing backbone is still there.
The copyright picture also keeps human-made work in a stronger position than many AI evangelists admit. The U.S. Copyright Office’s 2025 report on copyrightability says that copyright protects the human-authored elements of a work and does not extend in the usual way to material generated by a machine absent sufficient human authorship. That does not mean AI-assisted work is useless. It does mean the legal treatment of AI-heavy outputs is not as clean as many buyers would like. Human creativity is still the anchor point.
That legal asymmetry matters for stock. A library of human-shot, released, cataloged, and contractually governed media is easier to price, license, defend, and reuse than a universe of unclear machine outputs scraped from uncertain sources. This is one reason Getty, Shutterstock, and Adobe all keep circling back to licensed content. They know that visual abundance is not enough. Buyers want something they can actually ship.
There is a blunt way to put it. The bigger the budget, the less patience there is for ambiguity. AI wins fastest where consequences are small. Rights-cleared stock and commissioned capture stay strongest where accountability is real.
Authentic footage becomes scarcer as synthetic media floods the feed
Once synthetic media becomes easy to produce, authenticity stops being a default and turns into a premium attribute.
That premium is not mystical. It is technical, editorial, legal, and commercial. The industry is building systems to prove where media came from and how it was edited because the old assumption — “a photo is probably a record of something that happened” — is no longer stable. The C2PA standard exists to establish origin and edit history for digital content. The Content Authenticity Initiative pushes adoption of Content Credentials, and the Content Credentials ecosystem now includes participation from hundreds of companies.
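To make the idea concrete, here is a deliberately simplified sketch of the kind of facts a provenance manifest records. The real C2PA data model is far richer (signed claims, assertions, ingredient chains, cryptographic verification); the class name, fields, and logic below are illustrative assumptions, not the actual standard or any vendor's API.

```python
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    """Toy model of provenance metadata: who produced an asset,
    how it originated, and what edits were applied afterward."""
    producer: str                  # device or software that created the asset
    capture_type: str              # e.g. "camera" or "ai_generated"
    edits: list = field(default_factory=list)  # ordered edit history

    def add_edit(self, action: str) -> None:
        # Each edit is appended, preserving the order of changes.
        self.edits.append(action)

    def is_synthetic(self) -> bool:
        # The point of provenance labeling: a buyer or platform can
        # distinguish captured media from generated media by metadata,
        # not by squinting at pixels.
        return self.capture_type == "ai_generated"

photo = ProvenanceRecord(producer="Leica M11-P", capture_type="camera")
photo.add_edit("crop")
render = ProvenanceRecord(producer="image-gen-model", capture_type="ai_generated")

print(photo.is_synthetic())   # False
print(render.is_synthetic())  # True
```

The design choice worth noticing is that origin is a recorded attribute of the file, carried alongside the edit history, rather than something a viewer has to infer from how the image looks.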
Camera makers and news organizations are not treating this as a side issue. Leica says the M11-P was the first camera to integrate Content Credentials directly. Nikon’s Authenticity Service attaches secure Content Credentials to photos from select cameras. Thomson Reuters described a proof of concept built with Canon and Starling Lab to securely capture, store, and verify photographs. None of this infrastructure makes sense unless the market expects verified real capture to matter more in the AI era, not less.
Even AI companies have moved the same way. OpenAI said in 2024 that it was joining the steering committee of C2PA, calling it a widely used standard for digital content certification. In 2026, OpenAI also said that every video generated with Sora includes provenance signals and embeds C2PA metadata. That detail is easy to miss, but it is one of the cleanest tells in the whole market. The builders of synthetic media are helping build the labeling systems that distinguish synthetic media from captured media.
Where synthetic media wins and where authentic capture holds
| Most exposed to AI | Hard to replace with AI |
|---|---|
| Generic concept art, filler blog images, quick presentation visuals | Real events, real people, real products, real places |
| Mood clips, stylized background loops, speculative ad mockups | Verifiable news, documentary footage, behind-the-scenes evidence |
| Endless visual variations for pitches and brainstorming | Brand trust assets tied to access, releases, provenance, and accountability |
The table is short because the split itself is sharp. AI thrives where the visual only needs to be plausible. Authentic capture keeps value where the visual needs to be true, attributable, or defensible. The more the market cares about proof, the less synthetic abundance solves the whole job.
This shift also changes the emotional meaning of a real image or clip. A street scene, a protest, a factory line, a founder interview, a medical team at work, a local festival, a storm over an actual coastline — these are not just visuals. They are records. In a feed saturated with things that never happened, records get heavier.
Newsrooms, brands, and filmmakers still need what AI cannot witness
AI can simulate a protest. It cannot attend one.
That line gets to the heart of why authentic photography and video remain secure in the places that matter most. News, documentary, sports, corporate communications, product marketing, internal communications, compliance-heavy industries, employer branding, and public-interest reporting all rely on media that is connected to a real moment, a real subject, or a real claim. A synthetic replacement may look close enough for a thumbnail. It fails the minute the image is supposed to stand as evidence.
Getty’s 2025 results showed editorial revenue of $369.6 million, up 6.9% year over year. Adobe says Adobe Stock editorial content is not used to train Firefly. Those are two different signals, but they point in the same direction. Editorial imagery is not just another pile of pixels. It remains a distinct asset class because it is tied to actuality, timing, rights conditions, and trust.
The same logic applies outside journalism. Brands increasingly need visuals of their own products, staff, stores, warehouses, labs, customers, founders, offices, and events. A bank cannot safely tell its story with endlessly generic fake branch images forever. A hospital cannot build trust on synthetic caregivers alone. A manufacturer selling precision hardware needs footage of its actual process, not just glossy approximations of “industry.” A university recruiting students wants its campus, its faculty, its city, its classrooms. Those are not sentimental preferences. They are practical demands for specificity.
Video makes the point even harder. A brand film, founder interview, case study, recruitment piece, event recap, social proof clip, documentary insert, or training video is not replaced by a few generated scenes. Buyers need continuity across shots, real speech, real demonstrations, real product behavior, and scenes that can survive close viewing. They also need material that can be updated, re-edited, localized, versioned, and defended when someone asks whether the footage represents reality.
This is why authentic footage will not merely survive. It will likely divide into clearer premium tiers. The top tier will be verifiable, rights-clean, context-rich, hard-to-access capture. The next tier will be strong custom or semi-custom production. Lower tiers of generic filler will continue to lose value.
There is no contradiction here. AI will still enter many workflows around these jobs. Previsualization, animatics, test frames, alternate takes, concept boards, extension shots, rough backgrounds, low-stakes B-roll, and internal drafts will all be affected. Yet the center of gravity remains human whenever the image is doing more than decorating a page.
A synthetic image can imitate the look of witness. It cannot be witness. That difference is starting to matter more, not less.
Stock agencies are turning into rights and data infrastructure
The future of stock media becomes easier to understand once you stop picturing it as a giant folder of old JPEGs and MP4s.
The more accurate picture is infrastructure. The file still matters, of course. Yet around the file sits a stack of systems: contributor contracts, review pipelines, taxonomy, keywording, geolocation, release handling, enterprise billing, API access, provenance metadata, model-training rules, brand-safety filters, usage restrictions, and payment logic. AI does not sweep that stack away. It gives the stack more jobs to do.
Getty’s developer documentation for image refinement is a good example. If you want to refine an image from Getty’s creative library inside that AI workflow, Getty says you still need a traditional license product that gives download access to the source asset. That shows how the archive becomes a licensed component inside a new toolchain. The generated layer sits on top; the licensing layer still sits underneath.
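The gatekeeping logic that documentation describes can be sketched in a few lines. Everything here is hypothetical: the function name, the dictionary shape, and the field names are stand-ins for illustration, not Getty's actual API.

```python
def can_refine(source_asset: dict) -> bool:
    """Allow AI refinement only when the source asset carries a
    traditional license that grants download access.

    Hypothetical sketch; field names are illustrative assumptions."""
    lic = source_asset.get("license")
    # No license record, or a license without download rights,
    # blocks the AI refinement workflow entirely.
    return bool(lic and lic.get("grants_download", False))

licensed = {"id": "creative-123", "license": {"product": "standard", "grants_download": True}}
unlicensed = {"id": "creative-456", "license": None}

print(can_refine(licensed))    # True
print(can_refine(unlicensed))  # False
```

The point of the sketch is the ordering of the check: the rights question is answered before any generation happens, which is exactly the "licensing layer underneath" structure the documentation implies.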
Shutterstock is building a similar machine from a different angle. Its licensed dataset business is not just a way to rent out old files. It is a method for packaging content, metadata, exclusions, and contributor terms into something usable for model builders. The company’s expansion of licensed training datasets, along with its contributor fund, reveals a market where the library becomes a structured input for AI development and deployment.
Adobe’s position is even more integrated because it owns tools people use to make, edit, and distribute creative work. Firefly connects generation, editing, contributor compensation, and Content Credentials. Adobe’s Content Credentials materials describe those credentials as durable, industry-standard metadata that can show whether content was captured by a camera, edited in software, or generated by AI. That is exactly the kind of connective tissue that turns a media company into workflow infrastructure.
This matters for creators, too. The old dream of passive stock income from uploading a lot of broadly useful files becomes weaker. The new opportunity sits in becoming useful to the infrastructure: making licensable source content, producing specialized material, contributing niche archives, participating in training-data programs, attaching richer metadata, offering custom shoots, or supplying media that can carry stronger provenance and higher trust.
A library that helps a buyer find a pretty image will survive in some form. A library that helps a buyer find a pretty image plus lawful usage plus source transparency plus training rights plus creator attribution plus workflow compatibility will survive in a much stronger form.
That is where this market is heading.
A hybrid market is taking shape
The future is not “AI replaces stock” and it is not “nothing changes.” The future is a layered market with clearer roles.
The first layer is synthetic volume. This is where AI wins decisively: rough ideation, quick filler imagery, low-risk illustration, endless variants, internal presentations, fast concept work, cheap background motion, visual testing, and content that only needs to read well for a moment.
The second layer is human-directed production with AI assistance. A buyer may start with generation, then move into editing, compositing, refining, retouching, asset blending, localized variants, and production shortcuts. Much of commercial creative work will live here because it balances speed with control.
The third layer is authentic captured media. This is the part that gains value as synthetic supply explodes. It includes real people, real spaces, real operations, real events, real products, credible editorial work, documentary record, trustworthy footage, and material with provenance or at least a cleaner chain of origin. This layer does not stay valuable because people are sentimental about cameras. It stays valuable because false abundance makes truth easier to price.
That is why the simplest “AI kills stock” forecast misses the real split. AI lowers the cost of making visuals, but it does not lower the need for licensed, attributable, specific, and trustworthy media. In some segments it raises that need.
For photographers and videographers, the lesson is not to run from AI in panic or to imitate it badly. The lesson is to move toward what it struggles to supply at scale: access, judgment, real environments, specialist knowledge, repeat relationships, local trust, documentary presence, and production discipline. The safest subjects are not the bland ones. They are the ones that depend on being there.
For buyers, the lesson is just as clear. Use AI where speed matters more than proof. Use authentic stock or original capture where proof, rights, or brand credibility matter more than speed. Use hybrid workflows where both matter.
For agencies and libraries, the work ahead is already visible. They will keep licensing files, but they will also sell safer generation, contributor-compensated training access, provenance signals, enterprise search, and workflow connections between source assets and generated outputs. The archive does not vanish. It becomes part of a larger operating system for media.
The premium will move toward trust
A lot of cheap predictions about media collapse confuse making something look real with making something be reliable.
Those are no longer the same thing. That gap will shape the next decade of photography, video, and stock media. The wider the public gets used to synthetic abundance, the more valuable trust becomes — trust in origin, trust in licensing, trust in attribution, trust in whether a scene happened, trust in whether a brand is showing what it claims to show.
That is why the future of stock libraries is not bleak in the simplistic sense. It is demanding. Weak material will drown faster. Generic filler will lose pricing power. Contributors without a point of view or access edge will feel the squeeze. Some catalogs will bloat into noise. Search will get harder. Buyers will become less patient with sameness.
Yet the larger market does not disappear inside that turbulence. It reorganizes around a clearer divide:
plausible media for speed, and trustworthy media for consequence.
Stock photo and video libraries remain because the world still needs organized visual supply, lawful usage, searchable archives, and real-world media that somebody can stand behind. Authentic photography and video remain because the internet is heading into an era where evidence is no longer cheap.
The irony is hard to miss. The better AI gets at imitation, the more market value shifts toward things that carry proof of origin, human authorship, or actual presence. That does not weaken authentic photography and video. It gives them a sharper reason to exist.
FAQ
Will photo and video stock libraries disappear because of AI?
No. The strongest evidence points the other way. Getty and Shutterstock both reported close to $1 billion in 2025 revenue, while Getty, Adobe, and Shutterstock have all launched or expanded AI-related products and licensing models. The market is changing shape, not vanishing.
Which stock content is most at risk from AI?
The most exposed material is generic, repeatable, low-specificity content: filler blog images, generic business scenes, concept visuals, abstract backgrounds, and quick mood clips. That material competes directly with cheap synthetic output and with the growing acceptance of AI-generated submissions on stock platforms.
Why will authentic photography and video keep their value?
Because real capture solves a different problem. It can document an event, show an actual product, prove a location, represent a real person, or serve as evidence in editorial and brand settings. Provenance systems such as C2PA and Content Credentials are being adopted precisely because verified reality is becoming more valuable in an AI-heavy media environment.
Do licensing and rights still matter when AI can generate images?
Yes, especially when campaigns carry legal, reputational, or compliance risk. Buyers still care about licensing terms, releases, rights, and commercial safety. Getty and Adobe both frame their AI offerings around licensed inputs and safer commercial usage, which shows how central rights management still is.
What are Content Credentials?
They are industry-standard provenance metadata that can show who made a piece of content and whether it was captured by a camera, edited, or generated by AI. The standard is tied to the C2PA ecosystem and is being adopted across software, hardware, media, and AI platforms.
Are stock agencies turning into AI data companies?
Partly, yes — but not in a way that erases their older role. They are becoming hybrid businesses that combine archives, contributor networks, licensing, datasets, safer generation tools, and workflow infrastructure. Shutterstock’s licensed dataset business, Getty’s AI offering, and Adobe’s Firefly system all point in that direction.
Where should photographers and videographers focus now?
The strongest areas are the ones AI still struggles to replace cleanly: real access, niche expertise, documentary work, specific locations, real products, real teams, unusual environments, and footage that benefits from trust or provenance. The weaker strategy is mass-producing generic visuals that could be replaced by a prompt. This conclusion follows from the way agencies are restructuring their catalogs, rights models, and AI businesses.
Author:
Jan Bielik
CEO & Founder of Webiano Digital & Marketing Agency

This article is an original analysis supported by the sources cited below.
- Getty Images Reports Fourth Quarter and Full Year 2025 Results. Getty’s latest annual results, including total revenue, editorial revenue, and subscription mix.
- Shutterstock Reports Full Year 2025 and Fourth Quarter Financial Results. Shutterstock’s 2025 financial results and current business positioning.
- Commercially safe AI Image Generation and Modification. Getty’s overview of its AI image generation product and its licensed-content positioning.
- Generative AI User FAQs. Getty’s explanation of how its AI offering is framed for customers, including licensing and usage details.
- Generative AI Refine. Getty developer documentation showing how source-image licensing still applies inside AI refinement workflows.
- Getty Images Content License Agreement. Getty’s current licensing terms for photos, illustrations, vectors, and video clips.
- Shutterstock Announces Major Expansion of Licensed Training Datasets to Power the Next Generation of Generative AI. Shutterstock’s 2026 announcement on scaling its licensed dataset business for AI training.
- Shutterstock Data Licensing and the Contributor Fund. Shutterstock’s explanation of how datasets are built and how contributors are paid.
- AI-generated Content on Shutterstock: Contributor FAQ. Shutterstock’s contributor guidance on AI-generated submissions and related policies.
- Adobe Firefly – Free Generative AI for Creatives. Adobe’s main Firefly page, including training-source and commercial-use positioning.
- Firefly FAQ for Adobe Stock Contributors. Adobe’s explanation of Firefly contributor bonuses and training-related compensation.
- Generative AI Content. Adobe Stock’s rules for accepting generative AI submissions.
- Royalty details for contributors to Adobe Stock. Adobe’s current contributor royalty and payment framework.
- Content Authenticity Initiative. The industry initiative promoting adoption of content provenance and authenticity standards.
- Content Credentials | Verify Media Authenticity. Overview of the Content Credentials system and its role in media transparency.
- C2PA | Verifying Media Content Sources. The main site for the C2PA provenance standard used to track media origin and edits.
- Introducing Official Content Credentials Icon. C2PA’s announcement of the official icon used to signal content transparency.
- Copyright and Artificial Intelligence | U.S. Copyright Office. The U.S. Copyright Office’s hub for its AI and copyright reports.
- Copyright and Artificial Intelligence, Part 2: Copyrightability. The Copyright Office’s report on how copyright law applies to AI-assisted and AI-generated works.
- Reuters new proof of concept employs authentication system to securely capture, store and verify photographs. Reuters’ description of its photo authentication pilot with Canon and Starling Lab.
- Leica Content Credentials in the M11-P. Leica’s explanation of built-in Content Credentials in the M11-P camera.
- Nikon Authenticity Service | C2PA Content Credentials. Nikon’s overview of adding secure Content Credentials to supported camera outputs.
- Understanding the source of what we see and hear online. OpenAI’s statement on joining the C2PA steering committee and supporting provenance standards.
- Creating with Sora safely. OpenAI’s description of provenance signals and C2PA metadata in Sora-generated video.



