The safest way to back up massive video and RAW photo archives

One cloud copy is not a safety plan

The first recommendation is the one many people resist because it sounds less elegant than “I put everything in the cloud.” If your video footage and RAW photo archive lives in only one place, you do not have a backup. You have a single point of failure. That is true whether the single place is one external drive, one NAS, or one cloud account. Backblaze’s summary of the 3-2-1 rule still gets the core right: three copies of your data, on two different media, with one copy off-site. Veeam’s newer 3-2-1-1-0 version adds one immutable or offline copy and zero restore errors, which is a much better fit for media work where mistakes, ransomware, and accidental deletion matter just as much as drive failure.

That matters more for filmmakers and photographers than for ordinary office files because media archives are big, slow to rebuild, and often irreplaceable. A wedding shoot, a documentary field trip, a commercial production day, or a multi-year photo archive cannot be recreated by downloading an export from some SaaS app. The original camera files are the asset. If they disappear, the business loss is not only the storage bill. It is the shoot day, the travel, the people, the access, and the time. That is why a backup design for media has to deal with four separate risks: hardware failure, human error, malicious deletion or ransomware, and off-site failure such as a cloud region problem or account issue. The media world often talks as if these were the same problem. They are not.

Cloud storage reduces one category of risk very well: the risk that your only local box dies, gets stolen, burns, floods, or falls off a cart. It does not erase the rest. AWS says S3 versioning lets you preserve, retrieve, and restore every version of every object and recover from unintended user actions and application failures. Google Cloud says object versioning keeps noncurrent versions when a live object is replaced or deleted. Azure’s immutability and versioning features are built for the same reason. The lesson is plain: cloud providers themselves assume you should not trust a single current version of your files. They offer versioning, retention locks, and replication because deletion and overwrite are real failure modes even inside durable cloud storage.

There is also a second misunderstanding that quietly ruins archives: people confuse availability with backup. A RAID box may stay online when a disk fails. That is useful. It is not the same as having a second clean copy somewhere else. Synology states it directly in its own knowledge base: RAID, including SHR, is not a backup solution. A NAS is excellent as working storage or as one layer in a backup system, but a NAS by itself is still just one storage system. If someone deletes a folder, encrypts it with ransomware, corrupts it, or writes bad data into it, the RAID usually preserves the disaster very efficiently.

So the answer to the reader’s fear — “What if the cloud goes down and everything disappears?” — is simple and not comforting: if the cloud is your only archive, then yes, you are exposed. If the cloud is one copy in a layered design, then a cloud outage becomes a nuisance, not a catastrophe. That is the difference a real backup plan creates.

The backup rule that still works for giant media libraries

The old 3-2-1 rule survives because it describes failure independence better than most newer marketing slogans. You want the original working copy, a second copy on different storage, and a third copy off-site. For huge video and RAW photo archives, though, I would not stop there. Use 3-2-1-1-0 as the working standard. The extra “1” gives you one offline, air-gapped, or immutable copy. The “0” forces you to verify that your backups restore correctly and that your copied files match the source. Veeam spells that out plainly, and for media archives that part is not corporate theater. A backup you have never restored is an assumption, not a backup.

For creators, the three copies usually look like this. Copy one is your working storage. That might be a fast SSD RAID, a DAS enclosure, or a NAS used for active projects. Copy two is a separate local backup, ideally on another device that is not the same array mirrored in the same chassis. Copy three is off-site: cloud object storage, a second NAS in another building, rotated external drives stored elsewhere, or LTO tapes stored away from the main site. The extra immutable or offline copy can be part of that third copy, or it can sit beside it if your archive is large enough. The point is not to collect gadgets. The point is to make sure one failure does not take every copy with it.

Media archives add one ugly detail that ordinary backup advice often ignores: ingest speed. If you come back from a shoot with many terabytes, the backup system must fit the pace of production. Adobe’s own ingest and proxy workflow guidance reflects this by separating original media for backup from proxy creation for editing. That is exactly the right instinct. Keep the originals sacred. Make proxies cheap. Originals need verified copies, retention, and archive discipline. Proxies need convenience, speed, and broad access for edit and review. Those are not the same job, and mixing them leads to sloppy archive habits.

This is also where budgets get distorted. Many people overspend on the working tier and underspend on the boring tiers that prevent ruin. A fast RAID for current projects is nice. A second boring box that quietly holds clean backups is better. An off-site copy that cannot be wiped by a bad day is better still. The glamorous gear is rarely the thing that saves the archive. The thing that saves the archive is the copy you hoped you would never need. That is why immutable storage, versioning, snapshots, and offline media deserve real money in a production budget.

The stronger version of the rule also helps settle a common argument about “local versus cloud.” The answer is neither. You need both, and they need to fail differently. Local gear gives you speed, direct control, and fast restores. Cloud gives you distance and another fault domain. Offline disks or tape give you protection against live-system failure and malicious changes. Good backup design is not choosing a favorite medium. It is choosing independent failure paths.

A storage stack that survives the failures people actually suffer

If I were setting up a serious archive for video and RAW stills today, I would build it in layers. The first layer is fast working storage. That can be a RAID or a NAS, because editing directly from a single slow disk is miserable once timelines and catalogs get heavy. The second layer is local backup on a separate system. The third layer is off-site object storage or a second system in another location. The fourth layer, when the archive matters enough, is immutable or offline cold storage. That four-layer stack covers the real failures much better than a single “best” product ever will.

The working layer is about speed and convenience, not long-term trust. Use it for editing, culling, grading, and delivery. It is allowed to be busy. It is allowed to be exposed to users. It is allowed to change constantly. That is why it should never be your only home for the originals. Synology’s own material is useful here because it separates snapshot replication and disaster recovery from RAID itself. The hardware can keep you moving after a disk dies, but the protection against malicious or accidental deletion sits in snapshots and replication, not in RAID alone.

The local backup layer should live on a second physical system. That can be another NAS, a backup server, or even a bank of external disks if the budget is tight and the discipline is strong. The key is separation. It should not share the same single controller, power event, filesystem accident, or administrator mistake as the working tier. If you use ZFS on TrueNAS, snapshots give you read-only point-in-time copies, and scheduled scrubs check the pool for silent corruption and early disk problems. Those are exactly the kind of dull, technical habits that keep old footage readable years later.

The off-site layer is where object storage earns its keep. AWS, Google Cloud, and Azure all offer the features that matter here: versioning, retention or immutability, and geographic redundancy options. AWS supports Object Lock and cross-region replication. Google Cloud supports object versioning, object retention lock, and dual-region buckets with failover during a regional outage. Azure supports geo-redundant storage and version-level immutability, while also warning that geo-replication is asynchronous and may have an RPO gap during a major regional failure. That warning matters. “In the cloud” is not the same as “immune to regional loss with zero data gap.” You still need to choose the replication model deliberately.

The cold layer is where most serious archives become either sane or fragile. A truly cold copy is one that your day-to-day work cannot casually modify. That can be immutable cloud storage, an offline disk rotation, or LTO tape. The right choice depends on scale. For small archives, rotated offline disks are often enough if you label and verify them properly. For very large archives, tape starts making financial and operational sense because it is made for long retention and bulk movement. The LTO consortium is obviously selling tape, so take the marketing tone for what it is, but the technical case remains sound: high capacity, strong throughput, and LTFS support for easier retrieval.

Two storage models that make sense

Active archive: current and recent projects that must stay easy to reach, edit, restore, and search.
Deep archive: finished footage, RAW libraries, and source masters that you rarely touch but cannot afford to lose.

This split keeps spending honest. Not every file deserves premium fast storage forever. Recent jobs deserve convenience. Old but important jobs deserve safety first. That is why the best systems usually mix fast disk for the active archive and colder storage for the rest. Adobe’s proxy workflow guidance fits this split nicely because it lets you keep original media protected while lighter derivatives stay easy to work with. AWS Glacier tiers, Azure archive-capable designs, Google’s colder object storage classes, or LTO can all sit under the deep-archive side, depending on your restore expectations.

NAS, tape, and rotated disks each have a place

A lot of creators ask one question as if it had one answer: should I buy a NAS or use tape or just stack hard drives? The honest answer is that each one solves a different job. A NAS is excellent for shared working storage and for one layer of local backup. Tape is excellent for large cold archives. Rotated external disks are the cheapest serious off-site method if the archive is still small enough to handle physically. Problems start when people force one tool to do every job.

NAS first. If you work with other people, need central access, or want snapshot-based recovery, a NAS is usually the right backbone. Synology’s snapshot replication stack and immutable snapshots are useful because they deal with accidental deletion and ransomware-style events. TrueNAS gives you ZFS snapshots, replication, and scrub-based integrity checking. That is a serious set of features for media work. Still, the warning does not change: the NAS is one layer, not the whole plan. A NAS without a second copy somewhere else is still a risk concentration point.

Tape next. For people with tens to hundreds of terabytes of finished projects, tape deserves more respect than it gets in creator circles. LTO’s current public roadmap and benefits pages show why it stays alive: big cartridge capacity, strong transfer rates, and a format built for long retention. LTFS also makes tape easier to browse than older tape workflows that felt trapped inside special software. Tape is not fashionable, but it is still one of the cleanest answers to “I need an archive that stays offline, scales, and does not charge me rent every month forever.”

Tape does have limits. It is slower to retrieve than spinning up a NAS share. It needs discipline in labeling, cataloging, and storing cartridges. It feels excessive if your entire archive is 8 TB and you touch everything every week. The break-even point is not a universal number, because it depends on restore frequency, labor, internet speed, and whether you want to own the hardware. My own rule of thumb is simple: once the archive is big enough that cloud egress, restore time, or endless disk sprawl starts to irritate you every quarter, tape should enter the conversation. The sources support tape’s strength for high-capacity offload and long retention; that threshold is my judgment call, not an industry law.

Rotated external disks deserve a defense too. They are often mocked because they do not sound enterprise-grade. Yet for a solo photographer or a small video shop with 10 to 30 TB of archive, two or three well-managed external disk sets rotated off-site can be far safer than one fancy NAS with no second copy. The weakness is not the disks. The weakness is human behavior: bad labeling, no checksum reports, no rotation schedule, and no restore tests. Cheap storage becomes dangerous only when the process around it is sloppy.

So the better question is not “which medium is best?” It is “what role does each medium play in my archive?” NAS for work and fast restore. Cloud object storage for off-site copy and geographic separation. Offline disks or LTO for cold, independent retention. Once you think in roles, the design gets much clearer.

Cloud done properly for huge archives

Cloud is strongest when it is treated like durable off-site infrastructure, not like a magic black box. The first choice is between consumer sync-style services and proper object storage. For huge video and RAW photo libraries, object storage is usually the better long-term answer because it is designed for scale, versioning, retention controls, replication, and lifecycle policies. AWS S3, Google Cloud Storage, and Azure Blob Storage all expose those controls clearly in their documentation. If you are serious about big archives, use storage that lets you control versions, retention, and geography.

Versioning should be turned on before you trust the bucket with anything important. AWS says versioning preserves and lets you restore every version of stored objects. Google says noncurrent versions are retained when live objects are replaced or deleted. Azure’s version-level immutability setup automatically ties blob versioning to immutability support. That is not a side detail. Without versioning, deletion is often just deletion. With versioning, deletion becomes a recoverable event.
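
To see why versioning changes what deletion means, here is a toy sketch in plain Python. It is not a real cloud client — the class and method names are invented for illustration — but it mirrors the semantics the cloud docs describe: a “delete” only stacks a marker on top of the version history, so the older versions stay recoverable.

```python
class VersionedBucket:
    """Toy model of object-versioning semantics (not a real S3/GCS client)."""

    DELETE_MARKER = object()  # sentinel standing in for a cloud delete marker

    def __init__(self):
        self._versions = {}  # key -> list of (version_id, payload)
        self._counter = 0

    def put(self, key, data):
        """Store a new version; older versions are kept, never overwritten."""
        self._counter += 1
        self._versions.setdefault(key, []).append((self._counter, data))
        return self._counter

    def delete(self, key):
        """A 'delete' only adds a marker on top of the version stack."""
        return self.put(key, self.DELETE_MARKER)

    def get(self, key):
        """Return the live version, or raise if the key is absent/deleted."""
        stack = self._versions.get(key, [])
        if not stack or stack[-1][1] is self.DELETE_MARKER:
            raise KeyError(key)
        return stack[-1][1]

    def undelete(self, key):
        """Recover from a deletion by removing the delete marker."""
        stack = self._versions.get(key, [])
        if stack and stack[-1][1] is self.DELETE_MARKER:
            stack.pop()
```

Without the marker mechanism, `delete` would simply drop the data; with it, deletion becomes an event you can reverse, which is exactly the property the providers’ versioning features give you.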

Immutability or retention lock is the next line of defense. AWS Object Lock uses a write-once-read-many model to stop deletion or overwrite for a set period or indefinitely. Google’s Object Retention Lock can prevent the retention time from being reduced or removed when locked, and the object cannot be deleted or replaced before the retain-until time. Azure’s immutable storage does the same job in its own model. For media archives, that matters less for regulation than for sanity. A locked copy is the copy that survives panic, malware, bad scripts, and tired humans.

Geography matters too. If the fear is “what if one cloud region dies,” then the answer is not hand-waving. Choose redundancy deliberately. AWS offers cross-region replication. Google says dual-region buckets automatically fail over to the other region during a regional outage. Azure offers geo-redundant and geo-zone-redundant storage, while also warning that replication to the secondary region is asynchronous and can leave a recovery point gap. That last point is the kind of detail people miss until a bad day. Cloud redundancy is real, but the exact recovery behavior depends on the product you chose.

Lifecycle policy is the part that saves money without wrecking the archive. AWS’s docs warn that retaining all versions can raise costs if you do not set expiration rules thoughtfully. That does not mean “delete aggressively.” It means treat the bucket like an archive system with rules, not a junk drawer. Keep recent versions and deleted versions long enough to protect against mistakes. Move older cold material into cheaper archive classes when fast access no longer matters. The cloud bill gets ugly when the retention policy is either absent or mindless.
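
A lifecycle configuration for that approach might look roughly like this. The shape matches what S3’s lifecycle API accepts (for example via boto3’s `put_bucket_lifecycle_configuration`); the prefix, day counts, and storage class are placeholders to adjust to your own restore expectations.

```python
# A sketch of an S3 lifecycle rule set: tier cold objects down after
# 90 days, keep deleted/overwritten versions for 180 days as a safety
# net against mistakes, then let those noncurrent versions expire so
# the bill stays sane.
lifecycle_config = {
    "Rules": [
        {
            "ID": "tier-and-trim-archive",
            "Status": "Enabled",
            "Filter": {"Prefix": "archive/"},  # placeholder prefix
            "Transitions": [
                {"Days": 90, "StorageClass": "DEEP_ARCHIVE"}
            ],
            "NoncurrentVersionExpiration": {"NoncurrentDays": 180},
        }
    ]
}
```

The two numbers encode the policy trade-off in the paragraph above: the transition delay controls when you stop paying for fast access, and the noncurrent-version window controls how long a mistake stays recoverable.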

My strongest cloud recommendation for big media is this: keep the cloud as an off-site source of truth for backup, but do not rely on cloud as the only place your entire cold archive exists unless you also keep a second independent cold copy. That second cold copy can be tape, rotated offline disks, or a second cloud target with different credentials and a different failure path. The reason is not paranoia. It is discipline. A single vendor, single account, single bucket, and single policy stack is still one system. For irreplaceable media, one system is not enough.

Checksums, scrubs, and restore tests are where archives quietly live or die

Most ruined archives do not fail in dramatic Hollywood style. They decay in silence. A file copies badly and nobody notices. A disk starts returning corrupt blocks and nothing checks them. A backup job runs, but no one restores from it for two years. When the restore finally matters, the archive turns out to be more theoretical than real. That is why verification belongs in the design, not as an occasional guilty thought. The “0” in 3-2-1-1-0 is there for a reason: zero recovery errors.

The Library of Congress offers the right language here. It treats fixity as a bedrock part of digital preservation and describes monitoring hash values such as MD5 or SHA through manifests and inventory systems. That is not museum-only advice. It is exactly what media teams should be doing when cards are copied, footage is moved between arrays, or archives are written to cold storage. If you do not know whether the copied file matches the source, you are trusting luck.

For local storage, scrubs matter. TrueNAS explains that ZFS scrubs identify data integrity problems, detect silent corruption from transient hardware issues, and provide early disk failure alerts. The ZFS primer adds that regular scrubs read blocks, validate checksums, and correct corrupted blocks when redundancy exists. That is the kind of unglamorous maintenance that turns a storage box into an archive system. Large media archives are not just about capacity. They are about ongoing proof that the bits stayed the same.

Restore testing deserves the same respect. Pull a few projects at random. Restore the folder tree. Open the catalog. Relink footage. Read a few clips. Check some RAW files. Test one old job and one recent job. If you use tape, do a tape restore drill. If you use cloud, pull enough data to make sure the path, credentials, and timing are real. Synology’s snapshot replication material even mentions test failovers as a way to verify that recovery will work in disaster conditions. A backup system should be exercised like emergency gear, not admired like furniture.
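
A drill like that can be partially automated. The sketch below samples a few project folders, “restores” them to scratch space, and byte-compares every file; the function name is invented, and the `copytree` call is a stand-in for whatever your real restore step is (pulling from the NAS backup, the cloud bucket, or a tape catalog).

```python
import filecmp
import random
import shutil
import tempfile
from pathlib import Path


def restore_drill(backup_root: Path, sample_size: int = 2) -> list[str]:
    """Restore a random sample of project folders and byte-compare them.

    Returns the paths that failed comparison; an empty list means the
    sampled projects restored cleanly.
    """
    projects = [p for p in backup_root.iterdir() if p.is_dir()]
    mismatches = []
    with tempfile.TemporaryDirectory() as scratch:
        for proj in random.sample(projects, min(sample_size, len(projects))):
            restored = Path(scratch) / proj.name
            # Stand-in for the real restore path (cloud pull, tape read, ...).
            shutil.copytree(proj, restored)
            for src in proj.rglob("*"):
                if src.is_file():
                    dst = restored / src.relative_to(proj)
                    # shallow=False forces a full byte-by-byte comparison.
                    if not dst.is_file() or not filecmp.cmp(src, dst, shallow=False):
                        mismatches.append(str(src))
    return mismatches
```

Scheduling this monthly, with the restore step pointed at each backup tier in turn, turns “we have backups” from an assumption into a recurring measurement.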

This is also where proxy workflows help. Adobe notes that you can ingest original media for backup while creating proxy files for smoother editing. That split keeps production moving while preserving the originals with more care. The archive process should slow down just enough to verify the source masters properly. The edit process can stay fast because proxies are easy to regenerate. Treating originals and proxies as different classes of data removes a lot of pressure from the working system and keeps the archive cleaner.

The setups I would actually recommend by archive size

For a solo shooter or photographer with up to roughly 20 TB of serious media, the best value setup is usually simple. Use one fast working SSD or RAID for active jobs. Back it up to a separate local disk set or small NAS. Keep one off-site copy, either in object storage with versioning and retention lock or on rotated external disks stored elsewhere. If the work is truly irreplaceable, keep one of those copies offline between updates. You do not need enterprise gear to be safe. You need separation, verification, and routine.

For a small studio or production team sitting somewhere around 20 to 100 TB, I would move to a NAS or backup server as the center of gravity. Working storage can live on a faster shared system, but the backup side should have snapshots, scheduled replication, and off-site object storage. Synology’s immutable snapshots or a TrueNAS design with snapshots and replication fit well here. Cloud should hold the off-site copy with versioning turned on and retention controls enabled. This is the range where a serious NAS plus serious cloud backup often beats tape on simplicity. The archive is big enough to demand structure and still small enough that disk-and-cloud workflows are manageable.

For archives above 100 TB, especially where most finished work is seldom restored, I would look very hard at adding LTO. Not because tape is romantic, but because cloud retrieval time, cloud egress cost, and endless shelves of hard drives all become irritating at scale. Keep the active archive on disk. Keep an off-site cloud copy for the most important current or near-current material. Write completed jobs to tape as cold archive. Keep a catalog. Store tapes off-site. This hybrid model is often the point where the system stops feeling like a pile and starts feeling like an archive. The LTO material supports its use for high-capacity transfer, nearline work, and long-term retention; the exact threshold where you switch is a business judgment.

For high-end commercial teams, agencies, and production houses with multiple editors and many ongoing jobs, I would split the system into active, nearline, and cold tiers. Active jobs live on fast shared storage. Nearline backup sits on a second system with snapshots and restore convenience. Cold retention sits off-site in cloud object storage with immutability, and often also on tape for the deepest archive. The proxy workflow should be built into ingest from the start. The mistake at this scale is pretending one storage tier can serve edit speed, cheap retention, and disaster recovery all at once. It cannot.

There is one recommendation I would avoid unless the budget is extremely tight and the archive is tiny: keeping the only off-site copy in a normal sync folder with no versioning discipline, no immutable retention, and no second cold copy. That setup feels modern right up until the day it is not. Media archives are too hard to rebuild for that level of trust. The more irreplaceable the footage, the less you should rely on convenience alone.

The best recommendation after all the gear talk

If you want one answer, here it is. For most serious filmmakers and photographers, the best setup is a layered system with fast working storage, a separate local backup, off-site cloud object storage with versioning and immutability, and for bigger or longer-lived archives, one offline cold copy on tape or rotated disks. That is the safest balance of speed, restore convenience, and disaster resilience.

If the archive is still modest, do not overcomplicate it. Buy discipline before you buy prestige. Label things. Verify copies. Rotate off-site media. Run restores. Use snapshots. Turn on versioning before you need it. If the archive is already huge, accept that the answer is tiered storage, not one perfect box. Big archives become safe when each layer has a clear job and a different way to fail.

The fear behind the original question is a sensible one. Cloud can fail. Disks can fail. People can fail. Policies can fail. The only calm answer is redundancy with independence. Not duplicate clutter, but deliberate copies that do not die together. That is what turns backup from a hopeful habit into something you can trust when the worst day finally arrives.

FAQ

What is the safest backup setup for video footage and RAW photos?

The safest setup for most serious creators is 3-2-1-1-0: one working copy, one separate local backup, one off-site copy, one immutable or offline copy, and regular verification so restore errors stay at zero.

Is a NAS enough as a backup system?

No. A NAS is excellent as working storage or as one backup layer, but Synology states that RAID and SHR are not backup solutions by themselves. You still need a second independent copy, and ideally one off-site.

Can I trust cloud storage as my only archive?

Not if the media is irreplaceable. Cloud is excellent as an off-site layer, especially with versioning, retention lock, and geographic redundancy, but relying on a single cloud system still leaves you with one failure domain.

Should I use tape for large media archives?

Once the archive grows into the many tens or hundreds of terabytes and most older work is rarely restored, tape becomes a serious option. LTO is built for high-capacity transfer and long retention, and LTFS makes retrieval easier than older tape workflows.

Why are checksums and fixity checks so important?

Because copied files can be damaged without obvious signs. The Library of Congress treats fixity checking with hash values as a core preservation habit, and ZFS scrubs on TrueNAS are designed to detect integrity problems and silent corruption.

What should I keep in the cloud, originals or proxies?

For many teams, the cleanest approach is to protect the original media carefully while also generating proxies for faster editing and review. Adobe’s workflow guidance supports keeping original media for backup while using proxies to keep the edit side light.

What if a cloud region goes down?

That depends on the storage design. AWS supports cross-region replication, Google dual-region buckets can fail over during a regional outage, and Azure offers geo-redundant options, though Azure also warns that asynchronous replication can leave a recovery point gap.

Author:
Jan Bielik
CEO & Founder of Webiano Digital & Marketing Agency


This article is an original analysis supported by the sources cited below.

Data Backup Strategies: Why the 3-2-1 Backup Strategy Is the Best
Backblaze’s plain-language explanation of the classic 3-2-1 backup rule.

Protect
Veeam’s security guide explaining the expanded 3-2-1-1-0 model.

3-2-1 Rule
A second Veeam best-practice page that lays out the 3-2-1-1-0 rule and the role of immutable and secondary copies.

Locking objects with Object Lock
AWS documentation on WORM-style protection against deletion and overwrite.

Resilience in Amazon S3
AWS guidance on versioning, object lock, and archive storage classes for recovery and resilience.

Replicating objects within and across Regions
AWS documentation on same-region and cross-region replication for S3.

Backing up your Amazon S3 data
AWS guidance on S3 backups and the cost implications of keeping every version.

Object Versioning
Google Cloud documentation on preserving deleted and replaced object versions.

Object Retention Lock
Google Cloud’s documentation on per-object retention controls and locked retention periods.

Bucket locations
Google Cloud documentation covering regional, dual-region, and failover behavior.

Data redundancy
Microsoft Learn documentation on Azure redundancy models and regional failover behavior.

Configure immutability policies for blob versions
Azure guidance on version-level immutability and versioning for blob storage.

What is Synology Hybrid RAID (SHR)?
Synology’s knowledge-base article that explicitly notes RAID and SHR are not backup solutions.

What is an immutable snapshot? How do I use it?
Synology’s explanation of immutable snapshots and their WORM-style protection.

Snapshot Replication
Synology’s feature page for snapshots, replication, and test failover.

Creating Snapshots
TrueNAS documentation on ZFS snapshots as point-in-time read-only copies.

ZFS Primer
TrueNAS reference material explaining checksum validation and scrub behavior in ZFS.

Managing Scrub Tasks
TrueNAS guidance on scrub tasks for detecting corruption and early disk problems.

Data Integrity Management
Library of Congress guidance on fixity checking and checksum-based integrity control.

LTO Benefits: Why LTO Is a Good Choice
LTO consortium material on current tape capacity, throughput, and archive use cases.

Linear Tape File System (LTFS) Specifications
LTO’s explanation of LTFS for easier browsing and retrieval of tape-based archives.

Ingest and Proxy workflow
Adobe’s documentation on protecting original media while creating proxies for editing.