So I run a video production company. We have 300TB of archived projects (and growing daily).

Many years ago, our old solution for archiving was simply to dump old projects off onto an external drive, duplicate that, and have one drive at the office, one offsite elsewhere. This was ok, but not ideal. Relatively expensive per TB, and just a shit ton of physical drives.

A few years ago, we had an unlimited Google Drive and 1000/1000 fibre internet. So we moved to a system where we would drop a project onto an external drive, keep that offsite, and have a duplicate of it uploaded to Google Drive. This worked ok until we reached a hidden file number limit on Google Drive. Then they removed the unlimited sizing of Google Drive accounts completely. So that was a dead end.

So then we moved that system to Dropbox a couple of years ago, as they were offering an unlimited account. This was the perfect situation. Dropbox was feature rich, fast, integrated beautifully into Finder/Explorer, and just a great solution all round. It meant it was easy to give clients access to old data directly if they needed, etc. Anyway, as you all know, that gravy train has come to an end recently, and we now have 12 months grace with our storage on there before we have to have this sorted onto another system.

Our options seem to be:

  • Go back to our old system of duplicated external drives, with one living offsite. We’d need ~$7500AUD worth of new drives to duplicate what we currently have.
  • Buy a couple of LTO-9 tape drives (2 offices in different cities) and keep one copy on an external drive and one copy on a tape archive. This would be ~$20000AUD of hardware upfront + media costs of ~$2000AUD (assuming we’d get maybe 30TB per tape on the 18TB raw LTO 9 tapes). So more expensive upfront but would maybe pay off eventually?
  • Build a linustechtips style beast of a NAS. Raw drive cost would be similar to the external drives, but would have the advantage of being accessible remotely. Would then need to spend $5000-10000AUD on the actual hardware on top of the drives. Also have the problem of ever growing storage needs. This solution we could potentially not duplicate the data to external drives though and live with RAID as only form of redundancy…
  • Another cloud storage service? Anything fast and decent enough that comes at a reasonable cost?
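To put those options side by side, here's a quick back-of-envelope cost-per-TB sketch in Python. All figures are AUD taken straight from the bullets above; the NAS hardware midpoint is my assumption, and none of this includes ongoing costs like power or drive replacement.

```python
# Rough cost-per-TB comparison for the archiving options above.
# Figures are AUD from the post; the NAS midpoint is an assumption.

ARCHIVE_TB = 300  # current archive size from the post

options = {
    # duplicated external drives: ~$7500 for new drives to mirror 300TB
    "external drives (mirror)": 7500,
    # LTO-9: ~$20000 hardware + ~$2000 media (post's estimate)
    "LTO-9 + external copy": 20000 + 2000,
    # NAS: drive cost similar to externals (~$7500) + $5000-10000 hardware,
    # taking the midpoint of the hardware range
    "DIY NAS (hw midpoint)": 7500 + 7500,
}

for name, total in options.items():
    print(f"{name}: ${total} total, ~${total / ARCHIVE_TB:.0f}/TB")
```

The per-TB numbers only tell part of the story, of course: tape media gets much cheaper per TB as the archive grows past the upfront hardware cost, while drive-based options scale roughly linearly.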

Any advice here would be appreciated!

  • RiftbreakerB · 9 months ago

    Please keep us posted on what you decide. I am facing almost the exact same problem.

  • bee_ryanB · 9 months ago

    I like doing this math.

    A DS1821+ with (2) DX517 expansion bays would cost 4.1K AUD presuming 10% tax and would be 307 TB presuming (18) 22TB drives with a Btrfs file system running SHR-2 (allows for 2 drive failures).

    (18) 22TB drives @ $22/TB AUD = $9.5K incl. the 10% tax

    So an all in cost for 307TB is 13.6K AUD using that equipment. 27.2K AUD to have a mirrored backup, but it sounds like you’re ready for another 300+ TB right now, so 54.4K AUD to have 1:1 backups and 307TB of runway.

    If AWS Glacier is what you’re comparing to, then you make that up in 6 months.

    Rack mount would be more convenient, as you can have 1PB volumes and a little less cumbersome and tidy setup. The 1821+ with expansion bays maxes out at 108TB per volume, so you'd have to deal with 6 different volumes, but maybe that's not a big deal if your filing system is by year/month. Getting into rack mount with Synology, for example, would basically double your infrastructure cost. Or you bite the big bullet now on scalability and use a 60-bay rack mount @ 29.9K AUD for just one unit, but it's still roughly the same cost per drive bay as the 16-bay.
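A quick sanity check on the math in that comment, as a Python sketch. Bay counts and prices are from the comment; the gap between the computed 352 TB usable and the quoted 307 TB is presumably the TB-to-TiB conversion plus filesystem overhead, which I haven't modelled.

```python
# Sanity check on the Synology build above. Assumes one drive per bay,
# SHR-2 reserving 2 drives' worth of capacity, and the prices quoted
# in the comment (AUD, 10% tax).

BAYS = 8 + 2 * 5          # DS1821+ (8 bays) + two DX517 units (5 bays each)
DRIVE_TB = 22
PRICE_PER_TB = 22         # AUD, pre-tax
TAX = 1.10

raw_tb = BAYS * DRIVE_TB              # total raw capacity
usable_tb = (BAYS - 2) * DRIVE_TB     # SHR-2 loses 2 drives to parity
drive_cost = raw_tb * PRICE_PER_TB * TAX

print(f"{BAYS} bays, {raw_tb} TB raw, ~{usable_tb} TB usable (SHR-2)")
print(f"drive cost incl. tax: ~${drive_cost:,.0f} AUD")
```

This lands close to the comment's $9.5K drive figure, and confirms the overall shape of the costing.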

  • campster123OPB · 9 months ago

    I just want to say a massive thank you to everyone contributing advice and thoughts here. There’s a lot to get through and I’m taking it all in.

    To those saying we should be charging for this, we hear you, you’re not the first to tell us. We’re looking into implementing that going forward and need to assess how we’ll tackle that for older clients.

    I feel like this is a good point to assess our whole data infrastructure (live edits and archiving) and we’ll keep you all up to date once we decide on a direction. In the meantime keep the thoughts rolling in!

  • physx_rtB · 9 months ago

    You need to think about how often you need to access the data. If it’s once or twice a year, then the added overhead of having to find and load a tape wouldn’t add up that quickly and IMO should be acceptable.

    However, for projects you currently work on, you’d want hard drives and/or SSDs, preferably on a network, I suppose. Unless all your in-flight footage resides on the computers you edit it on (in which case I hope they have redundant storage).

    Also, if any of your clients needed some archived data, would it be feasible to come back to the tapes, read, upload and share them? If you had a NAS and a fast enough internet connection, you may be able to host a site yourself, thus no need for reading the tape and uploading to a cloud.

    Also, if it’s video footage, then you shouldn’t really count on LTO’s compression ability. It’s not particularly good for pictures and videos.
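That point about compression is easy to demonstrate. LTO's hardware compression (SLDC) is a different algorithm, but the principle is the same for any general-purpose compressor: already-encoded video is close to random at the byte level, so there's almost nothing left to squeeze. Here's a small Python sketch using zlib as a stand-in, with random bytes standing in for H.264/ProRes footage:

```python
# Why encoded video doesn't benefit from tape compression:
# high-entropy data (like compressed video) barely shrinks,
# while repetitive data (like logs or sidecar files) shrinks a lot.
# zlib here is a stand-in for LTO's SLDC hardware compression.

import os
import zlib

video_like = os.urandom(1_000_000)              # high entropy, like encoded video
text_like = b"timecode 00:00:00:00\n" * 50_000  # highly repetitive

ratio_video = len(zlib.compress(video_like)) / len(video_like)
ratio_text = len(zlib.compress(text_like)) / len(text_like)

print(f"video-like data compresses to {ratio_video:.2%} of original size")
print(f"text-like data compresses to {ratio_text:.2%} of original size")
```

In practice this means you should budget tape capacity at the native 18TB per LTO-9 cartridge, not the "45TB compressed" headline figure.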

  • huskypenguin@sh.itjust.works · 9 months ago

    I’m in the same biz. I use tape. Specifically a Mac mini + Canister, from the guys that make Hedge. I then index each tape with NeoFinder, which makes it easy to find and pull projects. The idea was to make a system simple enough that it wasn’t one person’s full-time job.