• 1 Post
• 24 Comments
• Joined 1 year ago
• Cake day: October 24th, 2023





  • As long as there are no reallocated sectors the drive may still be fine (your current data certainly has some damage, so you'd better have a backup here). Basically an “uncorrectable” is a sector that can’t be read anymore for various reasons… aka the data is currently bad. It does not automatically mean the sector/disk structure itself is broken.

    The proper course of action in this case is to fully wipe the drive with a proper format (zero fill). Don’t do a quick format! Only afterwards will you know for sure whether the drive is bust or not. If there are permanently damaged sectors, the “pending” ones will convert to “reallocated” sectors. If it was just some bad data, the “pending” ones will simply disappear. If your disk keeps collecting reallocated sectors then it’s probably time to look for a new disk (albeit a low count of reallocated sectors that stays stable isn’t the end of the world either).
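
    A minimal sketch of that workflow, assuming a Linux box where the drive shows up as /dev/sdX (the device name is a placeholder; on macOS, diskutil zeroDisk does the same job). Double-check the device before running anything, the zero fill destroys everything on it:

    DISK=/dev/sdX                                           # placeholder: verify with lsblk first!
    sudo dd if=/dev/zero of="$DISK" bs=1M status=progress   # full zero fill, not a quick format
    sudo smartctl -t long "$DISK"                           # long self-test, runs inside the drive
    # ...wait for the duration smartctl reports, then re-check the counters:
    sudo smartctl -A "$DISK" | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'

    If Current_Pending_Sector falls back to 0 and Reallocated_Sector_Ct stays at 0, the sectors only held bad data; if the pending ones convert into reallocated ones, the disk has real surface damage.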


  • binaryriot to Data Hoarder@selfhosted.forum · 4TB disk advice · 1 year ago

    Unlikely. There used to be 8TB heliums at some point, but then WD replaced that line with air-filled ones, a change that I believe has already reached the 10TB line too. Your safe bet for helium is 14TB and up, I would say.

    When you buy that 6TB drive make sure to get the WD60EZAX (aka CMR) and not the WD60EZAZ (aka SMR). It’s a one-letter difference… if you buy from a random dealer they may send you the SMR one.
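
    If you want to double-check what actually arrived, smartctl can print the model string (a quick sketch; /dev/sdX is a placeholder, and for a USB enclosure you may need to add -d sat so smartctl can talk to the drive):

    smartctl -i /dev/sdX | grep -i 'Device Model'   # look for EZAX (CMR), not EZAZ (SMR)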




  • binaryriot to Data Hoarder@selfhosted.forum · 4TB disk advice · 1 year ago

    If you have backups it doesn’t matter much what you buy. It’s not like there’s much choice anyway. :)

    Do you want to buy a small 2.5" one? Or a larger 3.5" one?

    All the small ones are SMR these days, so that’s something you should consider. Depending on the exact use case this may become very slow, and in case of an error, like a sudden loss of power which easily happens with externals, they also corrupt more easily (in other places, not necessarily just in the data you just wrote to them).

    If you go for a bigger 3.5" drive, go for a larger size and CMR. 4 TB is laughable and ultimately more expensive (EUR/$ per TB). I would recommend looking for 12TB and up, helium-filled. The 18TB disks seem to be the current price sweet spot.




  • It’s a good idea to keep your important data where privacy matters separate from all the other stuff you collect. E.g. keep your documents and private photos and their backups on separate (smaller, cheaper to replace) devices. In the worst case you can sacrifice them.

    Keep the less important data that needs all the space (erm, those Linux ISOs) on the larger, expensive disks. If you have to send those in because a replacement is required, it probably won’t matter too much if someone snoops over them (unless it’s too much of the dirty Linux ISOs). :)

    Aka: have a clear device-level separation between any sort of media collection and your critical data.



  • With a single disk probably impossible.

    But if you already have the same HDD and know it has worked reliably for some time, then you could compare the two (run them in an open USB case or something while doing so). It’s also a good idea to gently touch them, so you can ‘feel’ their respective vibration patterns (and a good idea to wear an anti-static arm band thingy).

    If the new disk, after some “warming-up time” (e.g. after it has run for about a week; use that time to do the zeroing out and the SMART long test!), is much louder, makes unusual clunky noises, scratching noises, high-frequency beeps, or other strange sounds noticeably worse than your old disk, then it may be worth exchanging it as a precaution.

    Basic mechanical tests are done during the SMART tests (e.g. if you run a short test you’ll hear it).
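
    A quick sketch of that (again /dev/sdX is a placeholder; on macOS smartctl also accepts a plain diskX identifier):

    smartctl -t short /dev/sdX      # takes a couple of minutes; listen to the drive while it runs
    smartctl -l selftest /dev/sdX   # afterwards: show the self-test log and the result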


  • 18 TB disks still seem to be the sweet spot price-wise. So I would go for those, as many as you can afford. The rest depends on what you actually want to do with the NAS. IMHO you don’t need insane amounts of cache if the NAS just has to serve some media files. The HDDs by themselves will be perfectly fine. Just one SSD for the system installation should be all you need, IMHO.



  • If you’re out of other options just do a simple zero format (e.g. diskutil zeroDisk diskX on macOS) and a long SMART test afterwards (e.g. smartctl -t long diskX). That’s what I do with my new disks and it has served me well so far. For large-capacity disks it’s a heavy 2-day process (1 day formatting, 1 day testing), but it gives me peace of mind afterwards.


    Extra hint: during any SMART long test make sure to disable disk sleep in your OS for the duration, or the test will abort (e.g. caffeinate -m on macOS). Also avoid crappy external enclosures that put the disks to sleep by themselves (or you may want to run a script that regularly reads a block from the disk to keep it awake).

    Here’s my macOS script to handle the job (I needed it recently because of a temporary crappy USB enclosure). It reads a block every 2 minutes via raw I/O, without any caching involved (“/dev/rdisk”):

    #!/bin/bash
    # $Id: keepdiskawake 64 2023-10-29 01:55:56Z tokai $
    # Keeps a disk awake by reading one block from its raw device every 2 minutes.
    
    if [ "$#" -ne 1 ]; then
    	echo "keepdiskawake: exactly one argument required (disk identifier, volume name, or volume path)." 1>&2
    	exit 1
    fi
    
    MY_DISKNAME="${1}"
    MY_DISKID=$(diskutil info "${MY_DISKNAME}" | awk '/Device Identifier:/ {print $3}')
    
    if [[ -n "${MY_DISKID}" ]]; then
    	printf '\033[35mPoking disk \033[1m"%s"\033[22m with identifier \033[1m"%s"\033[22m…\033[0m\n' "${MY_DISKNAME}" "${MY_DISKID}"
    	# Use the raw device node so the read bypasses the buffer cache.
    	MY_RDISKID="/dev/r${MY_DISKID}"
    	echo "CTRL-C to quit"
    	while true; do
    		echo -n .
    		# A single block read is enough to reset the enclosure's idle timer.
    		dd if="${MY_RDISKID}" of="/dev/null" count=1 2>/dev/null
    		sleep 120
    	done
    else
    	echo "keepdiskawake: Couldn't determine disk identifier for \"${1}\"." 1>&2
    	exit 1
    fi
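
    Usage is just the script with the disk as its only argument, e.g. (assuming the script is executable and the external shows up as disk4; the identifier is a placeholder):

    ./keepdiskawake disk4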
    



  • If you improperly eject a disk while the filesystem is in a flux state, it doesn’t matter which disk you use; you’re very likely to encounter that issue again. More so with some filesystems than with others. APFS is for some reason worse in this regard, so best stick with the traditional “HFS+ w/ Journaling” on a Mac.

    If you transfer large collections of data you could, and probably should, use rsync instead of the Finder, preferably in a screen or tmux session. That way a crash of any of the UI components will not mess up the copy process (even if Terminal.app goes down you’ll be able to reconnect to the screen/tmux session with the copy process still doing its thing). Also make sure your external disk has proper power all the time during the process (preferably do not attach another device during that time).
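
    A minimal sketch of that, with hypothetical source and destination paths you’d replace with your own:

    tmux new -s bigcopy                                        # start a detachable session
    rsync -avh --progress /Volumes/OldDisk/ /Volumes/NewDisk/  # archive mode, human-readable progress
    # if Terminal.app dies, reattach later with: tmux attach -t bigcopy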