• 5 Posts
  • 20 Comments
Joined 1 year ago
Cake day: June 13th, 2023







  • I’m totally OK with this, though having 2 Pro licenses I may be biased.

    It’s no different from other vendors that offer a year of upgrades with a license and you need to pay afterwards.

    As long as it never moves to a “pay periodically or your entire license becomes inactive” (like Adobe), I have no issue with it.

    They need a gentle way to handle upgrades after the included “timeframe” that isn’t just “buy a new license” if you decide to skip updating for a while. If you stop updating for hardware or personal reasons for a few years, getting back up to date should still be competitive with buying a new license.

    UnRAID is absolutely worth it. Definitely the best computing investment I’ve made in the last 2 decades.


  • Nogami@lemmy.world to unRAID@reddthat.com: Unraid 6.12.6 Now Available (edit-2, 7 months ago)

    It’s not really a big deal for me; updating and rebooting takes all of 4 minutes on my server, so I prefer to update and make sure known issues are mitigated.

    If you’re running something earlier than 6.12.5 and using ZFS, there is a potential for data loss due to a bug that can occur on any ZFS filesystem (not just within unRAID). Patching to at least 6.12.5 mitigates that bug and 6.12.6 solves it, so delaying a couple of weeks could put data at risk. Not advised.



  • One other suggestion is to start in maintenance mode and check the filesystem for errors. I had a strange one some time ago where a drive wasn’t showing errors but was showing space used; when I scanned it, I got my free space back.

    If there is a share there that won’t delete, it’s not empty. You can use the file manager to view everything inside and erase all of it, or alternately do it from a shell prompt (carefully).
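    If you go the shell route, a rough sketch of what that might look like (ShareName is a made-up placeholder; the paths assume unRAID’s usual /mnt layout):

```shell
# See which disks actually hold data for the share (ShareName is hypothetical)
du -sh /mnt/disk*/ShareName /mnt/cache/ShareName 2>/dev/null

# List everything inside, including hidden dotfiles that keep a share "non-empty"
ls -la /mnt/user/ShareName

# Only once you're certain nothing in there matters, remove the contents.
# Left commented out on purpose -- this is irreversible.
# rm -rf /mnt/user/ShareName
```

    Leaving the destructive command commented until you’ve reviewed the listing is the “carefully” part.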




  • Yup, and make sure you keep a backup on a separate machine. I used to just keep flash backups on the array with the rest of my stuff. You can still get at them without unRAID mounting the main filesystem, but it’s another level of annoyance you don’t need when your system is down.
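    One simple way to get the flash backup onto another machine, assuming you have a box you can SSH to (the hostname and destination path here are made up):

```shell
# unRAID mounts the flash drive at /boot; mirror it to another machine.
# "backupbox" and the destination directory are example placeholders.
rsync -a /boot/ backupbox:/backups/unraid-flash/
```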

    I don’t really mind the system contacting unraid mothership. It’s a way to prevent pirates and sneaky types from ruining it for all of us paying users. Maybe when they stop doing that, license management gets easier.




  • May have been people with very esoteric setups. It’s easy enough to test, and nothing is going to break; just back up the USB key before upgrading. If it doesn’t work, you just restore the backup. No need for panic.

    FWIW both of my 6.12 systems upgraded flawlessly. I run a handful of common dockers on my main system and very few on my backup machine.

    I’m toying with upgrading mine to the latest release remotely. I’m on the other side of the world from my servers right now, but it’s not a significant risk imho; Supermicro IPMI makes it pretty risk-free.





  • Nogami@lemmy.world to unRAID@reddthat.com: ZFS pools (1 year ago)

    If he had 6 x NVMe drives and his controller and motherboard supported the full bandwidth, it should be capable of 300MB/sec x 6 (1.8GB/sec), parity calculations notwithstanding, because the data is striped across all drives in the pool: it reads and writes to/from all drives in the pool at the same time.

    For example, if a big 1TB file is striped across 6 physical devices in ZFS, the system will read pieces of that file from all NVMe drives in the pool at the same time, not from one drive at a time.

    Transfer rate is determined by the physical devices (and the hardware attached), not a pool.
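    As a sketch, a plain striped pool like that could be built as follows (the device names and pool name are made up, and note a pure stripe has no redundancy at all):

```shell
# Stripe six NVMe devices into one pool (example device names).
# No parity/redundancy: losing any one drive loses the whole pool.
zpool create fastpool /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 \
                      /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1

# Watch per-device activity to see reads/writes hitting all drives at once
zpool iostat -v fastpool 1
```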

    Here’s a little video about it. It isn’t using ZFS, but the idea of increasing transfer rates by raiding your NVMe drives together is the same. He’s getting speeds of over 21GB/sec for reads and almost 17GB/sec for writes (though his example doesn’t have parity protection; adding parity would cost some of that speed).

    https://www.youtube.com/watch?v=DXT1IXFIFAI


  • Nogami@lemmy.world to unRAID@reddthat.com: ZFS pools (1 year ago)

    I gotta disagree here. If you’re using 6 NVMe drives, the ZFS data is striped across all 6, so you’ll get more performance, assuming the controller they’re attached to can take advantage of it and you have a fast enough CPU to manage the parity calculations.

    This is far more evident on spinning disks than NVMEs, but there should still be some speedups. Doing it for the protection and having a single large storage space would also be significant benefits in my book.

    ZFS also has far superior data caching, so recall of commonly used data will be from RAM, rather than the drives.
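    If you want the striping plus parity protection in one pool, a raidz1 layout over the same six drives might look like this (device and pool names are placeholders):

```shell
# One drive's worth of parity across six NVMe devices:
# survives a single drive failure, still stripes reads/writes
zpool create tank raidz1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 \
                         /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1

# ARC (RAM cache) statistics, to see how many reads are served from memory
# (requires the arcstat utility that ships with the ZFS tools)
arcstat 1 5
```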


  • The other interesting thing with snapshots is that you have a few different ways of utilizing them.

    Reverting changes

    The simplest is rolling all changes back, so if your filesystem got totally hosed (say, by ransomware), you can revert the changes back to where they were undamaged and it’s like it never happened at all (hopefully after getting rid of the ransomware). This means that all changes since the snapshot you revert to are discarded like they never existed.
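    Rolling back is a single command; be aware it permanently discards everything newer than the snapshot (the dataset and snapshot names below are examples):

```shell
# List available snapshots for the dataset
zfs list -t snapshot zfsarchive/Documents

# Revert the dataset to a known-good snapshot, discarding newer changes.
# -r also destroys any snapshots taken after the target one.
zfs rollback -r zfsarchive/Documents@autosnap_2023-06-13_23:59:01_daily
```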

    Accessing Snapshots Directly

    Say it’s not ransomware, but an important file was deleted, but you only discovered days or months afterwards, and you don’t want to undo everything you’ve done past that file deletion and lose important new data, you can access the snapshot directly in read-only mode and recover your files.

    To do so, you enter your filesystem and use a hidden directory, so in my case, my dataset is “/mnt/zfsarchive/Documents”.

    By adding “.zfs/snapshot” to the end of the path, I can access the hidden snapshot directory, browse snapshots in read-only mode, and recover my data (you can make the hidden directory visible with a configuration option if necessary, but it’s probably best to leave it hidden most of the time).

    “cd /mnt/zfsarchive/Documents/.zfs/snapshot/autosnap_2023-06-13_23:59:01_daily/”
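    If the hidden directory is awkward, the snapdir property makes it show up in normal listings, and recovering a file is then an ordinary cp (the filename here is a made-up example):

```shell
# Make the .zfs directory visible in listings (it's accessible either way)
zfs set snapdir=visible zfsarchive/Documents

# Copy a single file out of a snapshot back into the live dataset
cp /mnt/zfsarchive/Documents/.zfs/snapshot/autosnap_2023-06-13_23:59:01_daily/taxes.pdf \
   /mnt/zfsarchive/Documents/
```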

    Snapshot Clones

    You can also take a snapshot and make a fully read/write duplicate of it to experiment with.

    Say you are using software to automatically rename and reorganize thousands of files, but you don’t want to do it “live” in case something goes bad.

    You can make a clone of a snapshot that you can “modify” however you want for testing purposes. Then if everything goes well, you can “promote” the clone to be the new active filesystem at no risk, or just delete it with no consequence if it goes badly.
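    A sketch of that clone workflow, using made-up dataset and snapshot names:

```shell
# Snapshot the live dataset, then clone it into a writable copy
zfs snapshot zfsarchive/Documents@before-renaming
zfs clone zfsarchive/Documents@before-renaming zfsarchive/Documents-test

# ...run the renaming/reorganizing tool against the clone...

# If it went well, promote the clone so it no longer depends on the original
zfs promote zfsarchive/Documents-test

# If it went badly, just throw the clone away (commented for safety)
# zfs destroy zfsarchive/Documents-test
```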

    Sending snapshots to another (backup) filesystem

    This is what I do for backups using the sanoid plugin. When the system creates a snapshot, it records the difference in the filesystem between points in time, then it can send that difference to another filesystem. For example, if I have 500,000 family photos and I decide to delete one bad photo that was out of focus, a traditional backup would need to compare the 500,000 photos on the source and backup destination to find what changed, then delete the file on the destination.

    With a ZFS snapshot, it sends a tiny chunk of data that just says “file xyz123.jpg was deleted”, and that’s all it takes to have the backup replicated to the destination. By the same token, if I didn’t delete the file but just edited it to remove the photobomber in the background, the snapshot would just contain the changed blocks of the image, maybe a few hundred K, and send nearly instantly.
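    Under the hood this replication is an incremental zfs send piped into zfs receive; sanoid’s companion tool syncoid wraps it for you, but done by hand it looks roughly like this (pool, snapshot, and host names are examples):

```shell
# Send only the blocks that changed between two snapshots to a backup host
zfs send -i zfsarchive/Documents@monday zfsarchive/Documents@tuesday | \
    ssh backupbox zfs receive backup/Documents
```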

    I’m sure there are many more options, but these are the first ones I learned.


  • Before I started experimenting, I had everything well backed-up in a couple of places, but even with that, I never thought my data was in any danger when experimenting. It was all very safe, though I was learning some new lingo.

    I also kept a copy of my ZFS data on my main array (I have the happy fortune to have room to spare right now), so nothing was ever really at risk.