What is Unraid and how to build an Unraid media server

Manni
Posts: 593
Joined: Wed May 22, 2019 5:27 am

Re: What is Unraid and how to build an Unraid media server

Post by Manni » Fri Mar 25, 2022 4:40 pm

Thanks, it never occurred to me that this didn't mean staggered. Pretty much a pointless option as it is, then.

Pauven
Posts: 2777
Joined: Tue Dec 26, 2017 10:28 pm
Location: Atlanta, GA, USA

Re: What is Unraid and how to build an Unraid media server

Post by Pauven » Fri Mar 25, 2022 4:49 pm

You're not the only one that thinks it's pointless:

[attachment: image.png — screenshot, 126.67 KiB]

It obviously never got removed. Kind of interesting to read some of the use cases people shared that justified its continued existence:
President, Chameleon Consulting LLC
Author, Chameleon MediaCenter

Manni
Posts: 593
Joined: Wed May 22, 2019 5:27 am

Re: What is Unraid and how to build an Unraid media server

Post by Manni » Fri Mar 25, 2022 5:41 pm

Quick question: I'm about to add 6 drives to my 12-disk array (10 data, 2 parity) before copying a whole bunch of data.

My plan is to:

- Disable the parity drives (set them both to none; I have all the existing data on the array backed up to another NAS)
- Add the 6 additional drives to the array; my understanding is that I can add them all at the same time since there is no parity at that stage.
- Move the data from the first drives using the unbalance plugin, as I've upgraded them from 4TB to 6TB; I want all the existing folders on the same disks before adding data to the then-empty disks. The new drives (and the parity drives) are all 6TB.
- Copy all the new data (faster without parity)
- Once the data is moved/copied, enable the parity drives. I wonder if I should format them first, to make sure they are empty and not considered valid parity drives. [EDIT: In fact I'm going to replace them with two Red Pro drives, to make sure that the parity drives are faster than any data drive.]

What do you think?
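As a sanity check before the unbalance step, something like this minimal Python sketch could confirm that the consolidated folders will fit on the kept disks (the /mnt/diskN paths are Unraid's standard per-disk mounts; the helper name and disk count are made up for illustration):

[code]
# Hypothetical helper: report per-disk usage before consolidating with unbalance.
# Assumes Unraid's usual /mnt/diskN mount points; adjust the range to your array.
import shutil

def report_array_usage(disk_count=12):
    total_used = 0
    for n in range(1, disk_count + 1):
        mount = f"/mnt/disk{n}"
        try:
            usage = shutil.disk_usage(mount)
        except FileNotFoundError:
            continue  # disk slot not present/mounted
        used_tb = usage.used / 1e12
        free_tb = usage.free / 1e12
        total_used += usage.used
        print(f"{mount}: {used_tb:.2f} TB used, {free_tb:.2f} TB free")
    print(f"Array total used: {total_used / 1e12:.2f} TB")

if __name__ == "__main__":
    report_array_usage()
[/code]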

Pauven
Posts: 2777
Joined: Tue Dec 26, 2017 10:28 pm
Location: Atlanta, GA, USA

Re: What is Unraid and how to build an Unraid media server

Post by Pauven » Fri Mar 25, 2022 8:46 pm

You definitely want your parity drives to be the biggest/fastest in the system, but there's no need for them to be any bigger/faster than your biggest/fastest data drive, as that won't gain anything. The goal is that parity shouldn't be a size limit or a speed bottleneck, but going bigger/faster than your data drives doesn't buy you anything. Think of it like speed-rated tires on a car: the tires will never go faster than the car as a whole, but if one tire is only H-rated, you won't be able to reach the max speed of your other Z-rated tires. It doesn't matter which wheels are driven; they all turn at the same speed, and no faster than the slowest allows. Not a perfect analogy, but it helps illustrate the concept.
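If it helps to see it concretely: Unraid's first parity drive stores a plain bitwise XOR of the corresponding blocks on every data disk, so a full-stripe operation can only move as fast as the slowest drive involved. A minimal Python sketch of both ideas (the drive names and speeds are made-up numbers; the real parity calculation runs at the block level in the driver, not in Python):

[code]
# Minimal sketch: the first parity drive holds the XOR of the same block on every
# data disk, and a full-stripe operation is paced by the slowest drive involved.
from functools import reduce

def parity_block(data_blocks):
    # XOR the corresponding bytes of each data disk's block
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*data_blocks))

# Hypothetical sustained speeds in MB/s (made-up numbers for illustration)
drive_speeds = {"data1": 180, "data2": 175, "parity": 250}
stripe_speed = min(drive_speeds.values())
print(f"Effective full-stripe speed: {stripe_speed} MB/s")  # limited by the slowest disk

blocks = [b"\x0f\x0f", b"\xf0\xf0", b"\xff\x00"]
print(parity_block(blocks).hex())  # 00ff
[/code]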

Your plan sounds fine. As long as your data is backed up externally, I see no major risk, and this way should be twice as fast.

But hopefully you've already validated (pre-cleared) the 6 new drives. It won't make the process any faster since you won't have parity, but at least you know they're healthy drives.

I don't believe there's any concern about formatting parity drives before re-using them. Unraid is smart; it wouldn't let you reuse the old data if you tried, and sometimes it's frustratingly conservative. When re-enabling parity, every last bit will be re-calculated and written to both parity drives, no bit left undisturbed. Formatting wouldn't change that or make it any better.

But if you do go with new parity drives, be sure to exercise them first to make sure they're healthy. Always pre-clear, at least once, every drive that has never gone through a pre-clear, no matter what role you assign it. And if you pull any drives from duty and want to return them to an emergency/future-use stack, go ahead and pre-clear them so you know they're still healthy and ready at a moment's notice.
President, Chameleon Consulting LLC
Author, Chameleon MediaCenter

Manni
Posts: 593
Joined: Wed May 22, 2019 5:27 am

Re: What is Unraid and how to build an Unraid media server

Post by Manni » Sat Mar 26, 2022 5:36 am

Thanks for the sanity check :)

Data is definitely safe. When one of my Qnap units died, I first backed up all the data from the still-working one to my Unraid A server, then I swapped the drives to back up the data drives from the failed Qnap to my B server, but I kept the drives from the working Qnap untouched. Now that the data from the failed Qnap is backed up to Unraid B, I've decommissioned its disks (six of them are the new disks I'm about to add to server A). I put the old disks from the non-failed Qnap back and did a sync to get the new data from server A onto it, which only took a few hours, so I now have a full backup of all the data in server A while I disable parity for performance as I move the data around / copy more data from the Synology DS2411. The Qnap is now the backup for server A and is switched off for safety.

Once I re-enable the parity drives on server A, I'll decommission the backup disks in the Qnap to use them in server B and move on with the next steps, until I can decommission and use the disks in the Synology server to restore data from individual disks... When everything is done, I'll install older disks in the Qnap and the Synology to use them as backup servers only, which is how I'm already using my old Thecus N5200.

Parity and data drives are the same size on both Unraid servers; as I said, I'm only using WD 6TB drives at this stage. I don't want larger parity unless/until it's needed, as it slows down the parity/read check. A parity check currently takes me around ten hours (replacing a drive with a parity sync is around 13-14 hours), and that won't change as I add disks, since there is no bottleneck from the controllers on either server, even if all 24 bays were used. So if I were to use 12-20TB parity drives, it would double or triple that time, and I really don't want that, because it would mean doing parity checks in stages to make sure there is no parity check running when I actually need the array, and that's not an option when replacing a disk.
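The ten hours is essentially just parity capacity divided by average throughput, since a check has to read every bit of the parity drive. A back-of-envelope Python sketch (the ~170 MB/s average is an assumed round number, not a measurement from my servers):

[code]
# Back-of-envelope: parity check duration scales with parity size / average read speed.
def check_hours(parity_tb, avg_mb_s=170.0):
    bytes_total = parity_tb * 1e12
    seconds = bytes_total / (avg_mb_s * 1e6)
    return seconds / 3600

for size in (6, 12, 20):
    print(f"{size} TB parity: ~{check_hours(size):.1f} h")
# 6 TB -> ~9.8 h, 12 TB -> ~19.6 h, 20 TB -> ~32.7 h
[/code]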

So I'd rather add 6TB drives while I have some available as spares than increase the size of the parity drives for just one drive. For performance reasons, I'll use WD Red Pro 6TB for parity on both servers, to make sure the parity drives themselves are never the bottleneck, even though server A has only WD Red 6TB data drives. The Pros are really for server B, as I don't plan to replace those any time soon, unlike server A, where I plan to replace them at some point with bigger drives, mainly for energy-saving reasons, but only as I run out of 6TB spares. Server B will go up to 24 drives, while I'd like to keep the main server smaller, around 12 drives, at least until I start replacing the drives with larger ones. When I do, I plan to use the unbalance plugin to move the data to the new drives and reduce the number of drives in server A, then add one large drive at a time.

All my drives have either been pre-cleared or come from a working NAS, so they were pre-cleared at some point and are still working fine, hence I consider them pre-cleared. I've always pre-cleared drives before putting them in an active array (usually using WD Diag on my PC), and I always pre-clear them when I take them out for future use, or when I buy new drives, to check for infant mortality. This has been my routine with RAID since way before Unraid even existed, so I'm used to it. I also usually buy new disks in batches of two at most, with a few weeks between batches or from different suppliers, to make sure my drives don't all come from the same manufacturing batch and aren't all put in service at the same time, which would put them at risk of all failing at the same time (which can happen). Good old RAID safety practice...

This whole data move is already taking me a month, so given that I have backups for the most vulnerable stages, thanks to the way I've designed the 30-step transfer process, I'm not doing additional pre-clears, or it would take me two months. So far, so good...

Manni
Posts: 593
Joined: Wed May 22, 2019 5:27 am

Re: What is Unraid and how to build an Unraid media server

Post by Manni » Sat Mar 26, 2022 7:19 am

I solved the 24W on stand-by mystery :)

Nothing to do with the MB or peripherals... It's the stupid Corsair SF750 that's drawing 24W even with nothing connected to it (all the modular cables taken out). I've replaced it with the Seasonic 700, and I now get 10-15W on standby/shutdown, as before (the Seasonic was the PSU I used when I produced my first power-use data for the A server, when it was still in my tower case).

So final numbers for Server A (3770K, MB P8Z77-V LX, 16GB RAM, iGPU):

Off/S3 Sleep (with WOL that works even from shutdown state): 10-15W
During boot/power-up: 100-120W (with a spike up to 210W when all the drives spin up; only 12 currently)
Idle after boot (all drives active, no activity except maybe the VM starting): 115W-120W
All drives asleep (except cache due to VM idle activity, CPU 7-10% load, with spikes up to 25% load): 75W
Read check (all disks active): 135-140W.

Pauven
Posts: 2777
Joined: Tue Dec 26, 2017 10:28 pm
Location: Atlanta, GA, USA

Re: What is Unraid and how to build an Unraid media server

Post by Pauven » Sat Mar 26, 2022 9:30 am

Manni wrote: Sat Mar 26, 2022 5:36 am This whole data move is already taking me a month, so given that I have backups for the most vulnerable stages, thanks to the way I've designed the 30-step transfer process, I'm not doing additional pre-clears, or it would take me two months. So far, so good...
Maybe you're exaggerating for effect, but if you did 1 additional pre-clear on all 6 drives at the same time, it should only add 2 days.
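Rough math behind that estimate, assuming the classic three-phase pre-clear (pre-read, zero, post-read) at an average of ~170 MB/s per phase; both figures are assumptions rather than measurements:

[code]
# Rough pre-clear duration: three full passes (pre-read, zero, post-read) over the disk.
# The phase count and ~170 MB/s average are assumptions, not measured figures.
def preclear_hours(size_tb, phases=3, avg_mb_s=170.0):
    one_pass_s = (size_tb * 1e12) / (avg_mb_s * 1e6)
    return phases * one_pass_s / 3600

print(f"One 6 TB drive: ~{preclear_hours(6):.0f} h")  # ~29 h, a bit over a day
# Six drives pre-cleared in parallel take about the same wall-clock time,
# so the whole batch adds on the order of a day or two.
[/code]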

But otherwise it sounds like you know what you're doing and have put a lot of thought about safe practices into every step. I'm sure it will go well.

Manni wrote: Sat Mar 26, 2022 7:19 am I solved the 24W on stand-by mystery
Sweet! 8-)

Manni wrote: Sat Mar 26, 2022 7:19 am It's the stupid Corsair SF750 that's drawing 24W even with nothing connected to it
WTF!? That's an expensive 80 Plus Platinum rated PSU! Never in a million years would I have expected that to be the problem.

But I'm no fan of Corsair. Years ago I bought into their cooling hub solution, the worst piece of junk I ever put in a PC. I tried to get it to work right for months, and finally recognized that only sheer incompetence at the company level could produce, sell, and then mangle the support of such a simple product. I ripped it out and never bought anything Corsair-branded again.

Seasonic is a great company. No surprise their PSU worked right.
President, Chameleon Consulting LLC
Author, Chameleon MediaCenter

Manni
Posts: 593
Joined: Wed May 22, 2019 5:27 am

Re: What is Unraid and how to build an Unraid media server

Post by Manni » Sat Mar 26, 2022 11:10 am

Well, 6 disks would take two days, but I have more than 40 disks in total, so yes, I exaggerated a little, but it would add a couple of weeks.

Yeah, I never buy Corsair PSUs either (I do like their RAM though); I usually buy Seasonic PSUs for all my builds, but I had to buy that Corsair PSU as it was the only one I could find in the form factor I needed for the eGPU that was also silent. Complete waste of money, though I'll keep it as a spare for the servers as it now has all the leads (I had bought two extra-long leads for the Molex connectors to go to the backplane).

Also, thanks a lot for posting your PWM discoveries. I tested and was shocked to see that the chassis fans used 35W at 100%! I was only running them at 75-80% with the Noctua regulator, to reduce the hellish noise at max, but I decided to give the dynamix plugin another try, and at least with server B the results are great.

I managed to fine-tune the minimum PWM value to 134, which gives me 1950 RPM at 40%, so a significant saving when the fans are not needed. Anything below that and I get 0 RPM.
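For anyone curious, the underlying idea is just a fan curve on the Linux hwmon 0-255 PWM scale with a floor at the stall point. A toy Python sketch, not the dynamix auto fan plugin's actual logic, and the temperature endpoints are made-up values:

[code]
# Toy linear fan curve on the Linux hwmon 0-255 PWM scale.
# PWM_MIN = 134 is the floor below which these particular fans stall (0 RPM);
# the temperature endpoints are made-up values, not the dynamix plugin's logic.
PWM_MIN, PWM_MAX = 134, 255
TEMP_LOW, TEMP_HIGH = 35.0, 45.0  # degrees C: idle drives vs. parity-check load

def pwm_for_temp(temp_c):
    if temp_c <= TEMP_LOW:
        return PWM_MIN
    if temp_c >= TEMP_HIGH:
        return PWM_MAX
    frac = (temp_c - TEMP_LOW) / (TEMP_HIGH - TEMP_LOW)
    return round(PWM_MIN + frac * (PWM_MAX - PWM_MIN))

for t in (30, 38, 42, 50):
    print(t, "C ->", pwm_for_temp(t))
[/code]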

Here are the results for server B (also a 3770K now, Nvidia 7600GS, 16GB RAM, Asus P8P67 Pro MB):

Standby/off: 7W with the Antec PowerTrio 550; no WOL from shutdown, only from standby
Boot: 130-170W (peak at 240W with 6 HDDs)
After boot, idle, no VM: 140W
All disks spun down: 105W, so it looks like the 7600GS costs me around 20-30W. I'd rather have this on server B because it will be off most of the time.
All disks in use: not tested yet; I'll do this when I have my 20 disks online.

I'll post the revised numbers for server A when I've had a chance to decommission the Noctua regulator and adjust the dynamix auto fan settings.

Manni
Posts: 593
Joined: Wed May 22, 2019 5:27 am

Re: What is Unraid and how to build an Unraid media server

Post by Manni » Sat Mar 26, 2022 12:44 pm

Re my previous post, I hit a wall on server A that made me rethink what I did on server B.

My chassis fans were working fine with the Noctua on the new motherboard in server A because I was using a SATA power connector to feed the chassis fans for the drives (3 x 140mm just behind the backplane).

When I try to connect 3, or even 2 with a splitter, to any of the chassis fan headers directly, they don't work. Either chassis fan connector on the mobo will only accept a single fan, so I guess they are limited to 12W, which is the standard.

So I have to use the Noctua on server A, but in that case, even if I connect it to the PWM connector, the speed is only controlled by the Noctua controller, so it can only be set manually. So I set it to around 80%, which is 3,000 RPM, to be on the safe side. That's a bummer, because it means more noise and more power even when there is no need to cool down the drives.

I've had second thoughts about server B too, because even if the MB chassis fan headers seem happy to take the three fans, I don't want to fry the connector when/if they go up to 100% and draw 35 watts. So I might put the Noctua controller back there as well. I'm not sure if there is a way to get just power from Molex or SATA and use the PWM header on the mobo only to control the speed. I thought this was what the Noctua would do, but it doesn't seem to be the case.
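Here's the quick math behind that worry (the 1A / 12W per-header rating is the assumption mentioned above, not something I've confirmed for this board):

[code]
# Quick check of why one motherboard fan header shouldn't feed all three fans.
# The 1 A (12 W at 12 V) per-header rating is an assumption taken from the post above.
HEADER_LIMIT_W = 12.0
TOTAL_FAN_W = 35.0     # measured draw of the 3 chassis fans at 100%
FANS = 3

per_fan_w = TOTAL_FAN_W / FANS
per_fan_a = per_fan_w / 12.0
total_a = TOTAL_FAN_W / 12.0

print(f"Per fan: {per_fan_w:.1f} W ({per_fan_a:.2f} A) -> within a {HEADER_LIMIT_W:.0f} W header")
print(f"All three on one header: {TOTAL_FAN_W:.0f} W ({total_a:.2f} A) -> "
      f"{TOTAL_FAN_W / HEADER_LIMIT_W:.1f}x the rating")
[/code]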

Jamie
Posts: 942
Joined: Wed Dec 27, 2017 11:26 pm

Re: What is Unraid and how to build an Unraid media server

Post by Jamie » Sat Mar 26, 2022 5:24 pm

Hi Paul,

What do you recommend for a preclear app that is compatible with 6.9.2? I just upgraded, and I believe the old preclear app is no longer compatible. Everything is running fine, by the way.
