What is Unraid and how to build an Unraid media server

Show off your HTPC builds, NAS Servers, and any other hardware. Great place to ask for hardware help too.
Pauven
Posts: 2787
Joined: Tue Dec 26, 2017 10:28 pm
Location: Atlanta, GA, USA

Re: What is unraid and how to build an unraid media server

Post by Pauven » Sun Mar 03, 2019 1:11 pm

Clearing Drives
Unraid uses an unusual parity scheme: the bits in the same position across all data drives are combined (for single parity, this is effectively a bitwise XOR), and the result is written to the same position on the parity disk. If a drive is replaced, that simple formula can be reversed to rebuild the replaced drive's contents. With dual parity, a different calculation is performed and written to the 2nd parity disk, so that two drives can be rebuilt at the same time.
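To make that concrete: for single parity, the "addition" is effectively a per-bit XOR, which is its own inverse. A minimal sketch, with one byte standing in for each drive:

```shell
# Single parity as bitwise XOR, one byte per "drive"
d1=170; d2=51; d3=15
parity=$(( d1 ^ d2 ^ d3 ))

# Drive 2 dies: rebuild it from parity plus the surviving drives
rebuilt=$(( parity ^ d1 ^ d3 ))
echo "$rebuilt"          # 51 -- identical to the lost d2

# A freshly zeroed drive joining the array leaves parity unchanged
echo $(( parity ^ 0 ))   # same value as $parity
```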

Unraid also has a great feature in that you can add drives at any time, expanding your storage. When you first add a drive, Unraid does not make it immediately available for use. Instead, it first "Clears" the drive by writing zeros across the entire drive, then writes an Unraid signature, and finally adds the drive to the available storage. Because the drive is all zeros at that point, this last step does not alter the previously calculated parity values, since adding zero to a value does not change it.
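A Clear is just a zero-fill. The sketch below demonstrates the idea against a 1 MiB scratch file rather than a real device (never point dd at a disk whose contents you care about):

```shell
# Zero-fill a 1 MiB scratch file (stand-in for a drive), then verify every byte
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=1 status=none
cmp -n 1048576 "$f" /dev/zero && echo "all zeros"   # prints "all zeros"
rm -f "$f"
```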

During this drive clearing process, the Unraid system runs in a reduced-performance state (Note: it may actually be that Unraid makes the array unusable during the zeroing; I've seen that mentioned, but it has been so long since I've let Unraid clear a disk that I truly don't remember). Considering that simply Clearing a disk (writing zeros) takes around 2 hours per TB, you can see this is incredibly inconvenient, especially when adding larger drives.


Pre-Clearing Drives
The wonderful Unraid user Joe L. realized that he could write a script to do this Clearing routine before you even add the drive to your array. By doing the Clearing outside the array, array performance would not be impacted during the Clearing. This also opened up the possibility of Clearing drives on a different machine, or simply clearing a stack of drives that you then store offline, waiting until you need to use them.

Joe L. also recognized the anecdotal wisdom that HDDs most commonly fail either when brand new (manufacturing defects) or very old (wear and tear), and that he could turn his Pre-Clearing script into a stress test to validate that drives are healthy, and hopefully to cause new drives with manufacturing defects to fail early, before you add them to your array, and when returning them to the seller is likely easier.

Joe L. made his Pre-Clear script configurable to suit the goals of different users: you can do a simple one pass, writing the zeros and the signature the same as Unraid does, or you can go with multiple passes, as many as you desire, to really give a drive a workout. A standard pass includes a pre-read, a write, then a post-read to validate the data (zeros) were written correctly. The script will even give the before and after SMART health stats, so you can see if a drive is tending towards a failure profile.

With the default config, a Pre-Clear takes roughly 3x as long as a single read of the drive; i.e. if it takes 17 hours to read a full 8TB drive, it will take 50+ hours to read + write + read it. Adding more passes increases the processing time further.
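The timing math above, using the rough 2-hours-per-TB figure (actual speeds vary by drive and interface):

```shell
size_tb=8
per_pass=$(( size_tb * 2 ))     # ~16 h to read (or write) the whole drive once
cycle=$(( per_pass * 3 ))       # pre-read + zero write + post-read
echo "~${per_pass} h per pass, ~${cycle} h per full cycle"
```

That lands close to the 50+ hours quoted above; large drives often read slower near the end of the platter, which pushes the real number higher.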

Running a Pre-Clear on all drives, even previously used drives from other devices, has become a best practice for Unraid. Knowing that a drive is healthy before you add it to your array is a huge benefit that, as far as I am aware, is unique to Unraid. Another benefit is that when adding a Pre-Cleared drive to an Unraid array, due to the presence of the Cleared signature on the disk, Unraid adds the drive instantly, without the long period of array downtime experienced when you allow Unraid to do this Clearing.

Unfortunately LimeTech has not seen fit to integrate a Pre-Clear function into Unraid, so you are still forced to use 3rd party solutions.


Is there a Pre-Clear Plugin?
Yes, there is a plugin named "Preclear Disk", by gfjardim. This plugin ships by default with gfjardim's own pre-clearing script, and it needs to be noted that this is NOT the Joe L. script. gfjardim had different goals for his script (quickest possible clearing, less stress on disks), which many users (myself included) found objectionable: we actually want to stress test the disks, not take it easy on them.

I did some testing with an older version of the Preclear Disk plugin back in August of 2016, and I had very poor results with gfjardim's script: https://forums.unraid.net/topic/54648-p ... ent=485163

I discovered some high memory consumption issues with the Preclear Disk's built-in script. Worse, I discovered that the Preclear Disk script wasn't properly flagging two known bad drives as bad. I documented these issues, and the lack of response from the developer left a bad impression on me, so the next time I did a Pre-Clear I went back to Joe L.'s script, which works perfectly.

Unfortunately, Joe L. has expressly denied consent for his script to be built into a plugin, and has also refused to do so himself.


Running Joe L.'s preclear_disk.sh Script
The challenge with Joe L.'s script is that it is command-line based, and if you want to run more than one instance at a time you need to use Screen to manage multiple console sessions that you can toggle amongst. Actually, the Screen session manager is wise to use even with just a single console session: if you accidentally get disconnected from the remote console, you can get back to it by resuming your session via Screen.
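A typical Screen workflow looks like this (the session names are just examples):

```shell
screen -S preclear1     # start a named session; launch preclear_disk.sh inside it
# Ctrl-A then d         # detach, leaving the script running in the background
screen -ls              # list running sessions
screen -r preclear1     # reattach to a session, even from a brand-new SSH login
```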

You can install Screen via the Nerd Pack plugin (which is an awesome plugin for installing tons of Linux utils). And you can read more about Joe L.'s preclear_disk.sh script here: https://forums.unraid.net/topic/2732-pr ... quick-add/

It does appear that Joe L. has stopped support of this script, with his last update being Apr 05, 2014.


But I really want a Plugin!
Me too. Luckily, gfjardim has made his Preclear Disk plugin compatible with Joe L.'s preclear_disk.sh script. You'll need to use a patched version of Joe L.'s script and copy it to the right location, but once you've done that you can use the Preclear Disk plugin as a front-end for Joe L.'s most excellent preclear_disk.sh script.

The patched version of the script simply changes a few things to make it compatible with the plugin and with newer versions of Unraid. You can find it here: https://raw.githubusercontent.com/gfjar ... isk_15b.sh

The Preclear Disk plugin also uses Screen to manage multiple instances of the preclear_disk.sh script, same as if you were doing it from the command line, so you'll need Screen installed too.


I really wish there was a video on this
Spaceinvader One to the rescue, and he does give some additional details that I omitted from my write-up above: https://youtu.be/csGYrd5G0ik


Summarized Step-by-Step Instructions
  1. Install the Community Applications plugin (which itself is a front-end for installing all other plugins)

    Code: Select all

    Plugins > Install Plugin > https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg
    
  2. Install Nerd Pack Plugin: Apps > search for Nerd Pack
  3. Enable Screen in the Nerd Pack (Settings > Nerd Pack)
  4. Install Preclear Disk: Apps > search for Preclear Disk
  5. Download the patched version of Joe L.'s preclear_disk.sh script and save to \\<servername>\flash\config\preclear.disk. Make sure you save it as "preclear_disk.sh".
  6. With a target disk inserted in the server, run a Preclear: Tools > Preclear Disk
  7. Click "Start Preclear" for the disk you want to Preclear.
  8. Change the Script from "gfjardim - 1.0.3" to "Joe L. - 1.15"
  9. Set the options you want
    My Preferences: Operation = Clear, Cycles = 1, Browser Notifications on every 25%, Default on Read/Write, and I don't skip the Pre-read
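For step 5, note that the flash share (\\<servername>\flash) maps to /boot on the server itself, so from a console session you can sanity-check that the script landed in the right place (path assumes the plugin's default script directory):

```shell
ls -l /boot/config/preclear.disk/preclear_disk.sh
```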

Hope this helps!
Paul
President, Chameleon Consulting LLC
Author, Chameleon MediaCenter

Jamie
Posts: 945
Joined: Wed Dec 27, 2017 11:26 pm


Post by Jamie » Sun Mar 03, 2019 2:14 pm

Thank you for the clear (and Pre-Clear) info, Paul. I will run the script to test out newly added drives first thing.

Paul, you mentioned in the past running a job once a month to do a rebuild, or something. Can you describe what that is about?

Also, I found this article on running Unraid without parity disks. I guess it can run like one giant virtual hard drive, and if one drive fails you only lose, and have to restore, the data that was on that drive.

I would turn on parity once all my Drobos are decommissioned:

https://www.reddit.com/r/unRAID/comment ... o_parity/


Post by Pauven » Sun Mar 03, 2019 2:59 pm

Jamie wrote: Sun Mar 03, 2019 2:14 pm Paul, you mentioned in the past running a job once a month to do a rebuild, or something. Can you describe what that is about?

That is the monthly Parity Check. You don't have to do it monthly, but that is the most common schedule and is considered a best practice. The Parity Check spins up all the drives and reads all of the data from all the data drives and the parity drives, calculating the correct parity values on the fly, and verifying that the correct values are stored on the Parity Drives.

You can run this either as a Check Only, or as a Check and Correct, which will fix parity values if they are wrong.

If you ever ungracefully stop your array (i.e. pull the power plug, or the whole server hangs and you reboot it), then the next start-up will automatically initiate a new Parity Check, as Unraid assumes the downtime event may have occurred while data was being written and some parity data may be invalid.

Parity Checks are a unique concept to Unraid. On a Drobo, the parity data is calculated once and, as far as I am aware, never validated. The Parity Check feature in Unraid allows you to revalidate parity. 99.9% of the time your Parity Checks come back clean (no calculation errors), though if you do start to see some errors then that is typically the first sign that you may have a pending drive failure.

Because Parity Checks take a long time (mine take 18.5 hours, varies based upon hardware and HDD sizes) these can be stressful events for your drives, and it is very important that you have a great case with excellent cooling. The X-Case is one of the best with 3x 120mm fans pulling air through your drives.


Jamie wrote: Sun Mar 03, 2019 2:14 pm Also, I found this article on running unraid without parity disks. I guess it can run like one giant virtual hard drive and if one drive fails you can just replace that data on the failed drive.

Correct, Parity is entirely optional. You can even remove parity if you want, and add it again later; Unraid is very flexible. Having 2 Parity Disks allows you to recover from 2 concurrent drive failures, having 1 Parity Disk allows you to recover from 1, and having 0 Parity Disks means no recovery is possible: you would have to manually restore any lost data, but you would only lose the data on the failed drive.

I highly recommend 2 Parity Disks, as you are especially vulnerable if a drive fails and you are doing an 18 hour rebuild to replace it. If a 2nd disk fails during the rebuild and you only had 1 parity, that is a really bad scenario. By having 2 Parity Disks, you are still protected during a rebuild operation.


Jamie wrote: Sun Mar 03, 2019 2:14 pm I would turn on parity once all my drobos are all decommissioned.

Sorry to beat a dead horse, but I want to be extremely clear on this: Turn on Parity BEFORE decommissioning any Drobos. Once you decommission a Drobo, that data will only exist on your Unraid server, and you want Parity enabled before then, not after.

I still recommend doing some copy tests to see how much slower your copies will be with parity turned on vs. no parity. If the performance impact is small, you may want to enable parity before you even begin your copies, which is certainly the safest route. But if the impact is too great, you can do as I suggested: make your copies with no parity enabled, then enable parity, then decommission your Drobos.

Paul


Post by Jamie » Sun Mar 03, 2019 5:46 pm

Just to clarify. I agree with you to make sure all the Drobo data is properly propagated to the unraid box and that 2 disk parity is turned on before unplugging the drobos. I do not want to go through what I am going through with my Drobo_V box again. Fortunately I have been "somewhat" organized when I put the disks into storage. It's just a matter of finding the proper crates filled with the V box disks.

Jamie


Post by Jamie » Mon Mar 04, 2019 4:13 pm

Here are two videos from NewEgg on how to Build a PC.

The first video mainly describes how to install a CPU into various motherboards

Part 1:

https://www.youtube.com/watch?v=aPJkaE05pxM

Part 2 is how to build a computer from beginning to boot.

1. It describes CPU installation, cooler installation, and memory installation
2. Describes power supply installation. Hard disk installation and cable routing
3. Bootup hopefully.

Part 2:

https://www.youtube.com/watch?v=d_56kyib-Ls

If these videos spark any ideas or comments, please share them. Paul, any gotchas for BIOS setup afterwards?

Jamie


Post by Pauven » Mon Mar 04, 2019 5:48 pm

I checked out the videos. The Part 1 video seems redundant as the Part 2 video shows installing the CPU, and this looks very similar to what you will be experiencing. The cooling fan that comes with that CPU should already have the thermal paste applied just as he showed - just be careful not to smudge it before you go to install it on the motherboard.

I'm not sure how effective his static grounding advice is, especially if you are touching an un-grounded case. To me it is better to take the power supply and plug it in, then touch the power supply to ground yourself, as at least that really would be grounded. The power supply doesn't even need to be turned on, just plugged in.

When he shows installing the video card, that won't apply to you as your video card will be built into the CPU, and you will use the video output connectors that are on the motherboard.

I wouldn't worry about connecting a speaker to hear the POST beep.

When you install the I/O shield into the case, it is often difficult to get it pushed in all the way on all sides, so be sure to double-check your work. Also, when you go to install the motherboard, slide the I/O ports horizontally into the I/O shield: there are metal tabs on the shield that stick out, and they like to get into your USB and Ethernet ports, which would be bad. Double-check the USB/Ethernet ports on the back before you insert any screws.

Definitely have a long screwdriver with a magnetic tip for when you go to screw in the motherboard screws. It's not fun chasing screws that fall on the motherboard.

Something specific to the X-Case: When you run the power cables from the power supply to the backplanes (which the hard drives plug into), do not use more than one or two connectors per cable. For example, if there are 6 power plugs on the backplanes, you want 3 separate power cables, each with two plugs going to two adjacent backplanes. That way you spread the power load across different cables, and sometimes the power supply's internal circuits also put each cable on a different channel.

Regarding BIOS, I typically start with any performance defaults, then turn off all the features I don't use (i.e. serial ports, parallel ports, RAID functionality, Wi-Fi if the motherboard has it). I also make sure that the option for behavior after a power failure is set to OFF, as I don't want my server starting back up on its own after a power failure, when the power might still be unsteady in a major storm.

Paul


Post by Jamie » Mon Mar 04, 2019 6:02 pm

Thank you, Paul. This sounds like good advice to follow. I will know better when I start putting things together.

Jamie


Post by Pauven » Fri Mar 08, 2019 12:05 pm

Since we've discussed Unraid's ability to use all of your available space, I wanted to share this pop-up alert I got today in Unraid.

In the top right, you'll see the red alert box informing me that Disk 15 is low on space (98% utilization). I also get an email with the same info - Unraid is very helpful.

Yesterday I copied 3 Blu-rays out to my TV_Series share (via the Cache) and overnight the Mover relocated them to Disk 15, prompting the alert. I think usage actually decreased a bit after the alert, as current usage is closer to 95%. I've noticed that during copies from the Cache, utilization is sometimes reported higher than the final result; not sure why, maybe temp files.

If you look at my Disk 15 row, you'll see that I've used 3.77 TB of this 4 TB disk, and that there are still 226 GB free. Based upon my settings for Blu-rays (min space 180GB) and DVD's/TV_Series (min space 100GB), I can expect that Unraid will continue sending even more data to this drive. By the time it hits the 100GB or less mark, Unraid will have used about 98% of this drive with zero intervention from me, and zero errors.
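The fill ceiling implied by that min-free setting works out as you'd expect (decimal GB, integer math; the ~98% above vs. ~97% here is just rounding and decimal-vs-binary units):

```shell
disk_gb=4000       # Disk 15's size
min_free_gb=100    # min-free setting for the TV_Series share
echo "$(( (disk_gb - min_free_gb) * 100 / disk_gb ))%"   # 97% automatic-fill ceiling
```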

And as I explained before, I can manually copy some more data directly to that drive to further fill it up, it just won't happen automatically through my configured Shares.

[Screenshot: Unraid dashboard showing the red low-space alert for Disk 15]


Post by Jamie » Fri Mar 08, 2019 3:31 pm

Hi Paul,

This might be a stupid question but I'll ask it anyway. On the X-Case drive bays, how does the numbering go? Top to bottom and right to left? How do you attach the drives to the motherboard SATA controller and the controller cards to ensure the numbering comes out right? How can I be sure during my installation that drive 15 is where I expect it to be? Any gotchas I should be aware of?


Post by Pauven » Fri Mar 08, 2019 6:39 pm

I think that is a great question.

First, it should be said that Unraid itself won't care if you randomly connect the drive bays in any order. You plug in a drive, Unraid assigns a drive letter to it and makes it available to use. Then you assign a drive (not a slot, but a drive) to a Disk # (not a letter, but a number) or to Parity or Cache. You can then pull out the drive and pop it into another slot, and back in the day with only 1 drive parity, Unraid wouldn't even care that you moved the drive to a new slot. Each drive has an ID (model + serial #), as you can see in my screenshot above, so Unraid knows if you relocated a drive, and remembers what role that drive has.

Now, with 2 parity drives, I think that moving a drive to another slot invalidates parity, so you would have to rebuild the parity, but Unraid can still track that you moved the drive. I could be wrong about this invalidating parity, however, so take this with a dash of salt. But in general I don't recommend willy nilly moving drives around to different slots with dual parity in play.

When you connect the LSI controller to a backplane, each cable will connect to one row of 4 drives, and it will assign drive letters in order from left to right.

For example, if your flash drive is sda, then the controller might assign sdb, sdc, sdd, and sde for a row of drives.

At one point, I made sure that sdb, sdc, sdd, and sde were my top row. But looking at my screenshot, I can see that is no longer the case, as somehow they are now my 5th row. To be honest, this can change from boot to boot, depending upon which controller jumps in line first during the boot registration process. It really doesn't matter. Unraid maps a drive ID to a Disk #, regardless of what drive letter it is assigned.

What I do know for sure is that Drive 1-4 is my top row. You can assign any drive to any Disk #, so that is how I really keep track of it. I have my top physical row assigned as drives 1-4, they just happen to be drive letters sdi, sdj, sdk, and sdl.

So Unraid doesn't care, and while it would be nice if my top row was sdb-sde, as you can see it doesn't really matter. What matters is that I assigned my physical top row to Drives 1-4. Just an FYI, I keep my two parity drives in the bottom left and bottom right, so slot 21 and slot 24 if you want to think of it that way. I keep slots 22 and 23 unused in the array, and instead use them for pre-clearing new drives.

While it is purely optional (like I said, it doesn't matter if sdg is in slot 15 or 22 or 8 or 3), you may want to take extra care when you plug in the 8 SATA ports from the motherboard into the backplane. Because you have to plug into all 8 ports individually, you might do so in a way that gets those drive letters out of order (i.e. d,f,e,b,c,i,h,g instead of b,c,d,e,f,g,h,i). The only way you will know is to plug them all in during assembly, boot up, install a drive, and see what drive letter each slot comes up as.

You can hotplug drives with Unraid, so you can use one drive and plug it into each slot one by one, and see what drive letter it registers. If you don't like the drive letter a slot uses, then powerdown and move your cables around - that's your only method of control. To me, I like to know that each row gets assigned a drive letter from left to right in increasing sequence - makes for easier troubleshooting, but again this doesn't matter, it's just me being overly controlling about things that matter to me, but not to Unraid.

Keep in mind that your boot drive (the USB flash) will most likely show up as sda, the very first drive letter, so you likely won't be able to get your slots to be sda-sdx; at best you can do sdb-sdy. But since you will also have 2 cache drives plugged into the motherboard SATA ports, they will show up somewhere in the middle. Also, you'll find it hard to control how Linux likes to assign drive letters. It may assign sda to the flash, then sdb-sdg to the 6 main motherboard SATA ports, then sdh-sdo to the first LSI controller, then sdp-sds to the other 4 motherboard SATA ports, then sdt-sdaa (sdaa is the drive letter AFTER sdz) to the other LSI controller.
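One trick for matching physical drives to what Unraid shows, regardless of letter shuffling: the /dev/disk/by-id links embed model + serial, and those never change between boots:

```shell
# Stable IDs survive reboots even when sdb/sdc/... get reshuffled
ls -l /dev/disk/by-id/ | grep -v part   # filter out per-partition entries
```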

So don't fret too much on trying to achieve the perfect drive letter order. Instead, focus on mapping a physical drive slot to the same numbered slot and drive #.

What will probably be of great interest to you is a Server Layout plugin that I use:

[Screenshot: Server Layout plugin showing the drive-to-tray assignments]

This is a really good plugin for multiple reasons. The main one is that it lets me track where all my drives are, in a layout that represents my physical case. So you see I have Disk 1 in the top left of row 1, Disk 20 in the bottom right of row 5, and my parity drives in the bottom corners. I like to keep my parity drives in the bottom row, as they get used the most of any drive, and putting them in the bottom means they get less heat applied to them from neighboring drives - at least that is my theory.

The Server Layout plugin allows me to assign a Drive ID to a Tray #. So I take my drive in the top left tray, ID WD-WMC1T00073984, and I assign it to Disk 1 in my Array, and Tray 1 in my Server Layout plugin. The fact that this is drive letter sdi doesn't matter. I know that Disk 1 is always my top left tray, and the Tray 1 is also my top left tray.

It takes a few minutes of management, but it really is not a big deal. Usually, once you set this up once, it pretty much takes care of itself.

The other nice thing about the Server Layout plugin is that you can track drive info, like when you bought it and from where, the warranty, how much you paid, the date it was first installed, the firmware, and more. So anytime you swap out a drive, you go to the Server Layout plugin, enter any info you want to track, and assign it to the virtual tray #.

I think there is a way to have Unraid flash the activity light on a drive to help you find it. It might require a plugin, not sure. I haven't needed this, because I always make sure that my physical slots are assigned numerically to my array disks. So if I'm swapping out disk 15, I know off the top of my head that 15 is in row 4 from the top, 3rd from the left, because that is how I assigned the physical drive to the array in the first place.

Hope that helps.

Paul
