
https://www.scan.co.uk/products/logic-c ... -s-minisas
 
 
Manni wrote: Thu Dec 23, 2021 3:55 pm
Re the disks when you replace the controller, the issue is that they might have a different disk ID

No, the disk ID is part of the disk, has nothing to do with the controller.
Manni wrote: Thu Dec 23, 2021 3:55 pm
this can happen when you update the f/w of your controller, and the controller reports IDs differently.

Where are you getting this info from? The disk ID is the manufacturer's ID combined with a serial number of the disk. Controller shouldn't ever change that.
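If you ever want to see those IDs for yourself, here's a rough sketch that lists them on any Linux box (Unraid included). It just reads the standard /dev/disk/by-id links that udev builds from each drive's model and serial number; the drive name in the comments is a made-up example, not one of your disks:

from pathlib import Path

BY_ID = Path("/dev/disk/by-id")   # standard udev directory of persistent disk IDs

for link in sorted(BY_ID.iterdir()):
    # Skip partition entries and WWN aliases; keep the ata-/scsi- style names
    if "-part" in link.name or link.name.startswith("wwn-"):
        continue
    target = link.resolve()       # e.g. /dev/sdc - this CAN change with controller or boot order
    print(f"{link.name}  ->  {target}")
    # A typical name looks like ata-WDC_WD80EFAX-68KNBN0_VAG12345 (model + serial),
    # and that part stays the same no matter which HBA or port the drive hangs off.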
Manni wrote: Thu Dec 23, 2021 3:55 pm
In that case, you have to recreate the array, and provided you get the disks in the correct order, all should still be there.

In the olden days, with single parity, drive order didn't even matter. The parity data is just a checksum of all the bits at the same location on all disks, just basic math, doesn't matter which order you sum the bits. I'm admittedly confused as to how Lime Tech figured out how to do dual parity, and I'm not sure if order matters. In my mind it matters, but somehow I think it still doesn't matter - but I really don't know.
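Just to make the single-parity part concrete, here's a toy sketch of the math. The note about parity 2 in the comments is only my understanding, so take it with a grain of salt:

from functools import reduce
from operator import xor

# Pretend these are the bytes at one offset on three data disks
disks = [0b10110010, 0b01101100, 0b11100001]

# Single parity is just the XOR of the bytes at the same position on every disk,
# and XOR is commutative/associative, so the disk order can't change the result.
parity_forward = reduce(xor, disks)
parity_reversed = reduce(xor, reversed(disks))
assert parity_forward == parity_reversed

# Rebuilding a missing disk: XOR the parity with the surviving disks
missing = disks[1]
rebuilt = parity_forward ^ disks[0] ^ disks[2]
assert rebuilt == missing
print("parity:", bin(parity_forward), "rebuilt:", bin(rebuilt))

# My understanding is that the second parity is a Reed-Solomon style "Q" value
# where each disk's contribution is weighted by its slot number, so for parity 2
# the assignment of disks to slots would matter - but don't quote me on that.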
Manni wrote: Thu Dec 23, 2021 3:55 pm
For now, MC has moved gracefully to disk 3 and has started to fill it up. It's still moving data at a nice and steady 110MB/s, so it's definitely more performant than the Windows client, as the speed would fluctuate between 112MB/s and 80MB/s, and even down to 60MB/s if I paused the copy process.
Thanks again for all your help, hopefully the rest of the process will take place while I do other things now. I'll update you when I get the new controller/cables in a week or so, unless I find the time to explore the VM rabbit hole, in which case you might hear from me (assuming it's still ok to bother you with this, don't hesitate to let me know if it's not).

Awesome! No bother at all, I enjoy this stuff.
Pauven wrote: Thu Dec 23, 2021 6:21 pm
Where are you getting this info from? The disk ID is the manufacturer's ID combined with a serial number of the disk. Controller shouldn't ever change that.

Here is what can happen when you change the f/w and the disks are not reported the same way by the controller: https://forums.unraid.net/topic/33187-s ... ent-322305
Manni wrote: Thu Dec 23, 2021 6:12 pm
You will also need 6 SAS leads. If it's the same as mine (a nearly identical model) you will need 8643 plugs on the end that connects to the backplane.

That's a scary comment - what if it's not the same? I'd double-check with the seller/manufacturer.
Manni wrote: Thu Dec 23, 2021 6:12 pm
As I plan to buy this controller to go with it: https://www.scan.co.uk/products/24-port ... as-pcie-30 I'll have to get 6x 8643 on each end, as it looks like the 9305 uses the same connectors.

Just be careful that users report it as supported in Unraid. I didn't recognize the card, so I googled and found some Unraid users reporting success with an LSI 9305, and also mention of an Avago 9305, but not Broadcom. Most likely these are all the same card under different names, but do your homework. Many a user has bought a card that doesn't work with Unraid - most work, but not all; learn from others' mistakes.
Manni wrote: Thu Dec 23, 2021 6:31 pm
Pauven wrote: Thu Dec 23, 2021 6:21 pm
Where are you getting this info from? The disk ID is the manufacturer's ID combined with a serial number of the disk. Controller shouldn't ever change that.
Here is what can happen when you change the f/w and the disks are not reported the same way by the controller: https://forums.unraid.net/topic/33187-s ... ent-322305
So I assume this could also happen when you change the controller, but the solution (new config) seems simple enough. I've tried that already during my tests to familiarise myself, using non-essential data in the array.

Thanks for that link. Strike experienced this problem because his original card wasn't working right at all, and was misreporting the ID. Notice how the ID looked completely different before and after the FW upgrade - I've never seen anything look like the before example he provided. It was borked. After the FW fixed his controller, it reported correctly, and that value should be the same regardless of which controller is reporting it.
Manni wrote: Thu Dec 23, 2021 6:31 pm
Remember, I don't have any parity disks yet, to speed up the migration (I did some performance tests initially, which confirmed the 50% drop in perf, so I recreated the array without them before starting my migration). I'll only add them when all the data is there and the new controller is happy with it. So that's one less thing to worry about.

Yeah, so then it's next to impossible for you to bork this up!
 That's the beauty of Unraid, you can blow away and redefine your array a dozen times, and as long as you tell Unraid to trust the drives (and not zero them out), it will just work.  Just like if you plugged them into a Windows machine.  There is literally no striping or parity data on these drives, so you could pull one out and plug it into an external USB enclosure and take it on the road if you wanted, just so long as you had an OS that could read whatever disk format you chose (assuming you chose XFS, but I never asked).
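If you ever did pull a drive, here's a rough sketch of what reading it on another Linux machine would look like. The device name and mount point are placeholders, and it assumes the data disks are XFS (needs root):

import subprocess
from pathlib import Path

DEVICE = "/dev/sdx1"                  # placeholder: the data partition on the pulled drive
MOUNTPOINT = Path("/mnt/loose-disk")  # placeholder mount point

MOUNTPOINT.mkdir(parents=True, exist_ok=True)
# Mount read-only so nothing on the disk gets touched
subprocess.run(["mount", "-t", "xfs", "-o", "ro", DEVICE, str(MOUNTPOINT)], check=True)
try:
    # Your shares/folders show up exactly as Unraid wrote them - no striping to reassemble
    for entry in sorted(MOUNTPOINT.iterdir()):
        print(entry.name)
finally:
    subprocess.run(["umount", str(MOUNTPOINT)], check=True)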
I've done some research before going that way, and the fact that the data isn't striped over all the disks is a big reason why I decided to move to UnRAID (and move away from conventional RAID boxes), so I fully understand how it works. Also, given that my limitation is my gigabit network (I have no intention to move to 10Gb any time soon), I don't lose any performance vs RAID 6, which was a concern I had. I never got more than 115MB/s on my gigabit network, so UnRAID doesn't do badly (assuming the disks are fast enough individually, obviously). The fact that only one drive needs to spin up when accessing the data is great (I organised my data that way). Makes a lot more sense than having 24 drives spin up (as on my Synology!).
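For what it's worth, those numbers line up with a quick back-of-the-envelope on gigabit Ethernet. The overhead figure below is just a rough assumption, not a measurement:

link_bits_per_s = 1_000_000_000            # 1 Gbit/s
raw_MB_per_s = link_bits_per_s / 8 / 1e6   # 125 MB/s before any protocol overhead
efficiency = 0.94                          # rough allowance for Ethernet/TCP/SMB framing
usable_MB_per_s = raw_MB_per_s * efficiency

print(f"raw line rate:   {raw_MB_per_s:.0f} MB/s")    # 125 MB/s
print(f"usable estimate: {usable_MB_per_s:.0f} MB/s") # about 117 MB/s, same ballpark as the 110-115 I'm seeing

So the wire, not the disks, is the bottleneck, which is why a single UnRAID disk keeps up with the RAID 6 box over the same network.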