I'm not sure I would recommend this, but here's the controller I'm using: HighPoint 2760A, a 24-port SAS/SATA controller. It matches up with my case, so it greatly simplified cabling (just 6 cables for 24 drives). It came in 3 flavors: 8-port, 16-port, and the 24-port I got. My 24-port model actually has three 8-port controllers on it, plus a PLX PCIe switch/hub thingy, so the OS sees it as 3 separate cards connected via a hub. The secret here is that each of these controllers is basically the same as the AOC-SASLP-MV8.
The reason I don't recommend this path (besides the $$$ cost and high power consumption) is that there is a known issue with certain Seagate drives on LSI controllers, and it also affects the MV8, which means my controller is affected as well. The issue is that certain 8TB/10TB Ironwolf ST8000 drives will fail out of your array if you're on Unraid 6.9.x. That's why I'm still running 6.8.3, as I have two affected drives. It was really scary to have two drives fake-fail at the same time; I thought I was going to lose data. There are some workarounds, but I haven't tried them yet.
One thing you need to consider for controller cards is whether you're going the SAS route (requires SAS bays, probably a whole SAS-compliant server case) or just individual SATA cables (sounds like what you're already doing). I think Jamie ended up taking the SATA route. You can easily mix and match SATA ports on your motherboard with extra SATA ports on a controller card, minimizing how many cards you need. If you get a SAS controller (like the LSI 16-port model you linked) you will need to use breakout cables (1 SAS to 4 SATA) unless you have bays/backplanes with SAS connectors.
I just double-checked the controller card I recommended to Jamie, and it is the popular LSI 9201-8i. So Jamie, heads up if you have any 8TB Seagate Ironwolf type drives. These cards are cheap (if you can still find them), I think around $60-$70 each, but I haven't looked in a long time. Since they are LSI, you may want to avoid them if you have any Seagate 8TB or 10TB drives.
The LSI00244 that you linked looks like a newer version of my card, just in a 16-port flavor. Again, it's LSI (hard to avoid, it seems), so there's a good chance it will have the dreaded Seagate 8TB drive issue. That price is high for an HBA, but still way cheaper than my 24-port card (I think I paid over $700 for it). If you have the spare slots, multiple 8-port cards like the LSI 9201-8i make more sense cost-wise. There's also a 16-port version, the LSI 9201-16i. (HAHA, I just read that the LSI00244 IS the 9201-16i, so I've been talking about the same card all along - the price you found seems high.)
The main thing you want to be cognizant of (and I haven't done the math for you) is that you don't overtax the available PCIe bandwidth. You need to double-check how many lanes and what PCIe generation a card uses (and your motherboard provides), then compare that against the combined throughput of all the drives on the card. For example, if you have 16 large & fast SATA II drives, they might peak out around 200 MB/s each, so you would need about 3.2 GB/s of PCIe connectivity. PCIe 2.0 x4 only provides about 2 GB/s, so you would have a bottleneck. So a PCIe 2.0 x4 card is suitable for up to 8 such SATA drives, but not 16. You need either PCIe 2.0 x8, or PCIe 3.0 x4, to have no bottlenecks with a 16-port card.
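If it helps, the back-of-the-envelope math above can be sketched as a few lines of Python. The per-lane figures and the 200 MB/s per-drive peak are rough assumptions (approximate usable throughput after encoding overhead), not benchmarks:

```python
# Rough PCIe vs. aggregate drive bandwidth sanity check.
# Per-lane usable throughput in GB/s (approximate, after encoding overhead).
PCIE_GBPS_PER_LANE = {1.0: 0.25, 2.0: 0.5, 3.0: 0.985}

def pcie_bandwidth_gbps(gen, lanes):
    """Approximate usable bandwidth of a PCIe link in GB/s."""
    return PCIE_GBPS_PER_LANE[gen] * lanes

def drives_peak_gbps(n_drives, per_drive_mbps=200):
    """Aggregate peak throughput if all drives stream at once, in GB/s."""
    return n_drives * per_drive_mbps / 1000

# Example from above: 16 drives at ~200 MB/s each on a PCIe 2.0 x4 card.
need = drives_peak_gbps(16)          # 3.2 GB/s needed
have = pcie_bandwidth_gbps(2.0, 4)   # ~2.0 GB/s available -> bottleneck
print(f"need {need:.1f} GB/s, have {have:.1f} GB/s, bottleneck: {need > have}")
```

Swapping in `pcie_bandwidth_gbps(2.0, 8)` or `pcie_bandwidth_gbps(3.0, 4)` shows either link clears the 3.2 GB/s requirement, which is why those configurations avoid the bottleneck.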
I'm sure you know this, but I'll state it anyway just in case. You need to verify the PCIe lane wiring of both the card and the individual motherboard slot. Some cards and/or slots are physically x16 but only wired for x8 or x4. On some motherboards, an x16 slot drops back to x4 if you enable certain other components that consume some of those lanes.
That LSI00244 / 9201-16i is a PCIe 2.0 x8 card, so it has sufficient bandwidth if you put it in the right motherboard slot. I know I kept stating that the LSI cards have a known issue with 8TB Seagate drives, yet I've provided no alternatives. I guess I'm out of the loop; I haven't looked at hardware much in years. There are some subsections on the Unraid forum where this stuff is discussed a lot, so for the latest recommendations I'd go there:
https://forums.unraid.net/forum/33-storage-devices-and-controllers/
There's also some general wisdom to avoid SATA port multipliers (massive bottlenecks) and also to avoid any Marvell controllers. Somehow my card shows up as a Marvell controller (Fix Common Problems always warns me), yet I also have the Seagate 8TB issue (maybe it's not just limited to LSI cards, what do I know...).
If you're not confused by now, then you haven't been paying attention. I know I've certainly succeeded in confusing myself...
Manni wrote: ↑Thu Dec 23, 2021 10:07 am
I've also found this case that looks great for up to 18-20 drives, so I could have 16 drives on the SATA board and a few SSDs using the MB SATA ports.
https://www.amazon.co.uk/gp/product/B08 ... UTF8&psc=1
That's a pretty case, but I don't think I would recommend it. Swapping drives is likely to be a pain. I think you need a solution using hot-swap bays. You can build your own, but I don't think this case can accommodate those bays.
It's definitely your choice. I know availability and price are factors, and that Fractal is available and cheap. But I personally feel the tradeoff is too great. This goes back to the case discussions that Jamie and I had earlier in this thread. My perspective hasn't changed. Most new cases these days do not have lots of external 5.25" drive bays, which is what you need for installing those hot-swap 3.5" bays. So you either need to buy an older case design, or buy something purpose-built for lots of external 3.5" drives.
Unfortunately, it seems like some of the manufacturers have left this space over the past few years, so your path will be harder.