Recently, I decided I needed something other than a test vSAN node or locally attached drives, since I'm spending a lot of time (way too much) developing the items I blog about, not even counting the time I spend setting up the infrastructure in my homelab (vRA, HPOO, AD, DNS, etc.).

This decision led me to search for a NAS unit, and I was quickly shocked by the price point: roughly $1,000 for anything with a capacity larger than 4 or 5 drives, and that's without the actual drives. I then started to look at DIY NAS solutions like FreeNAS, NAS4Free, etc. That led me to a project called XPENology, where someone figured out how to load Synology's DSM on DIY hardware. Pretty neat! (I may write a blog post about this in the future. If you would like to see that, let me know.)

Long story short(er), I built myself an XPENology box. In the process, I created a RAID 10 array on four 3TB 7200RPM HDDs. During the creation of the RAID Group, I executed a RAID parity check, which was estimated to take nearly 8 hours. I thought that was a little excessive, so I started doing some research.

After about 30 minutes of digging on the interwebs, I found a likely cause and a possible solution: check and raise the kernel's RAID throughput maximum.

To see what your kernel's RAID throughput limits are set to, SSH into your box and run the following commands. You can see that on my XPENology DS3615xs the min is configured at 10,000 and the max at 200,000 (both values are in KB/sec).

cat /proc/sys/dev/raid/speed_limit_max
200000
cat /proc/sys/dev/raid/speed_limit_min
10000

As I was running these commands, the RAID parity check was at about 4%.

I then ran 'cat /proc/mdstat' (example below) and noticed that the RAID group being checked was running at 200,000-200,500K/sec. It looked like I was hitting my 'speed_limit_max' of 200,000.
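Here is roughly what that looks like. This output is reconstructed for illustration, so your md device names, array sizes, and exact figures will differ:

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid10 sda5[0] sdb5[1] sdc5[2] sdd5[3]
      5850889088 blocks super 1.2 64K chunks 2 near-copies [4/4] [UUUU]
      [>....................]  check =  4.0% (117017856/2925444544) finish=233.7min speed=200250K/sec

The 'speed=' figure at the end of the check line is the number to watch, and the 'finish=' estimate is driven directly by it.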

Again, while the RAID parity check was running, I executed the command:

echo 400000 > /proc/sys/dev/raid/speed_limit_max

The results were immediate.
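For illustration, here is the sort of before/after you would see on the check line in /proc/mdstat. These numbers are invented to match the roughly 30-40% gain, not copied from my system:

      [>....................]  check =  4.2% (123001856/2925444544) finish=233.4min speed=200150K/sec
      [>....................]  check =  5.1% (149197568/2925444544) finish=171.9min speed=269100K/sec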

As you can see, the speed increased by around 30-40%, which cut a couple of hours off the check. Great success! I would guess that with faster drives (SAS) or SSDs, you could easily go over 400,000K/sec.
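One thing to keep in mind: a value echoed into /proc does not survive a reboot. If you want the higher limit to persist, something like the following should work as a sketch. The dev.raid sysctl keys are standard Linux, but I haven't confirmed where DSM persists sysctl settings:

# Apply immediately (equivalent to the echo above)
sysctl -w dev.raid.speed_limit_max=400000

# Persist across reboots on a typical Linux distro; DSM may or may not read this file
echo "dev.raid.speed_limit_max = 400000" >> /etc/sysctl.conf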

Results will vary greatly based on CPU, chipset, storage controller, RAID config and drive speed.

*This was performed on an XPENology build. I take no responsibility for any changes you make to your devices.