Adding in a new 4TB disk

This is slow – it’s been going for almost 24 hours, and is set to finish in another 2355 minutes!

Synology_BG> cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 

md2 : active raid5 sdd5[3] sdc5[2] sda5[1] sdb5[0]

      7804567296 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

      [========>...........]  reshape = 43.6% (1704858880/3902283648) finish=2355.7min speed=15546K/sec
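As a sanity check, the finish estimate is just the remaining amount divided by the current speed – a quick bit of shell arithmetic using the numbers from the mdstat line above reproduces it:

```shell
# KB still to reshape, from the (done/total) figures above
remaining=$((3902283648 - 1704858880))
# Minutes left at 15546 KB/s
echo $((remaining / 15546 / 60))   # → 2355
```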

Checking system variables:

 

Synology_BG> cat /proc/sys/dev/raid/speed_limit_min

100000

Synology_BG> cat /proc/sys/dev/raid/speed_limit_max

200000

Synology_BG> cat /sys/block/md2/md/stripe_cache_size

1024
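The two speed_limit values are in KB/s, and Synology already sets them generously here. On stock Linux distros the default floor is much lower (speed_limit_min is typically 1000), so on other systems they can be raised on the fly – a sketch, assuming root (the values below are just examples):

```shell
# Raise the md resync/reshape speed floor and ceiling (KB/s, example values)
sysctl -w dev.raid.speed_limit_min=100000
sysctl -w dev.raid.speed_limit_max=500000

# Equivalent via procfs:
echo 100000 > /proc/sys/dev/raid/speed_limit_min
```

These settings are not persistent across reboots unless added to sysctl configuration.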

Increasing stripe_cache_size:

Synology_BG> echo 16384 >/sys/block/md2/md/stripe_cache_size
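Worth knowing before going too big: per the kernel's md documentation, stripe_cache_size is counted in pages per device, so the cache costs roughly page_size × nr_disks × stripe_cache_size of RAM. For this four-disk array with 4 KiB pages, 16384 entries works out to:

```shell
# page_size * nr_disks * stripe_cache_size, converted to MiB
echo $((4096 * 4 * 16384 / 1024 / 1024))   # → 256
```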

Notice the new finish time – still long but down from 2355 minutes to 1124 minutes!

 

Synology_BG> cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 

md2 : active raid5 sdd5[3] sdc5[2] sda5[1] sdb5[0]

      7804567296 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

      [========>...........]  reshape = 43.9% (1716963456/3902283648) finish=1124.0min speed=32402K/sec
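That's roughly double the original throughput – a quick check against the earlier speed figure:

```shell
# New speed as a percentage of the old (32402 vs 15546 KB/s)
echo $((32402 * 100 / 15546))   # → 208
```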

It can also be seen in the Synology admin interface, where the disk utilisation is now in the region of 80%–90%.

 

UPDATE:

I tried doubling the stripe cache size one more time:

Synology_BG> echo "32768" > stripe_cache_size

Synology_BG> cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 

md2 : active raid5 sdd5[3] sdc5[2] sda5[1] sdb5[0]

      7804567296 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

      [=============>......]  reshape = 69.1% (2699382912/3902283648) finish=272.7min speed=73492K/sec

 

This pushed the disk utilisation up even further, but it also seems to starve almost everything except the rebuild – even the terminal interface slows down to a crawl.

But there seems to be memory left, and the estimate to finish went down to about four and a half hours!
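The likely culprit is memory pressure: by the kernel documentation's sizing rule for the stripe cache (page_size × nr_disks × stripe_cache_size), 32768 entries on this four-disk array pins a sizeable chunk of RAM:

```shell
# Stripe cache footprint at 32768 entries, in MiB
echo $((4096 * 4 * 32768 / 1024 / 1024))   # → 512
```

Half a gigabyte is a lot on a small NAS, which may well explain why everything else crawled.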

 

Update 2:

Reducing stripe_cache_size back to 16384 gave me back control of the terminal window and the web interface, and actually kept the disk utilisation around 100%.

Volume utilisation went down slightly though.

But throughput remained high.

Go figure.