Archive for August, 2015

Adding a new 4TB disk –

This is slow – it’s been going for almost 24 hours, and is set to finish in another 2355 minutes!

Synology_BG> cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 

md2 : active raid5 sdd5[3] sdc5[2] sda5[1] sdb5[0]

      7804567296 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

      [========>...........]  reshape = 43.6% (1704858880/3902283648) finish=2355.7min speed=15546K/sec



Checking system variables:


Synology_BG> cat /proc/sys/dev/raid/speed_limit_min


Synology_BG> cat /proc/sys/dev/raid/speed_limit_max


Synology_BG> cat /sys/block/md2/md/stripe_cache_size
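As an aside – the two speed_limit values are the kernel’s floor and ceiling for rebuild bandwidth, in KB/s. If the reshape is being throttled, raising the floor is one lever to try (the 50000 below is purely an illustrative value, not something taken from this run):

```shell
# Raise the minimum rebuild speed so normal background I/O
# doesn't throttle the reshape as aggressively (units: KB/s).
# 50000 is an illustrative value, not a measured optimum.
echo 50000 > /proc/sys/dev/raid/speed_limit_min
```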




Increasing stripe_cache_size



Synology_BG> echo 16384 >/sys/block/md2/md/stripe_cache_size
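For context, each stripe-cache entry costs roughly one page per member disk, so the memory footprint of this setting can be estimated as entries × 4 KB × disks (a rule of thumb, assuming 4 KB pages and the four-disk array shown above):

```shell
# Rough stripe-cache memory estimate: entries × page size × member disks.
STRIPE_CACHE=16384   # value written above
DISKS=4              # sda5, sdb5, sdc5, sdd5
echo "$(( STRIPE_CACHE * 4096 * DISKS / 1024 / 1024 )) MB"
```

With 16384 entries and four disks that works out to about 256 MB, which is why doubling it again can squeeze the RAM on a small NAS.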



Notice the new finish time – still long but down from 2355 minutes to 1124 minutes!


Synology_BG> cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 

md2 : active raid5 sdd5[3] sdc5[2] sda5[1] sdb5[0]

      7804567296 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

      [========>...........]  reshape = 43.9% (1716963456/3902283648) finish=1124.0min speed=32402K/sec




It can also be seen in the Synology admin interface:


where the disk utilisation is now in the region of 80% – 90%.



I tried doubling the stripe cache size one more time:

Synology_BG> echo 32768 > stripe_cache_size 

Synology_BG> cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 

md2 : active raid5 sdd5[3] sdc5[2] sda5[1] sdb5[0]

      7804567296 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

      [=============>......]  reshape = 69.1% (2699382912/3902283648) finish=272.7min speed=73492K/sec


This increased disk utilisation and throughput again, but it also seemed to starve almost everything other than the rebuild – even the terminal session slowed to a crawl…

But there still seemed to be memory left, and the estimated time to finish dropped to around 4 hours!


Update 2 :

Reducing stripe_cache_size back to 16384 gave me back control of the terminal window and the web interface, and actually kept disk utilisation at around 100%.

Volume utilisation went down slightly though.

But throughput remained high.

Go figure.

Hm! Guild Wars 2 (the base version) just went free to play today!

Play For Free Today | “Today we’re happy to announce that the Guild Wars 2 core game is available for everyone to play for free. With Guild Wars 2: Heart of Thorns™ launching soon, there’s no better time to introduce the game to your friends.”


It looks like there is a new version (2) of the Air Quality Egg :



Air Quality Egg Roadmap | Wicked Device Shop: “Air Quality Egg Roadmap


The Air Quality Egg lets you monitor the air where you live, and see the data on your smart phone or computer. The data is open, and visible at,  fostering a community of people who collect the data necessary to create a global air quality picture. As such, we are committed to making the Air Quality Egg more powerful and easier to use.

With the release of our first v2 Egg next week, it is time to share the release schedule for the next year. We plan to release sensors for all EPA Criteria Pollutants identified in the National Ambient Air Quality Standards. The models are:

Model A (CO, NO2)    Nitrogen Dioxide (NO2), Carbon Monoxide (CO) – May 2015
Model B (O3, SO2)    Ozone (O3), Sulfur Dioxide (SO2) – July 2015
Model C (Particles)  Dust Particulates (PM) – September 2015
Model D (VOC)        Volatile organic compounds (VOC) – January 2016
Future model #       Lead (Pb). May be added to an existing model, or a new model.

The new Egg features Wi-Fi, has increased accuracy, shows data in real time on its LCD panel, and comes with pre-calibrated sensors. And of course, it is still Open Source. For the full product description, check out the product page.


Carbon Copy Cloner is my “go to” application for copying or syncing large numbers of files between Macs, regardless of their location – it works reliably and fast across the internet as well. I’ve used it for more years than I care to remember, and it’s saved me countless times.

But when I copied all my files from a set of disk volumes across to the Synology I saw a big difference between using CCC and just using the Finder. I easily reached 40MB/s with a Finder copy, but less than 20MB/s using CCC.

Then I found this explanation.

Eject the network volume in the Finder

Our first recommendation is to eject your network shares in the Finder, then run your task again. We have run several tests and positively identified an issue in which the Finder will make repeated and ceaseless access attempts to the items of a folder on your network share if you simply open the network volume in the Finder. This persists even after closing the window. This is a Finder bug, and it exists in both Mavericks and Yosemite. If you eject the network volume(s), then run your CCC backup tasks, CCC will mount the network volume privately such that it is not browseable in the Finder.
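On the command line, that eject step looks something like this (“Backup” is a hypothetical share name – substitute your own mount point):

```shell
# Unmount the network share so the Finder stops polling it;
# "/Volumes/Backup" is a placeholder path, not a real volume here.
diskutil unmount "/Volumes/Backup"
```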

Hm! After my previous posting on the speed of adding a new disk to my Synology disk station I had a closer look at the stats.

The Resource Monitor seems to indicate that the disks are running at close to 100% utilisation, and I suspect the reason is in the image below from the SSH window.


I started off with two disks in RAID1 (mirroring) and it’s now converting the array to RAID5 – which is no small task.





I just put in my 3rd 4TB disk yesterday – and today I see


so in roughly 24 hours it has finished only around 15% of whatever it needs to do (a parity rebuild, I assume) – probably another two days until I can use the extra storage.



To see the progress of the expansion, log in as root via telnet or SSH and enter cat /proc/mdstat.
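If you only want the remaining time, a small awk helper can pull it out of the progress line and convert it to hours (a sketch that assumes the finish=NNN.Nmin format shown in the mdstat output above; mdstat_eta is a made-up name):

```shell
# Extract mdstat's finish=NNN.Nmin field and print it in hours.
mdstat_eta() {
    awk -F'finish=' '/finish=/ { split($2, a, "min"); printf "%.1f hours\n", a[1]/60 }'
}

mdstat_eta < /proc/mdstat
```

During the reshape above this would have printed roughly 18.7 hours for the 1124-minute estimate.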



I’m not a beekeeper, and therefore no expert – but this looks like a fantastically clean way of extracting honey from the frames in a beehive: