Date created: Thursday, October 7, 2021 11:09:31 AM. Last modified: Wednesday, August 23, 2023 8:38:41 AM

HTPCv3/Pi-NAS

Move Software RAID from Old NAS

Hard disk order in existing / legacy NAS:

gdisk -l /dev/sdb
Disk identifier (GUID): BC49445F-2559-4B6B-8387-09ACA853DB25

gdisk -l /dev/sdc
Disk identifier (GUID): 7D92F6A3-B46C-474C-8210-A86F5A8212F7

gdisk -l /dev/sdd
Disk identifier (GUID): B83091EF-B6EF-4F44-89A5-18C31E307701

gdisk -l /dev/sde
Disk identifier (GUID): 25EC6A05-8839-4F9D-801C-4860905FADCB

Match the UUIDs and /dev/sdX names to the physical disk serial numbers:

$ udevadm info /dev/sdb | grep -E "ID_SERIAL|UUID"
E: ID_SERIAL=WDC_WD30EFRX-68EUZN0_WD-WMC4N1475240
E: ID_SERIAL_SHORT=WD-WMC4N1475240
E: ID_PART_TABLE_UUID=bc49445f-2559-4b6b-8387-09aca853db25

$ udevadm info /dev/sdc | grep -E "ID_SERIAL|UUID"
E: ID_SERIAL=WDC_WD30EFRX-68EUZN0_WD-WMC4N1911391
E: ID_SERIAL_SHORT=WD-WMC4N1911391
E: ID_PART_TABLE_UUID=7d92f6a3-b46c-474c-8210-a86f5a8212f7

$ udevadm info /dev/sdd | grep -E "ID_SERIAL|UUID"
E: ID_SERIAL=WDC_WD30EFRX-68EUZN0_WD-WMC4N1954715
E: ID_SERIAL_SHORT=WD-WMC4N1954715
E: ID_PART_TABLE_UUID=b83091ef-b6ef-4f44-89a5-18c31e307701

$ udevadm info /dev/sde | grep -E "ID_SERIAL|UUID"
E: ID_SERIAL=WDC_WD30EFRX-68EUZN0_WD-WCC4N0663527
E: ID_SERIAL_SHORT=WD-WCC4N0663527
E: ID_PART_TABLE_UUID=25ec6a05-8839-4f9d-801c-4860905fadcb
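
The same matching can be scripted; a quick loop over the four data disks (adjust the /dev/sd[b-e] glob to suit your system):

$ for d in /dev/sd[b-e]; do echo "== $d =="; udevadm info "$d" | grep -E "ID_SERIAL_SHORT|ID_PART_TABLE_UUID"; done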


On the new machine /dev/sd[b,c,d,e] are /dev/sd[g,h,i,j] respectively.
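
Rather than tracking the unstable /dev/sdX names, the symlinks under /dev/disk/by-id embed the same serial numbers matched above and persist across reboots and machines (the prefix, e.g. ata- or usb-, depends on how the disk is attached):

$ ls -l /dev/disk/by-id/ | grep WD30EFRX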

With the disks in the new machine, and the mdadm.conf file copied over, the following command failed:

$ sudo mdadm --assemble --scan --verbose
mdadm: looking for devices for /dev/md/0
mdadm: No super block found on /dev/sdg (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdg
mdadm: No super block found on /dev/sdj (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdj
mdadm: No super block found on /dev/sdi (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdi
mdadm: No super block found on /dev/sdh (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdh
...
mdadm: Merging with already-assembled /dev/md/htpc-ubuntu:0
mdadm: /dev/sdg1 is identified as a member of /dev/md/htpc-ubuntu:0, slot 0.
mdadm: /dev/sdj1 is identified as a member of /dev/md/htpc-ubuntu:0, slot 3.
mdadm: /dev/sdi1 is identified as a member of /dev/md/htpc-ubuntu:0, slot 2.
mdadm: /dev/sdh1 is identified as a member of /dev/md/htpc-ubuntu:0, slot 1.
mdadm: failed to add /dev/sdh1 to /dev/md/htpc-ubuntu:0: Device or resource busy
mdadm: failed to add /dev/sdi1 to /dev/md/htpc-ubuntu:0: Device or resource busy
mdadm: failed to add /dev/sdj1 to /dev/md/htpc-ubuntu:0: Device or resource busy
mdadm: failed to add /dev/sdg1 to /dev/md/htpc-ubuntu:0: Device or resource busy
mdadm: failed to RUN_ARRAY /dev/md/htpc-ubuntu:0: Input/output error

The OS had automatically tried to assemble the array as /dev/md127. That device just needed to be stopped, after which the RAID was detected and started just fine:

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md127 : inactive sde1[2] sdc1[1] sdd1[0] sdb1[4]
      11720536064 blocks super 1.2

$ sudo mdadm --stop /dev/md127
mdadm: stopped /dev/md127

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
unused devices: <none>

$ sudo mdadm --assemble --scan --verbose
mdadm: looking for devices for /dev/md/0
mdadm: No super block found on /dev/sdg (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdg
mdadm: No super block found on /dev/sdj (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdj
mdadm: No super block found on /dev/sdi (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdi
mdadm: No super block found on /dev/sdh (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdh
....
mdadm: /dev/sdg1 is identified as a member of /dev/md/0, slot 0.
mdadm: /dev/sdj1 is identified as a member of /dev/md/0, slot 3.
mdadm: /dev/sdi1 is identified as a member of /dev/md/0, slot 2.
mdadm: /dev/sdh1 is identified as a member of /dev/md/0, slot 1.
mdadm: added /dev/sdh1 to /dev/md/0 as 1
mdadm: added /dev/sdi1 to /dev/md/0 as 2
mdadm: added /dev/sdj1 to /dev/md/0 as 3
mdadm: added /dev/sdg1 to /dev/md/0 as 0
mdadm: /dev/md/0 has been started with 4 drives.

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active (auto-read-only) raid5 sdg1[0] sdj1[4] sdi1[2] sdh1[1]
      8790401640 blocks super 1.2 level 5, 4k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/22 pages [0KB], 65536KB chunk

unused devices: <none>

$ /usr/share/mdadm/checkarray --status /dev/md0
md0: idle
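
The same checkarray script can also start a manual consistency check (this is what Debian's periodic cron job runs; expect it to take hours on 3TB disks):

$ sudo /usr/share/mdadm/checkarray /dev/md0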


If the old mdadm.conf file is not available, we can create a new one. Add the following to /etc/mdadm/mdadm.conf:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

Then run the following (as root) to append the disk array definition to the file:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf
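
On Debian/Ubuntu the initramfs carries its own copy of mdadm.conf, so refresh it after any edits or the array may still be assembled as md127 (as seen above) on the next boot:

update-initramfs -u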


Add the array to /etc/fstab and mount it; this remounts the RAID read-write (note it was auto-read-only above; "mdadm --readwrite /dev/md0" would also do it, but only until the array is next stopped). The nofail option means the system won't stall at boot if the array isn't present:
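
If the mountpoint doesn't already exist, create it first:

$ sudo mkdir -p /media/9TB-Array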

$ echo "/dev/md0 /media/9TB-Array ext4 rw,auto,nofail,errors=remount-ro,nosuid,nodev,uhelper=udisks2 0 0" >> /etc/fstab

$ mount -a

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdg1[0] sdj1[4] sdi1[2] sdh1[1]
      8790401640 blocks super 1.2 level 5, 4k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/22 pages [0KB], 65536KB chunk

unused devices: <none>


$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Mon Mar  3 14:14:29 2014
        Raid Level : raid5
        Array Size : 8790401640 (8383.18 GiB 9001.37 GB)
     Used Dev Size : 2930133880 (2794.39 GiB 3000.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Thu Oct 14 12:51:45 2021
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 4K

Consistency Policy : bitmap

              Name : htpc-ubuntu:0
              UUID : 8ea128cd:47130c7a:9fb4a5a5:66750b6b
            Events : 16225

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       49        1      active sync   /dev/sdd1
       2       8       65        2      active sync   /dev/sde1
       4       8       33        3      active sync   /dev/sdc1

(The member names here are /dev/sd[b-e] again rather than the sd[g-j] seen during assembly; kernel /dev/sdX names are not stable across reboots, which is why fstab references /dev/md0 instead of the member disks.)


Speed Test

USB is not a great protocol for disk access, but a 4x3TB software RAID-5 array over a single USB3 port on a RasPi4 is not as slow as one might expect:

Write:
$ dd status=progress if=/dev/zero of=/media/9TB-Array/zeros bs=1M count=1000
972029952 bytes (972 MB, 927 MiB) copied, 9 s, 108 MB/s
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 9.61399 s, 109 MB/s

$ dd status=progress if=/dev/urandom of=/media/9TB-Array/urand bs=1M count=1000
1048576000 bytes (1.0 GB, 1000 MiB) copied, 57 s, 18.4 MB/s
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 57.051 s, 18.4 MB/s

(The ~18 MB/s urandom write above is likely limited by how fast the Pi's CPU can draw data from /dev/urandom, not by the array.)

Read (the first two runs use dd's default 512-byte block size, which is why throughput is lower than in the bs=1M runs that follow):

$ dd status=progress if=/media/9TB-Array/zeros of=/dev/null
990137856 bytes (990 MB, 944 MiB) copied, 13 s, 76.2 MB/s
2048000+0 records in
2048000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 13.9548 s, 75.1 MB/s

$ dd status=progress if=/media/9TB-Array/urand of=/dev/null
1023459328 bytes (1.0 GB, 976 MiB) copied, 18 s, 56.9 MB/s
2048000+0 records in
2048000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 18.4569 s, 56.8 MB/s

$ dd status=progress if=/media/9TB-Array/zeros of=/dev/null bs=1M count=1000
1009778688 bytes (1.0 GB, 963 MiB) copied, 7 s, 144 MB/s
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 7.33777 s, 143 MB/s

$ dd status=progress if=/media/9TB-Array/urand2 of=/dev/null bs=1M count=1000
965738496 bytes (966 MB, 921 MiB) copied, 8 s, 121 MB/s
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 8.66835 s, 121 MB/s
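
One caveat on the read figures: Linux caches file data in RAM, so reading back a freshly written file can end up measuring the page cache rather than the disks. Dropping the cache between runs keeps the test honest:

$ sudo bash -c "sync; echo 3 > /proc/sys/vm/drop_caches"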


Rebuilding A Disk

If a disk has fallen out of sync with the array, but is not faulty, it can be re-added to the array to trigger a rebuild of that disk:

$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sdd1[1] sda1[0] sdb1[2]
      8790401640 blocks super 1.2 level 5, 4k chunk, algorithm 2 [4/3] [UUU_]
      bitmap: 11/22 pages [44KB], 65536KB chunk

unused devices: <none>
$ dmesg
...
[   16.657864] sdb: sdb1
[   16.658648] sd 1:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
[   16.659270] sd 1:0:0:0: [sdb] Attached SCSI disk
[   16.688640] sda: sda1
[   16.689570] sd 0:0:0:0: [sda] Very big device. Trying to use READ CAPACITY(16).
[   16.690893] sd 0:0:0:0: [sda] Attached SCSI disk
[   16.843467] md: bind
[   16.902523] sdc: sdc1
[   16.903293] sd 2:0:0:0: [sdc] Very big device. Trying to use READ CAPACITY(16).
[   16.904040] sd 2:0:0:0: [sdc] Attached SCSI disk
[   16.967163] sdd: sdd1
[   16.968186] sd 3:0:0:0: [sdd] Very big device. Trying to use READ CAPACITY(16).
[   16.969042] sd 3:0:0:0: [sdd] Attached SCSI disk
[   16.986957] md: bind
[   17.109182] md: bind
[   17.222011] md: bind
[   17.250245] md: kicking non-fresh sdc1 from array!
[   17.250261] md: unbind
[   17.263527] md: export_rdev(sdc1)
[   17.268311] md/raid:md0: device sdd1 operational as raid disk 1
[   17.268316] md/raid:md0: device sda1 operational as raid disk 0
[   17.268318] md/raid:md0: device sdb1 operational as raid disk 2
[   17.271160] md/raid:md0: allocated 4362kB
[   17.271316] md/raid:md0: raid level 5 active with 3 out of 4 devices, algorithm 2
[   17.273380] RAID conf printout:
[   17.273382]  --- level:5 rd:4 wd:3
[   17.273385]  disk 0, o:1, dev:sda1
[   17.273387]  disk 1, o:1, dev:sdd1
[   17.273390]  disk 2, o:1, dev:sdb1
[   17.274980] created bitmap (22 pages) for device md0
[   17.276862] md0: bitmap initialized from disk: read 2 pages, set 123 of 44711 bits
[   17.299383] md0: detected capacity change from 0 to 9001371279360
[   18.083510] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: errors=remount-ro
...

$ sudo umount /media/9TB-Array/

$ sudo mdadm --manage /dev/md0 --re-add /dev/sdc1
mdadm: re-added /dev/sdc1

$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc1[4] sdd1[1] sda1[0] sdb1[2]
      8790401640 blocks super 1.2 level 5, 4k chunk, algorithm 2 [4/3] [UUU_]
      [>....................]  recovery =  0.1% (4204764/2930133880) finish=689.0min speed=70766K/sec
      bitmap: 11/22 pages [44KB], 65536KB chunk

unused devices: <none>

$ sudo bash -c "echo 500000 > /proc/sys/dev/raid/speed_limit_min"
$ sudo bash -c "echo 500000 > /proc/sys/dev/raid/speed_limit_max"

# Later, when it's finished:
$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc1[4] sdd1[1] sda1[0] sdb1[2]
      8790401640 blocks super 1.2 level 5, 4k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/22 pages [0KB], 65536KB chunk

unused devices: <none>
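
The speed_limit_min/max values set above don't survive a reboot; to make them permanent they can go in a sysctl drop-in (the file name here is arbitrary):

$ sudo bash -c "echo dev.raid.speed_limit_min=500000 >> /etc/sysctl.d/90-raid-speed.conf"
$ sudo bash -c "echo dev.raid.speed_limit_max=500000 >> /etc/sysctl.d/90-raid-speed.conf"
$ sudo sysctl --system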
