RAID - mdadm - Growing an array
If a RAID is running out of space, additional disks can be added to the RAID to grow the array.
- Multiple drives can be added at once to grow the RAID by a larger amount if needed.
NOTE: The new drives need to be at least the same size as the existing members (any extra space on a larger drive will simply go unused).
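Before adding anything, it can be worth confirming that the new disks really are big enough. A quick check (the device names are just the ones used in this example, and these commands were not part of the original run):

# Compare device sizes in bytes; the new disks (sdg and sdh here) should be
# at least as large as the existing members (sdb through sdf here).
lsblk -b -d -o NAME,SIZE /dev/sd[b-h]

# Or query a single device directly:
blockdev --getsize64 /dev/sdg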
Initial Array
mdadm --detail /dev/md0
returns:
/dev/md0:
        Version : 1.2
  Creation Time : Tue Sep 6 18:31:41 2011
     Raid Level : raid6
     Array Size : 3144192 (3.00 GiB 3.22 GB)
  Used Dev Size : 1048064 (1023.67 MiB 1073.22 MB)
   Raid Devices : 5
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Thu Sep 8 18:54:26 2011
          State : clean
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : raidtest.loc:0  (local to host raidtest.loc)
           UUID : e0748cf9:be2ca997:0bc183a6:ba2c9ebf
         Events : 2058

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       5       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde
       6       8       80        4      active sync   /dev/sdf
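For a shorter view of the array at any point in this procedure, /proc/mdstat can be checked as well (its output was not captured for this example):

# One-line-per-array summary, including any resync/reshape progress bars.
cat /proc/mdstat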
Add more drives
mdadm --add /dev/md0 /dev/sdg /dev/sdh
returns:
mdadm: added /dev/sdg
mdadm: added /dev/sdh
NOTE: In this example 2 drives are added to the RAID.
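Drives added with --add sit in the array as spares until the grow step below. A quick way to confirm that (a sketch, not captured for this example):

# The new disks should show a "spare" state and the "Spare Devices" count
# should now read 2; the --grow step below turns them into active members.
mdadm --detail /dev/md0 | grep -i spare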
Grow the array
mdadm --grow /dev/md0 --raid-devices=7
returns:
mdadm: Need to backup 7680K of critical section..
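The reshape itself runs in the background and can take a long time on real disks. Two optional variations on this step, sketched here but not used in this example: mdadm can keep the critical-section backup in a file on another device, and the progress can be watched from /proc/mdstat.

# Optional: store the critical-section backup on a separate device
# (the path is illustrative and must NOT be on the array being reshaped).
mdadm --grow /dev/md0 --raid-devices=7 --backup-file=/root/md0-grow.backup

# Watch the reshape progress until it completes.
watch -n 10 cat /proc/mdstat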
Expand the File System Volume
A RAID device is like a hard drive.
- Just because the drive is now bigger does not mean the file system sees the extra space!
- Therefore, the file system volume needs to be expanded.
- This procedure has nothing to do with mdadm; it depends on the file system in use (an example for a non-ext file system is sketched after the resize2fs output below).
resize2fs /dev/md0
returns:
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/md0 is mounted on /mnt/md0; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/md0 to 1310080 (4k) blocks.
The filesystem on /dev/md0 is now 1310080 blocks long.
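resize2fs only applies to the ext family of file systems used here. As a hedged illustration, if the array had been formatted with XFS instead, the equivalent online grow would be (XFS resizes by mount point, not by device):

# XFS equivalent, illustrative only; /mnt/md0 is the mount point from this example.
xfs_growfs /mnt/md0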
Display Disks
df -hl
returns:
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             7.9G  2.9G  4.7G  39% /
tmpfs                 499M     0  499M   0% /dev/shm
/dev/sda1             194M   25M  159M  14% /boot
/dev/md0              5.0G   70M  4.7G   2% /mnt/md0
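The 5.0G now reported for /dev/md0 matches what RAID 6 should provide: usable capacity is (number of devices - 2) multiplied by the per-device size. A quick sanity check using the Used Dev Size from the mdadm output:

# RAID 6 usable capacity = (N - 2) * per-device size.
# With 7 devices of 1048064 KiB each:
echo $(( (7 - 2) * 1048064 ))   # 5240320 KiB, the Array Size mdadm reports (~5 GiB)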
Result
NOTE: The disk failure has been dealt with.
- The original 4-disk RAID has been changed to a 5-disk array and converted to RAID 6.
- Two more disks were then added, growing the array to 7 devices and increasing its size by 2GB.
- All of this was done without ever bringing the file system offline. VMs or any other data could have kept running on the array throughout, and the individual procedures could be spread out over any length of time, even several years, without the array ever needing to go offline.
There is much more to explore, such as chunk sizes and other settings, but this article only scratches the surface of mdadm. I have been using mdadm for over 5 years without any issues. My data has survived some pretty harsh crashes due to various hardware failures, hard power-downs caused by a failing UPS, and other such events. At one point I even lost 2 drives out of a RAID 5, forcing the array offline due to an unexplainable hardware issue; the drives themselves had not failed. I was later able to bring both drives back into the array, rebuild, and run a fsck on the file system, and short of the VMs and other files that were active at the time, all the data was fine. I restored the backups of the VMs and was ready to rock and roll. It is truly a resilient system, and I actually recommend it over hardware RAID for mass data storage simply because you are not relying on a specific controller, and you can even span arrays across multiple controllers.
The "mdadm: Need to backup 7680K of critical section.." message returned by the --grow command almost looks like an error, but it is nothing to worry about. As the --detail output below shows, first during the reshape and then after it has finished, the grow worked:
[root@raidtest ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Sep 6 18:31:41 2011
     Raid Level : raid6
     Array Size : 3144192 (3.00 GiB 3.22 GB)
  Used Dev Size : 1048064 (1023.67 MiB 1073.22 MB)
   Raid Devices : 7
  Total Devices : 7
    Persistence : Superblock is persistent

    Update Time : Thu Sep 8 18:58:53 2011
          State : clean, recovering
 Active Devices : 7
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

 Reshape Status : 7% complete
  Delta Devices : 2, (5->7)

           Name : raidtest.loc:0  (local to host raidtest.loc)
           UUID : e0748cf9:be2ca997:0bc183a6:ba2c9ebf
         Events : 2073

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       5       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde
       6       8       80        4      active sync   /dev/sdf
       8       8      112        5      active sync   /dev/sdh
       7       8       96        6      active sync   /dev/sdg
[root@raidtest ~]#
(later, once the reshape has completed)
[root@raidtest ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Sep 6 18:31:41 2011
     Raid Level : raid6
     Array Size : 5240320 (5.00 GiB 5.37 GB)
  Used Dev Size : 1048064 (1023.67 MiB 1073.22 MB)
   Raid Devices : 7
  Total Devices : 7
    Persistence : Superblock is persistent

    Update Time : Thu Sep 8 19:01:15 2011
          State : clean
 Active Devices : 7
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : raidtest.loc:0  (local to host raidtest.loc)
           UUID : e0748cf9:be2ca997:0bc183a6:ba2c9ebf
         Events : 2089

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       5       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde
       6       8       80        4      active sync   /dev/sdf
       8       8      112        5      active sync   /dev/sdh
       7       8       96        6      active sync   /dev/sdg