<code bash>
Filesystem            Size  Used Avail Use% Mounted on
...
/dev/md0              5.0G   70M  4.7G   2% /mnt/md0
</code>
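
The 5.0G figure shows the filesystem already filling the enlarged array. For reference, a minimal sketch of the online filesystem grow that produces this, assuming an ext4 filesystem (the filesystem type is an assumption here; an XFS filesystem would use ''xfs_growfs'' instead):

<code bash>
# Grow the ext4 filesystem to fill the enlarged array; ext4 supports this while mounted.
resize2fs /dev/md0

# Confirm the new size.
df -h /mnt/md0
</code>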
----
  
===== Final Result =====
  
<code bash>
mdadm --detail /dev/md0
</code>

returns:

<code bash>
/dev/md0:
        Version : 1.2
  Creation Time : Tue Sep  6 18:31:41 2011
     Raid Level : raid6
     Array Size : 5240320 (5.00 GiB 5.37 GB)
  Used Dev Size : 1048064 (1023.67 MiB 1073.22 MB)
   Raid Devices : 7
  Total Devices : 7
    Persistence : Superblock is persistent

    Update Time : Thu Sep  8 19:01:15 2011
          State : clean
 Active Devices : 7
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : raidtest.loc: (local to host raidtest.loc)
           UUID : e0748cf9:be2ca997:0bc183a6:ba2c9ebf
         Events : 2089

    Number   Major   Minor   RaidDevice State
                   16        0      active sync   /dev/sdb
                   32        1      active sync   /dev/sdc
                   48        2      active sync   /dev/sdd
                   64        3      active sync   /dev/sde
                   80        4      active sync   /dev/sdf
                  112        5      active sync   /dev/sdh
                   96        6      active sync   /dev/sdg
</code>
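
Had the reshape still been running, its progress could have been watched without taking the array offline; the standard tools for that are sketched below (not specific to this setup):

<code bash>
# Kernel-level summary of all md arrays, including reshape/resync progress.
cat /proc/mdstat

# Re-run the detailed view periodically to watch the reshape advance.
watch -n 5 mdadm --detail /dev/md0
</code>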
  
<WRAP info>
**NOTE:** The RAID size has been increased without ever bringing the file system offline.

  * VMs or any other data could be running on the RAID during any of these procedures.
</WRAP>
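
As a recap, the array-level steps behind this result follow the pattern below; the device name is a placeholder and the exact commands used are in the sections above:

<code bash>
# Add the new disk as a spare, then reshape the array onto it.
# /dev/sdX stands in for the newly added disk.
mdadm --add /dev/md0 /dev/sdX
mdadm --grow /dev/md0 --raid-devices=7

# Once the reshape finishes, grow the filesystem on top (see the df output above).
</code>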
  
----