====== RAID - mdadm - Check the RAID status ======
cat /proc/mdstat
returns:
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid6 sdf3[0] sdd3[7] sdg3[6] sdh3[5] sdc3[4] sdb3[2] sda3[3] sde3[1]
      46824429696 blocks super 1.0 level 6, 64k chunk, algorithm 2 [8/8] [UUUUUUUU]
      bitmap: 21/59 pages [84KB], 65536KB chunk
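To follow a resync or rebuild live, the file can simply be polled, for example:

watch -n 5 cat /proc/mdstat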
**NOTE:**
* **Personalities**: What RAID levels the kernel currently supports.
* This can be changed by loading or unloading RAID kernel modules, or by recompiling the kernel (see the module sketch after this list).
* Possible personalities include: [raid0] [raid1] [raid4] [raid5] [raid6] [linear] [multipath] [faulty]
* **[faulty]** is a diagnostic personality; it does not mean there is a problem with the array.
* **md0**: The device is /dev/md0.
* **active**: The RAID is active and started.
* An inactive array is usually faulty. Stopped arrays are not visible here.
* **raid6**: The RAID level of the array, followed by its component devices:
* **sdf3[0]** is device 0.
* **sdd3[7]** is device 7.
* **sdg3[6]** is device 6.
* **sdh3[5]** is device 5.
* **sdc3[4]** is device 4.
* **sdb3[2]** is device 2.
* **sda3[3]** is device 3.
* **sde3[1]** is device 1.
* The order in which the devices appear on this line is not significant; the number in brackets after each device is its RaidDevice role in the array.
* **46824429696 blocks**: The usable size of the array in 1 KiB blocks (44655.26 GiB, matching the Array Size shown by mdadm --detail below).
* **super 1.0**: The array uses a 1.0 [[https://raid.wiki.kernel.org/index.php/RAID_superblock_formats|superblock]].
* **level 6**: Confirms this is a level 6 array.
* **64k chunk**: Has a chunk size of 64k.
* This is the size of the 'chunks' and is only relevant to RAID levels that involve striping (0, 4, 5, 6, 10).
* The address space of the array is conceptually divided into chunks, and consecutive chunks are striped onto neighboring devices: with a 64k chunk, the first 64 KiB of the array lives on the first device, the next 64 KiB on the second, and so on.
* **algorithm 2**: The parity layout algorithm. For RAID 5/6 arrays, algorithm 2 is the default left-symmetric layout, matching the **Layout : left-symmetric** line in the mdadm --detail output below.
* **[8/8]**: The first number is how many devices the array was defined with; the second is how many of those are currently up. A second number lower than the first means the array is degraded.
* **[UUUUUUUU]**: The status of each device, one character per device in RaidDevice order (see the degraded-array check after this list).
* **U**: The device is up and in sync.
* **_**: The device is down or missing; the array is degraded.
* A failed device is not shown as **F** inside the brackets; instead it is flagged with **(F)** after its name on the md line.
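As a minimal sketch of managing personalities on a modular kernel (module names here assume a typical distribution kernel; raid456 provides the raid4, raid5 and raid6 personalities):

lsmod | grep -E '^raid|^linear|^multipath'
sudo modprobe raid456
head -n 1 /proc/mdstat

The last command re-reads the Personalities line to confirm the new levels are available.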
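A degraded array can be spotted from a script by looking for a _ between the status brackets; a minimal sketch:

grep -q '\[U*_[U_]*\]' /proc/mdstat && echo 'WARNING: RAID array degraded'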
----
mdadm --detail /dev/md1
returns:
/dev/md1:
           Version : 1.0
     Creation Time : Tue Mar 6 17:46:54 2018
        Raid Level : raid6
        Array Size : 46824429696 (44655.26 GiB 47948.22 GB)
     Used Dev Size : 7804071616 (7442.54 GiB 7991.37 GB)
      Raid Devices : 8
     Total Devices : 8
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Tue Sep 14 00:02:02 2021
             State : active
    Active Devices : 8
   Working Devices : 8
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 64K

              Name : 1
              UUID : 85d45e53:913fde8a:55ba7e06:ee932838
            Events : 16304341

    Number   Major   Minor   RaidDevice State
       0       8       83        0      active sync   /dev/sdf3
       1       8       67        1      active sync   /dev/sde3
       3       8        3        2      active sync   /dev/sda3
       2       8       19        3      active sync   /dev/sdb3
       4       8       35        4      active sync   /dev/sdc3
       5       8      115        5      active sync   /dev/sdh3
       6       8       99        6      active sync   /dev/sdg3
       7       8       51        7      active sync   /dev/sdd3
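Note that for RAID 6 the reported Array Size is the Used Dev Size multiplied by the number of data devices: (8 - 2) x 7804071616 = 46824429696 blocks. For scripting, or to (re)generate the ARRAY lines for mdadm.conf, each array can also be summarized on a single line:

mdadm --detail --scan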
----