====== RAID - mdadm - LVM on RAID ======
LVM can group multiple disks and RAID arrays into a Volume Group.
- This Volume Group can be split into Logical Volumes; in essence, different partitions.
- LVM makes it very easy to resize these Logical Volumes (see the sketch after this list).
- LVM is not a replacement for RAID, as it only supports simple mirroring and striping.
- Using LVM mirroring for resilience without RAID would require twice as much storage; a waste compared to RAID 5, which only sacrifices one disk's worth of capacity.
- Therefore it is best to use LVM in conjunction with RAID.
NOTE: A filesystem that can grow must be used with LVM; ext4, ReiserFS and XFS all support this.
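To illustrate how easy resizing is, growing a Logical Volume and its filesystem takes just two commands. A minimal sketch, assuming the ext4 volume /dev/myVolumeGroup1/myDataVolume created later on this page, and that the Volume Group has 10G of free space:

<code bash>
# Grow the Logical Volume by 10G
lvextend -L +10G /dev/myVolumeGroup1/myDataVolume

# Grow the ext4 filesystem to fill the enlarged volume (works while mounted)
resize2fs /dev/myVolumeGroup1/myDataVolume
</code>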
----

===== Resizing a RAID Array =====
To resize an existing RAID5:
<code bash>
mdadm --add /dev/md1 /dev/sdb1
mdadm --grow /dev/md1 --raid-disks=4
</code>
NOTE: The --raid-disks=4 is the new total number of disks in the array.
- This will cause the array to restripe itself, which can take a very long time; progress can be watched as shown below.
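The reshape can be monitored, and LVM told about the extra space once it finishes. A sketch, assuming /dev/md1 carries an LVM Physical Volume as set up below:

<code bash>
# Watch the reshape progress (Ctrl-C stops watching; the reshape continues)
watch cat /proc/mdstat

# Once the reshape completes, grow the LVM Physical Volume to use the new space
pvresize /dev/md1
</code>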
----

===== Prepare The Disks =====
<code bash>
fdisk /dev/sda
</code>
and:
- Enter an n to create a new partition.
- Enter an e for an extended partition.
- Enter a p for a primary partition (1-4).
- Enter a 1 for primary partition number 1.
- Accept the default sizes.
- Enter a t to change the partition type.
- Enter fd to change to Linux raid autodetect.
- Enter a w to write the changes.

----

===== Wipe everything =====

<code bash>
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdc1
</code>

----

===== Create the RAID Array =====

<code bash>
mdadm -v --create /dev/md0 --chunk=128 --level=raid5 --raid-devices=4 /dev/sda1 \
  /dev/sdb1 /dev/sdc1 missing
</code>

returns:

<code bash>
mdadm: layout defaults to left-symmetric
mdadm: size set to 245111616K
mdadm: array /dev/md0 started.
</code>
NOTE: The fourth RAID device is given as missing.
- This will create the array as if one of the disks were dead.
- At a later stage the disk can be hot-added, which will cause the array to rebuild itself (see the sketch below).
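A sketch of hot-adding that disk later. The device name /dev/sdd1 is an assumption; use whichever partition the new disk actually provides, prepared as Linux raid autodetect as described above:

<code bash>
# Hot-add the (hypothetical) fourth disk's RAID partition
mdadm --add /dev/md0 /dev/sdd1

# The array now rebuilds onto the new disk; check progress with
cat /proc/mdstat
</code>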
----

===== Check the RAID Array =====

<code bash>
mdadm --detail /dev/md0
</code>

returns:

<code bash>
/dev/md0:
        Version : 00.90.01
  Creation Time : Thu Jun  3 20:24:17 2004
     Raid Level : raid5
     Array Size : 735334656 (701.27 GiB 752.98 GB)
    Device Size : 245111552 (233.76 GiB 250.99 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Jun  3 20:24:17 2004
          State : clean, no-errors
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 128K

    Number   Major   Minor   Raid Device   State
       0       8       1        0          active sync   /dev/sda1
       1       8      17        1          active sync   /dev/sdb1
       2       8      33        2          active sync   /dev/sdc1
       3       0       0       -1          removed

           UUID : d6ac1605:db6659e1:6460b9c0:a451b7c8
         Events : 0.5078
</code>

----

===== Update the mdadm config file =====

<code bash>
echo 'DEVICE /dev/sd*' >/etc/mdadm/mdadm.conf
echo 'PROGRAM /bin/echo' >>/etc/mdadm/mdadm.conf
echo 'MAILADDR some@email_address.com' >>/etc/mdadm/mdadm.conf
mdadm --detail --scan >>/etc/mdadm/mdadm.conf
cat /etc/mdadm/mdadm.conf
</code>
NOTE: This should be done whenever the configuration changes, including adding a spare disk, marking a disk as faulty, etc.
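For illustration, the line that mdadm --detail --scan appends should look something like the following (the exact format varies between mdadm versions; the UUID is the one reported above):

<code bash>
ARRAY /dev/md0 level=raid5 num-devices=4 UUID=d6ac1605:db6659e1:6460b9c0:a451b7c8
</code>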
----

===== Create a LVM Physical Volume =====

<code bash>
pvcreate /dev/md0
</code>

returns:

<code bash>
No physical volume label read from /dev/md0
Physical volume "/dev/md0" successfully created
</code>
NOTE: This makes the RAID Array usable by LVM.
The LVM Physical Volume can be checked by running:

<code bash>
pvdisplay
</code>
----

===== Create a LVM Volume Group =====

<code bash>
vgcreate myVolumeGroup1 /dev/md0
</code>

returns:

<code bash>
Adding physical volume '/dev/md0' to volume group 'myVolumeGroup1'
Archiving volume group "myVolumeGroup1" metadata.
Creating volume group backup "/etc/lvm/backup/myVolumeGroup1"
Volume group "myVolumeGroup1" successfully created
</code>
NOTE: Logical Volumes can now be created from this Volume Group.
NOTE: The LVM Volume Group can be checked by running:

<code bash>
vgdisplay
</code>
----

===== Create a LVM Logical Volume =====

<code bash>
lvcreate -L 100G --name myDataVolume myVolumeGroup1
</code>

returns:

<code bash>
Logical volume "myDataVolume" created
</code>
NOTE: The LVM Logical Volume can be checked by running:

<code bash>
lvdisplay
</code>
----

===== Format the LVM Logical Volume =====

<code bash>
mkfs.ext4 /dev/myVolumeGroup1/myDataVolume
</code>
NOTE: Other filesystems besides ext4 can be used if preferred.
- ReiserFS:
<code bash>
mkfs -t reiserfs /dev/myVolumeGroup1/myDataVolume
</code>
- XFS:
<code bash>
mkfs.xfs /dev/myVolumeGroup1/myDataVolume
</code>
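Whichever filesystem is chosen, remember the note at the top of the page: it must be able to grow. Each filesystem has its own grow tool; XFS, for instance, grows only while mounted, via the mount point. A sketch, assuming the volume is mounted at /mnt/data as in the step below:

<code bash>
# Grow a mounted XFS filesystem to fill its enlarged Logical Volume
xfs_growfs /mnt/data
</code>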
----

===== Mount the LVM Logical Partition =====

<code bash>
mount /dev/myVolumeGroup1/myDataVolume /mnt/data
</code>

----

===== Check the mount =====

<code bash>
df -k
</code>

returns:

<code bash>
Filesystem                        1K-blocks   Used Available Use% Mounted on
/dev/hda2                          19283776 697988  18585788   4% /
/dev/hda1                             97826  13003     79604  15% /boot
/dev/myVolumeGroup1/myDataVolume  629126396  32840 629093556   1% /mnt/data
</code>

----

===== Performance Enhancements / Tuning =====

There may be performance issues with LVM on RAID. A potential fix:

<code bash>
blockdev --setra 4096 /dev/md0
blockdev --setra 4096 /dev/myVolumeGroup1/myDataVolume
</code>
WARNING: This may lock up the machine and destroy data.
- Ensure data is backed up before running this!
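Before tuning, the current readahead values can be recorded so they can be restored if needed; blockdev --getra is the counterpart of --setra:

<code bash>
# Show the current readahead (in 512-byte sectors)
blockdev --getra /dev/md0
blockdev --getra /dev/myVolumeGroup1/myDataVolume
</code>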
----

===== References =====

https://www.mythtv.org/wiki/LVM_on_RAID