ZFS - Troubleshooting - Data Recovery
Assume we have a 2 x 2 mirrored zpool:
sudo zpool create -f testpool mirror /dev/sdc /dev/sdd mirror /dev/sde /dev/sdf
sudo zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
Now populate it with some data and checksum it:
dd if=/dev/urandom of=/testpool/random.dat bs=1M count=4096
md5sum /testpool/random.dat
f0ca5a6e2718b8c98c2e0fdabd83d943  /testpool/random.dat
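It is worth saving this checksum so the data can be re-verified once the recovery steps are complete, for example (the location under /root is just an assumption):

md5sum /testpool/random.dat > /root/random.dat.md5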
Now we simulate catastrophic data loss by overwriting one of the VDEV devices with zeros:
sudo dd if=/dev/zero of=/dev/sde bs=1M count=8192
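Because mirror-1 still has an intact copy on /dev/sdf, reads of the file are expected to keep succeeding; ZFS only notices the damage as blocks on /dev/sde are read or scrubbed. A quick sanity check (expected behaviour, not output captured from this system):

md5sum /testpool/random.dat
sudo zpool status testpool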
And now initiate a scrub:
sudo zpool scrub testpool
And check the status:
sudo zpool status
  pool: testpool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-9P
  scan: scrub in progress since Tue May 12 17:34:53 2015
        244M scanned out of 1.91G at 61.0M/s, 0h0m to go
        115M repaired, 12.46% done
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sde     ONLINE       0     0   948  (repairing)
            sdf     ONLINE       0     0     0
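Assuming the scrub completes cleanly, the file can be re-verified against the checksum taken earlier; the md5sum should still print f0ca5a6e2718b8c98c2e0fdabd83d943:

sudo zpool status testpool
md5sum /testpool/random.dat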
Now detach the damaged drive from the pool:
sudo zpool detach testpool /dev/sde
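With one side of the two-way mirror detached, /dev/sdf is expected to carry that half of the pool on its own (it will typically show up as a plain, non-mirrored vdev) until a replacement is attached. This can be confirmed with:

sudo zpool status testpool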
Hot-swap the physical drive and attach the replacement to the surviving device:
sudo zpool attach -f testpool /dev/sdf /dev/sde
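Note that the surviving device (/dev/sdf) is given first and the new device (/dev/sde) second. Attaching a device to an existing one rebuilds the mirror, and ZFS normally starts resilvering /dev/sde automatically; its progress can be watched with:

sudo zpool status testpool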
Finally, initiate another scrub to verify the repaired 2 x 2 mirror:
sudo zpool scrub testpool
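When this final scrub completes, the pool should report no outstanding errors and the test file should still match its original checksum; any residual error counters can then be reset:

sudo zpool status testpool
md5sum /testpool/random.dat
sudo zpool clear testpool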