Recovering from a RAID 10 Disk Failure
======================================
What should you do when one disk in a RAID 10 array fails?
1. Simulate a failure: kick one disk out of the four-disk RAID 10 array.
# List the sd* disks
root@longchi:~# fdisk -l | grep sd[a-z]
Disk /dev/sda: 100 GiB, 107374182400 bytes, 209715200 sectors
/dev/sda1 2048 4095 2048 1M BIOS boot
/dev/sda2 4096 1054719 1050624 513M EFI System
/dev/sda3 1054720 209713151 208658432 99.5G Linux filesystem
Disk /dev/sdb: 200 GiB, 214748364800 bytes, 419430400 sectors
Disk /dev/sdc: 300 GiB, 322122547200 bytes, 629145600 sectors
Disk /dev/sdd: 400 GiB, 429496729600 bytes, 838860800 sectors
Disk /dev/sde: 400 GiB, 429496729600 bytes, 838860800 sectors
2. Remove a disk from the RAID 10 array by marking it faulty: mdadm /dev/md0 -f /dev/sdb
# Mark one disk in the RAID 10 array as faulty
root@longchi:~# mdadm /dev/md0 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
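Besides mdadm -D, /proc/mdstat gives a one-glance view of the failure. A minimal check, as a sketch:
# A faulty member is flagged with (F), and one U drops out of the [UUUU] status
cat /proc/mdstat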
3. Check the RAID 10 status: mdadm -D /dev/md0
4. Even with one failed disk, the whole RAID 10 array remains usable. Reboot the machine so the RAID information is re-read: reboot
# Check the RAID 10 status
root@longchi:~# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Feb 20 21:28:12 2024
        Raid Level : raid10
        Array Size : 419166208 (399.75 GiB 429.23 GB)
     Used Dev Size : 209583104 (199.87 GiB 214.61 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Feb 21 14:25:03 2024
             State : clean, degraded
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : longchi:0  (local to host longchi)
              UUID : fe258a50:d23511f6:ab65bb52:4dcc42a2
            Events : 406

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde

       0       8       16        -      faulty   /dev/sdb
# Keep writing data into /longchiraid. Even though /dev/sdb has failed, writes to /dev/md0 still succeed.
root@longchi:~# cd /longchiraid
root@longchi:/longchiraid# ls
test.txt
root@longchi:/longchiraid# cp test.txt test.txt.2
root@longchi:/longchiraid# ls
test.txt test.txt.2
root@longchi:/longchiraid# df -hT
Filesystem Type Size Used Avail Use% Mounted on
tmpfs tmpfs 790M 2.0M 788M 1% /run
/dev/sda3 ext4 98G 11G 82G 12% /
tmpfs tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs tmpfs 5.0M 4.0K 5.0M 1% /run/lock
/dev/sda2 vfat 512M 6.1M 506M 2% /boot/efi
tmpfs tmpfs 790M 72K 790M 1% /run/user/126
/dev/md0 fuseblk 400G 389M 400G 1% /longchiraid
root@longchi:/longchiraid# cp test.txt test.txt.3
root@longchi:/longchiraid# df -hT
Filesystem Type Size Used Avail Use% Mounted on
tmpfs tmpfs 790M 2.0M 788M 1% /run
/dev/sda3 ext4 98G 11G 82G 12% /
tmpfs tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs tmpfs 5.0M 4.0K 5.0M 1% /run/lock
/dev/sda2 vfat 512M 6.1M 506M 2% /boot/efi
tmpfs tmpfs 790M 72K 790M 1% /run/user/126
/dev/md0 fuseblk 400G 545M 400G 1% /longchiraid
root@longchi:/longchiraid# ls
test.txt test.txt.2 test.txt.3
5. To repair the array, simply buy a new disk and add it back into the RAID 10 array.
- Note: your RAID device may already be set to mount automatically at boot via an entry in /etc/fstab, so it still needs to be unmounted first.
- Before re-adding a disk to the array, you must unmount the filesystem: umount /dev/md0
# Step 1: leave the mount-point directory
root@longchi:/longchiraid# cd
# Step 2: unmount
root@longchi:~# umount /dev/md0
- Add the replacement disk back into the /dev/md0 array
# Re-add the /dev/sdb device to the /dev/md0 array: mdadm /dev/md0 -a /dev/sdb
root@longchi:~# mdadm /dev/md0 -a /dev/sdb
mdadm: Cannot open /dev/sdb: Device or resource busy # the failed device is busy; reboot the system (reboot)
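As a hedged aside: the busy error appears because the faulty disk is still attached to the array, and a reboot is one way out. A common no-reboot alternative (a sketch, not the steps taken here) is to detach the faulty member first and then re-add it:
# Detach the faulty member from the array
mdadm /dev/md0 -r /dev/sdb
# Then re-add it (or a replacement disk)
mdadm /dev/md0 -a /dev/sdb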
# After the reboot, check the device
root@longchi:~# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Feb 20 21:28:12 2024
        Raid Level : raid10
        Array Size : 419166208 (399.75 GiB 429.23 GB)
     Used Dev Size : 209583104 (199.87 GiB 214.61 GB)
      Raid Devices : 4
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Feb 21 17:58:07 2024
             State : clean, degraded
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : longchi:0  (local to host longchi)
              UUID : fe258a50:d23511f6:ab65bb52:4dcc42a2
            Events : 418

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde
# Check the mount status. Because automatic mounting at boot was configured earlier, /dev/md0 is mounted again at this point.
root@longchi:~# mount -l | grep md0
/dev/md0 on /longchiraid type fuseblk (rw,relatime,user_id=0,group_id=0,allow_other,blksize=4096)
# Unmounted successfully
root@longchi:~# umount /dev/md0
root@longchi:~# mount -l | grep md0
root@longchi:~#
# Re-add the /dev/sdb device to the /dev/md0 array
root@longchi:~# mdadm /dev/md0 -a /dev/sdb
mdadm: re-added /dev/sdb
# View the /dev/md0 array details: mdadm -D /dev/md0
root@longchi:~# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Feb 20 21:28:12 2024
        Raid Level : raid10
        Array Size : 419166208 (399.75 GiB 429.23 GB)
     Used Dev Size : 209583104 (199.87 GiB 214.61 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Feb 21 18:39:22 2024
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : longchi:0  (local to host longchi)
              UUID : fe258a50:d23511f6:ab65bb52:4dcc42a2
            Events : 423

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync set-A   /dev/sdb
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde
root@longchi:~#
6. Now check the array information and wait for the rebuild to finish.
# Check the state of /dev/md0
root@longchi:~# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Feb 20 21:28:12 2024
        Raid Level : raid10
        Array Size : 419166208 (399.75 GiB 429.23 GB)
     Used Dev Size : 209583104 (199.87 GiB 214.61 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Feb 21 18:39:22 2024
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : longchi:0  (local to host longchi)
              UUID : fe258a50:d23511f6:ab65bb52:4dcc42a2
            Events : 423

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync set-A   /dev/sdb
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde
root@longchi:~#
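Rather than re-running mdadm -D by hand, you can block until the recovery finishes. A minimal sketch:
# Returns once any resync/rebuild on the array has completed
mdadm --wait /dev/md0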
7. Wait for the rebuild to finish. Once the active device count is back to 4, the RAID 10 failure recovery is complete.
# When the following two fields appear, the recovery is done: Active Devices : 4 and Working Devices : 4
Restarting RAID 10
==================
1. First, create the RAID configuration file: echo DEVICE /dev/sd[b-e] > /etc/mdadm.conf
root@longchi:~# echo DEVICE /dev/sd[b-e] > /etc/mdadm.conf
root@longchi:~# cat /etc/mdadm.conf
DEVICE /dev/sdb /dev/sdc /dev/sdd /dev/sde
2. Scan the array information and append it to that file (/etc/mdadm.conf): mdadm -Ds >> /etc/mdadm.conf
'-s' means scan the arrays
'-D' means print array details
root@longchi:~# mdadm -Ds >> /etc/mdadm.conf
root@longchi:~# cat /etc/mdadm.conf
DEVICE /dev/sdb /dev/sdc /dev/sdd /dev/sde
ARRAY /dev/md/0 metadata=1.2 name=longchi:0 UUID=fe258a50:d23511f6:ab65bb52:4dcc42a2
root@longchi:~#
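One portability note, offered as an assumption to verify on your distribution: Debian/Ubuntu systems usually read /etc/mdadm/mdadm.conf instead of /etc/mdadm.conf, and the initramfs may need regenerating so the array assembles at boot. A minimal sketch for that layout:
# Append the scanned array definition to the Debian-style config path
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# Rebuild the initramfs so the array is assembled early during boot
update-initramfs -u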
3. Unmount RAID 10 (if it is mounted, you must unmount it before restarting)
root@longchi:~# umount /longchiraid
umount: /longchiraid: not mounted.
root@longchi:~#
4. Now RAID 10 can be stopped: mdadm -S /dev/md0 (the capital '-S' means stop)
# Stop RAID 10
root@longchi:~# mdadm -S /dev/md0
mdadm: stopped /dev/md0
5. Check the array details with mdadm -D /dev/md0. The /dev/md0 device file should no longer be visible, which is the expected state.
root@longchi:~# mdadm -D /dev/md0
mdadm: cannot open /dev/md0: No such file or directory
root@longchi:~#
root@longchi:~# ls /dev/md*
ls: cannot access '/dev/md*': No such file or directory
6. With the configuration file in place, RAID 10 can be started again normally: mdadm -A /dev/md0 ('-A' means assemble, i.e. start)
# Start RAID 10
root@longchi:~# mdadm -A /dev/md0
mdadm: Fail create md0 when using /sys/module/md_mod/parameters/new_array
mdadm: /dev/md0 has been started with 4 drives.
# View the /dev/md0 array information
root@longchi:~# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Feb 20 21:28:12 2024
        Raid Level : raid10
        Array Size : 419166208 (399.75 GiB 429.23 GB)
     Used Dev Size : 209583104 (199.87 GiB 214.61 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Feb 21 18:39:22 2024
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : longchi:0  (local to host longchi)
              UUID : fe258a50:d23511f6:ab65bb52:4dcc42a2
            Events : 423

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync set-A   /dev/sdb
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde
root@longchi:~#
7. The RAID 10 information can now be viewed normally
root@longchi:~# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Feb 20 21:28:12 2024
        Raid Level : raid10
        Array Size : 419166208 (399.75 GiB 429.23 GB)
     Used Dev Size : 209583104 (199.87 GiB 214.61 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Feb 21 18:39:22 2024
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : longchi:0  (local to host longchi)
              UUID : fe258a50:d23511f6:ab65bb52:4dcc42a2
            Events : 423

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync set-A   /dev/sdb
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde
root@longchi:~#
root@longchi:~# mount -l | grep md0
/dev/md0 on /longchiraid type fuseblk (rw,relatime,user_id=0,group_id=0,allow_other,blksize=4096)
root@longchi:~# ls /longchiraid
test.txt test.txt.2 test.txt.3
root@longchi:~# umount /longchiraid
root@longchi:~# ls /longchiraid
root@longchi:~#
Tearing Down RAID 10
====================
To delete the RAID 10 array, follow the steps below.
1. Unmount the device if it is mounted
root@longchi:~# umount /longchiraid
root@longchi:~# ls /longchiraid
root@longchi:~# umount /dev/md0
umount: /dev/md0: not mounted.
root@longchi:~#
2. Stop the RAID service
root@longchi:~# mdadm -S /dev/md0
mdadm: stopped /dev/md0
root@longchi:~#
# Check the /dev/md0 file: it no longer exists
root@longchi:~# ls /dev/md0
ls: cannot access '/dev/md0': No such file or directory
root@longchi:~#
3. Wipe the RAID metadata from every member disk: mdadm --misc --zero-superblock /dev/sd[b-e]
root@longchi:~# mdadm --misc --zero-superblock /dev/sdb
root@longchi:~# mdadm --misc --zero-superblock /dev/sdc
root@longchi:~# mdadm --misc --zero-superblock /dev/sdd
root@longchi:~# mdadm --misc --zero-superblock /dev/sde
root@longchi:~#
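The shell glob from the note above wipes all four members in one command; a sketch, with a verification step using mdadm -E (examine):
# Wipe the md superblock from every member disk at once
mdadm --misc --zero-superblock /dev/sd[b-e]
# Verify: a wiped disk should report "No md superblock detected"
mdadm -E /dev/sdb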
4. Delete the RAID 10 configuration file (/etc/mdadm.conf)
root@longchi:~# cat /etc/mdadm.conf
DEVICE /dev/sdb /dev/sdc /dev/sdd /dev/sde
ARRAY /dev/md/0 metadata=1.2 name=longchi:0 UUID=fe258a50:d23511f6:ab65bb52:4dcc42a2
root@longchi:~#
root@longchi:~# rm /etc/mdadm.conf
root@longchi:~#
5. Finally, remove the automatic-mount configuration from /etc/fstab
root@longchi:~# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda3 during installation
UUID=1d8e3f92-eeaf-4cde-a566-1c7eea428509 / ext4 errors=remount-ro 0 1
# /boot/efi was on /dev/sda2 during installation
UUID=FEB1-E722 /boot/efi vfat umask=0077 0 1
/swapfile none swap sw 0 0
/dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0
/dev/md0 /longchiraid ntfs defaults 0 0
root@longchi:~# vim /etc/fstab
root@longchi:~# tail -1 /etc/fstab
# /dev/md0 /longchiraid ntfs defaults 0 0
root@longchi:~#
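Editing with vim works; as a non-interactive alternative, a minimal sed sketch that comments out the md0 entry (assuming the line starts with /dev/md0):
# Comment out the /dev/md0 line without opening an editor
sed -i 's|^/dev/md0|# /dev/md0|' /etc/fstab
# Sanity check: mount everything left in fstab; an error means a bad entry
mount -a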
RAID with a Spare Disk
======================
This walkthrough shows how a hot spare takes over when an array member fails.
1. Reuse the same four disks: three disks form the RAID array and one serves as a hot spare.
A RAID 5 array over three disks plus one spare can be built with the following command:
mdadm -Cv /dev/md0 -n 3 -l 5 -x 1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# Parameter breakdown:
# '-C' create a RAID array
# '-v' show the creation process verbosely
# '/dev/md0' the name of the array device to create
# '-a yes' (optional, not used above) would auto-create the '/dev/md0' array device file
# '-n 3' use three disks as active array members
# '-l 5' set the RAID level to RAID 5
# '-x 1' designate one spare disk
# followed by the names of the four disks:
# '/dev/sd[b-e]' the disks to use, i.e. the four disks in question
# List the sd* disks
root@longchi:~# ls /dev/sd*
/dev/sda /dev/sda1 /dev/sda2 /dev/sda3 /dev/sdb /dev/sdc /dev/sdd /dev/sde
root@longchi:~#
root@longchi:~# mdadm -Cv /dev/md0 -n 3 -l 5 -x 1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 209583104K
mdadm: automatically enabling write-intent bitmap on large array
mdadm: largest drive (/dev/sdd) exceeds size (209583104K) by more than 1%
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
root@longchi:~#
2. Check the RAID array information and state: mdadm -D /dev/md0
root@longchi:~# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Feb 21 23:35:16 2024
        Raid Level : raid5
        Array Size : 419166208 (399.75 GiB 429.23 GB)
     Used Dev Size : 209583104 (199.87 GiB 214.61 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Feb 21 23:46:27 2024
             State : clean, degraded, recovering
    Active Devices : 2
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 2

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

    Rebuild Status : 64% complete

              Name : longchi:0  (local to host longchi)
              UUID : e79936a0:022f080e:fcfc7e5f:f2114b46
            Events : 132

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      spare rebuilding   /dev/sdd

       3       8       64        -      spare   /dev/sde
root@longchi:~#
root@longchi:~# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Feb 21 23:35:16 2024
        Raid Level : raid5
        Array Size : 419166208 (399.75 GiB 429.23 GB)
     Used Dev Size : 209583104 (199.87 GiB 214.61 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Feb 21 23:52:18 2024
             State : clean, degraded, recovering
    Active Devices : 2
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 2

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

    Rebuild Status : 97% complete

              Name : longchi:0  (local to host longchi)
              UUID : e79936a0:022f080e:fcfc7e5f:f2114b46
            Events : 200

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      spare rebuilding   /dev/sdd

       3       8       64        -      spare   /dev/sde
root@longchi:~# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Feb 21 23:35:16 2024
        Raid Level : raid5
        Array Size : 419166208 (399.75 GiB 429.23 GB)
     Used Dev Size : 209583104 (199.87 GiB 214.61 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Feb 21 23:52:46 2024
             State : clean
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : longchi:0  (local to host longchi)
              UUID : e79936a0:022f080e:fcfc7e5f:f2114b46
            Events : 207

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd

       3       8       64        -      spare   /dev/sde
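A quicker way to spot the spare, as a sketch: in /proc/mdstat, spare members carry an (S) suffix (the exact line shown in the comment is illustrative):
# Spares show up with an (S) suffix, e.g. sde[3](S)
cat /proc/mdstat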
3. Format the array with a filesystem
root@longchi:~# mkfs.
mkfs.bfs mkfs.ext2 mkfs.ext4 mkfs.minix mkfs.ntfs
mkfs.cramfs mkfs.ext3 mkfs.fat mkfs.msdos mkfs.vfat
root@longchi:~# mkfs.ext4 -f /dev/md0
mkfs.ext4: invalid option -- 'f'
Usage: mkfs.ext4 [-c|-l filename] [-b block-size] [-C cluster-size]
        [-i bytes-per-inode] [-I inode-size] [-J journal-options]
        [-G flex-group-size] [-N number-of-inodes] [-d root-directory]
        [-m reserved-blocks-percentage] [-o creator-os]
        [-g blocks-per-group] [-L volume-label] [-M last-mounted-directory]
        [-O feature[,...]] [-r fs-revision] [-E extended-option[,...]]
        [-t fs-type] [-T usage-type ] [-U UUID] [-e errors_behavior]
        [-z undo_file]
        [-jnqvDFSV] device [blocks-count]
root@longchi:~# mkfs.ext4 /dev/md0
mke2fs 1.46.5 (30-Dec-2021)
/dev/md0 contains a ntfs file system
Proceed anyway? (y,N) y
Creating filesystem with 104791552 4k blocks and 26198016 inodes
Filesystem UUID: faba633b-80de-401f-8dcc-55d169b41ed7
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616,
        78675968, 102400000

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

root@longchi:~#
4. Mount the array and start using it
root@longchi:~# ls /longchiraid
root@longchi:~#
root@longchi:~# mount /dev/md0 /longchiraid
root@longchi:~# mount -l | grep md0
/dev/md0 on /longchiraid type ext4 (rw,relatime,stripe=256)
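To make the mount survive a reboot, a minimal /etc/fstab sketch; the nofail option is an addition of mine so boot does not hang if the array is missing:
# Append a boot-time mount entry for the array
echo '/dev/md0 /longchiraid ext4 defaults,nofail 0 0' >> /etc/fstab
# Verify the entry parses and mounts cleanly
mount -a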
5. Check the mount status and the data written
root@longchi:~# mount -l | grep md0
/dev/md0 on /longchiraid type ext4 (rw,relatime,stripe=256)
# View all mounts
root@longchi:~# df -hT
Filesystem Type Size Used Avail Use% Mounted on
tmpfs tmpfs 790M 1.9M 788M 1% /run
/dev/sda3 ext4 98G 11G 82G 12% /
tmpfs tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs tmpfs 5.0M 4.0K 5.0M 1% /run/lock
/dev/sda2 vfat 512M 6.1M 506M 2% /boot/efi
tmpfs tmpfs 790M 72K 790M 1% /run/user/126
/dev/md0 ext4 393G 28K 373G 1% /longchiraid
root@longchi:~#
root@longchi:~# df -hT | grep md0
/dev/md0 ext4 393G 28K 373G 1% /longchiraid
6. Now write some data to verify the RAID works normally
root@longchi:~#
root@longchi:~# cd /longchiraid
root@longchi:/longchiraid# echo {1..10000000} > test.txt
root@longchi:/longchiraid# cp test.txt test.txt.1
root@longchi:/longchiraid# cp test.txt test.txt.2
root@longchi:/longchiraid# cp test.txt test.txt.3
root@longchi:/longchiraid# cp test.txt test.txt.4
root@longchi:/longchiraid# ls
lost+found test.txt test.txt.1 test.txt.2 test.txt.3 test.txt.4
root@longchi:/longchiraid# df -hT
Filesystem Type Size Used Avail Use% Mounted on
tmpfs tmpfs 790M 1.9M 788M 1% /run
/dev/sda3 ext4 98G 11G 82G 12% /
tmpfs tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs tmpfs 5.0M 4.0K 5.0M 1% /run/lock
/dev/sda2 vfat 512M 6.1M 506M 2% /boot/efi
tmpfs tmpfs 790M 72K 790M 1% /run/user/126
/dev/md0 ext4 393G 377M 373G 1% /longchiraid
root@longchi:/longchiraid# cp test.txt test.txt.5
root@longchi:/longchiraid# df -hT
Filesystem Type Size Used Avail Use% Mounted on
tmpfs tmpfs 790M 1.9M 788M 1% /run
/dev/sda3 ext4 98G 11G 82G 12% /
tmpfs tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs tmpfs 5.0M 4.0K 5.0M 1% /run/lock
/dev/sda2 vfat 512M 6.1M 506M 2% /boot/efi
tmpfs tmpfs 790M 72K 790M 1% /run/user/126
/dev/md0 ext4 393G 452M 372G 1% /longchiraid
root@longchi:/longchiraid#
root@longchi:/longchiraid# df -hT | grep md0
/dev/md0 ext4 393G 452M 372G 1% /longchiraid
7. See the spare disk in action: fail one disk in the array and check the array state
root@longchi:/longchiraid# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Feb 21 23:35:16 2024
        Raid Level : raid5
        Array Size : 419166208 (399.75 GiB 429.23 GB)
     Used Dev Size : 209583104 (199.87 GiB 214.61 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Thu Feb 22 00:21:40 2024
             State : clean
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : longchi:0  (local to host longchi)
              UUID : e79936a0:022f080e:fcfc7e5f:f2114b46
            Events : 207

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd

       3       8       64        -      spare   /dev/sde
root@longchi:/longchiraid# # Fail one disk in the array
root@longchi:/longchiraid# mdadm /dev/md0 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
root@longchi:/longchiraid#
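While the spare rebuilds, the progress can be followed live in /proc/mdstat. A sketch:
# The rebuild shows as a [===>...] progress bar with a percentage
watch -n 5 cat /proc/mdstat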
8. Check that the spare disk automatically joined the array; with that, the experiment is complete: mdadm -D /dev/md0
root@longchi:/longchiraid# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Feb 21 23:35:16 2024
        Raid Level : raid5
        Array Size : 419166208 (399.75 GiB 429.23 GB)
     Used Dev Size : 209583104 (199.87 GiB 214.61 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Thu Feb 22 00:31:36 2024
             State : clean, degraded, recovering
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

    Rebuild Status : 14% complete

              Name : longchi:0  (local to host longchi)
              UUID : e79936a0:022f080e:fcfc7e5f:f2114b46
            Events : 239

    Number   Major   Minor   RaidDevice State
       3       8       64        0      spare rebuilding   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd

       0       8       16        -      faulty   /dev/sdb
# The spare disk has taken over
root@longchi:/longchiraid# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Feb 21 23:35:16 2024
        Raid Level : raid5
        Array Size : 419166208 (399.75 GiB 429.23 GB)
     Used Dev Size : 209583104 (199.87 GiB 214.61 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Thu Feb 22 00:46:28 2024
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : longchi:0  (local to host longchi)
              UUID : e79936a0:022f080e:fcfc7e5f:f2114b46
            Events : 423

    Number   Major   Minor   RaidDevice State
       3       8       64        0      active sync   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd

       0       8       16        -      faulty   /dev/sdb
root@longchi:/longchiraid#
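To complete the repair cycle, the usual follow-up is to detach the failed member and add a fresh spare. A sketch; the replacement device name /dev/sdf is hypothetical:
# Remove the failed member from the array
mdadm /dev/md0 -r /dev/sdb
# Add a replacement disk as the new hot spare (/dev/sdf is hypothetical)
mdadm /dev/md0 -a /dev/sdf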