Configuring LVM on top of RAID: an alternative way to partition a RAID device.
Is partitioning a RAID device the same as partitioning a disk that carries an ext2, ext3, or ext4 filesystem?
The answer is no!
A RAID device is partitioned differently from an ordinary disk; the simple partition-and-format approach does not apply.
Why would you need to partition a RAID device at all?
Since partitioning a RAID device is not like partitioning a simple disk, what do you do when you need to subdivide a RAID device or resize it later?
Creating LVM on top of the RAID device is one alternative to partitioning it directly.
As we know, a logical volume can be built from a single physical volume or from several physical volumes. So suppose we create one 100 GB logical volume from a single physical volume and another 100 GB logical volume from multiple physical volumes: which of the two performs better?
Or which logical volume is more flexible?
Or which logical volume is more reliable (i.e., has the smallest chance of data loss)?
Or is there any performance difference at all?
Through this hands-on lab you can find the answers to the questions above for yourself.
You can test this scenario on your own machine and share your observations and answers with the rest of the world.
This is an experimental lab that I will walk through here; it illustrates several RAID and LVM concepts, which is why I decided to publish it. Everyone is free to comment on it or appreciate it in the comment section, but since this is entirely my own article I reserve the right to ignore comments I do not like, and I thank those who try this lab for themselves and share their observations, results, and ideas here with the rest of the world.
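If you want a quick, rough way to compare the two layouts yourself, a simple sequential-throughput test with dd is enough to reveal large differences. This is only a sketch: /lv-single and /lv-multi are hypothetical mount points for a logical volume built on one physical volume and one built on several.

# Write 100 MB to the single-PV volume, bypassing the page cache
[root@satish ~]# dd if=/dev/zero of=/lv-single/testfile bs=1M count=100 oflag=direct
# Read it back, again bypassing the cache
[root@satish ~]# dd if=/lv-single/testfile of=/dev/null bs=1M iflag=direct
# Repeat both commands under /lv-multi and compare the MB/s figures dd reports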
To create LVM on top of software RAID 5 we go through the few simple steps I list below.
Partition the disk.
Change the partition type to Linux raid autodetect.
Configure software RAID 5.
Create the MD device /dev/mdX.
Choose the device type.
Choose the number of devices to use in the RAID 5 array.
Choose the spare devices to use in the RAID 5 array.
Configure the layout of the RAID 5 array.
Configure the mount point.
Create a physical volume on the RAID device.
Create a volume group on the RAID device.
Create the logical volume.
Format the logical volume and configure its mount point.
Add a permanent entry to the /etc/fstab file.
Test a disk-failure scenario and its effect on the RAID array and the logical volume.
Step 1: Create four partitions, /dev/sda6, /dev/sda7, /dev/sda8, and /dev/sda9, each 100 MB in size.
[root@satish ~]# fdisk /dev/sda

The number of cylinders for this disk is set to 19457.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
First cylinder (18869-19457, default 18869):
Using default value 18869
Last cylinder or +size or +sizeM or +sizeK (18869-19457, default 19457): +100M

Command (m for help): n
First cylinder (18882-19457, default 18882):
Using default value 18882
Last cylinder or +size or +sizeM or +sizeK (18882-19457, default 19457): +100M

Command (m for help): n
First cylinder (18895-19457, default 18895):
Using default value 18895
Last cylinder or +size or +sizeM or +sizeK (18895-19457, default 19457): +100M

Command (m for help): n
First cylinder (18908-19457, default 18908):
Using default value 18908
Last cylinder or +size or +sizeM or +sizeK (18908-19457, default 19457): +100M

Command (m for help): p

Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      102400    7  HPFS/NTFS
Partition 1 does not end on cylinder boundary.
/dev/sda2              13        3825    30617600    7  HPFS/NTFS
/dev/sda3            3825       11474    61440000    7  HPFS/NTFS
/dev/sda4           11475       19457    64123447+   5  Extended
/dev/sda5           11475       18868    59392273+  83  Linux
/dev/sda6           18869       18881      104391   83  Linux
/dev/sda7           18882       18894      104391   83  Linux
/dev/sda8           18895       18907      104391   83  Linux
/dev/sda9           18908       18920      104391   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@satish ~]# partprobe
Step 2: Change the partition type to Linux raid autodetect (fd).
[root@satish ~]# fdisk /dev/sda

The number of cylinders for this disk is set to 19457.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): t
Partition number (1-9): 9
Hex code (type L to list codes): fd
Changed system type of partition 9 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-9): 8
Hex code (type L to list codes): fd
Changed system type of partition 8 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-9): 7
Hex code (type L to list codes): fd
Changed system type of partition 7 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-9): 6
Hex code (type L to list codes): fd
Changed system type of partition 6 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      102400    7  HPFS/NTFS
Partition 1 does not end on cylinder boundary.
/dev/sda2              13        3825    30617600    7  HPFS/NTFS
/dev/sda3            3825       11474    61440000    7  HPFS/NTFS
/dev/sda4           11475       19457    64123447+   5  Extended
/dev/sda5           11475       18868    59392273+  83  Linux
/dev/sda6           18869       18881      104391   fd  Linux raid autodetect
/dev/sda7           18882       18894      104391   fd  Linux raid autodetect
/dev/sda8           18895       18907      104391   fd  Linux raid autodetect
/dev/sda9           18908       18920      104391   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@satish ~]# partprobe
Step 3: Create the initial RAID array from the three 100 MB partitions /dev/sda6, /dev/sda7, and /dev/sda8.
At the same time we add /dev/sda9 as a spare device.
[root@satish ~]# mdadm --create --verbose /dev/md5 --chunk=128 --level=5 --layout=right-asymmetric --raid-devices=3 /dev/sda6 /dev/sda7 /dev/sda8 --spare-devices=1 /dev/sda9
mdadm: /dev/sda6 appears to contain an ext2fs file system
    size=208640K  mtime=Mon Jan 27 08:20:52 2013
mdadm: /dev/sda6 appears to be part of a raid array:
    level=raid0 devices=2 ctime=Mon Jan 27 08:11:06 2013
mdadm: /dev/sda7 appears to be part of a raid array:
    level=raid0 devices=2 ctime=Mon Jan 27 08:11:06 2013
mdadm: /dev/sda8 appears to contain an ext2fs file system
    size=104320K  mtime=Tue Jan 28 07:49:32 2013
mdadm: /dev/sda8 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Tue Jan 28 07:48:03 2013
mdadm: /dev/sda9 appears to contain an ext2fs file system
    size=104320K  mtime=Tue Jan 28 07:49:32 2013
mdadm: /dev/sda9 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Tue Jan 28 07:48:03 2013
mdadm: size set to 104320K
Continue creating array? y
mdadm: array /dev/md5 started.
Explanation of the command above:
--create: creates a new raid device.
--verbose: prints progress information while the operation runs.
--chunk=128: sets the chunk size, i.e., how much data is written to one disk before moving on to the next; here it is 128 KB.
--level=5: defines the RAID level, so this is RAID 5.
--raid-devices=3: the number of devices or disks to use in the array; here it is 3.
/dev/sda6, /dev/sda7, /dev/sda8: the disks that will be used in the array.
--spare-devices: adds spare disks while creating the array, so that a failed disk is resynchronized automatically.
--layout: specifies the layout, or parity symmetry, of the array being created.
Step 4: Format the raid device with a journaling filesystem.
[root@satish ~]# mkfs.ext3 /dev/md5
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
52208 inodes, 208640 blocks
10432 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
26 block groups
8192 blocks per group, 8192 fragments per group
2008 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729, 204801

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 37 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
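Optionally, the filesystem layout can be aligned with the RAID chunk size, which can help performance on striped arrays. This is a hedged variant, not what the transcript above shows: with the 128 KB chunk chosen earlier and an explicit 4 KB block size, the ext3 stride would be 128 / 4 = 32 blocks (newer e2fsprogs releases additionally accept a stripe-width option).

# Hypothetical alternative: align ext3 to the 128K chunk (stride = 128K / 4K = 32)
[root@satish ~]# mkfs.ext3 -b 4096 -E stride=32 /dev/md5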
Step 5: View the RAID configuration.
How to view basic information about all currently active raid devices:
[root@satish ~]# cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md5 : active raid5 sda8[2] sda9[3](S) sda7[1] sda6[0]
      208640 blocks level 5, 128k chunk, algorithm 1 [3/3] [UUU]

unused devices: <none>
View detailed information about the raid device:
[root@satish ~]# mdadm --detail /dev/md5
/dev/md5:
        Version : 0.90
  Creation Time : Mon Jun 3 01:22:14 2013
     Raid Level : raid5
     Array Size : 208640 (203.78 MiB 213.65 MB)
  Used Dev Size : 104320 (101.89 MiB 106.82 MB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 5
    Persistence : Superblock is persistent

    Update Time : Mon Jun 3 01:30:52 2013
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : right-asymmetric
     Chunk Size : 128K

           UUID : 74a1ed87:c7567887:280dbe38:ef27c774
         Events : 0.2

    Number   Major   Minor   RaidDevice State
       0       8        6        0      active sync   /dev/sda6
       1       8        7        1      active sync   /dev/sda7
       2       8        8        2      active sync   /dev/sda8

       3       8        9        -      spare   /dev/sda9
How to tell whether a given device is a RAID component device or an md array:
[root@satish ~]# mdadm --query /dev/sda9
/dev/sda9: is not an md array
/dev/sda9: device 3 in 3 device active raid5 /dev/md5.  Use mdadm --examine for more detail.
[root@satish ~]# mdadm --query /dev/sda6
/dev/sda6: is not an md array
/dev/sda6: device 0 in 3 device active raid5 /dev/md5.  Use mdadm --examine for more detail.
The same query run against the array device itself:
[root@satish ~]# mdadm --query /dev/md5
/dev/md5: 203.75MiB raid5 3 devices, 1 spare. Use mdadm --detail for more detail.
/dev/md5: No md super block found, not an md component.
How to examine a device used in the raid array in more detail:
[root@satish ~]# mdadm --examine /dev/sda9
/dev/sda9:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 74a1ed87:c7567887:280dbe38:ef27c774
  Creation Time : Mon Jun 3 01:22:14 2013
     Raid Level : raid5
  Used Dev Size : 104320 (101.89 MiB 106.82 MB)
     Array Size : 208640 (203.78 MiB 213.65 MB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 5

    Update Time : Mon Jun 3 01:22:28 2013
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1
       Checksum : 9fb5233f - correct
         Events : 2

         Layout : right-asymmetric
     Chunk Size : 128K

      Number   Major   Minor   RaidDevice State
this     3       8        9        3      spare   /dev/sda9

   0     0       8        6        0      active sync   /dev/sda6
   1     1       8        7        1      active sync   /dev/sda7
   2     2       8        8        2      active sync   /dev/sda8
   3     3       8        9        3      spare   /dev/sda9
How to list the ARRAY line:
[root@satish ~]# mdadm --detail --scan
ARRAY /dev/md5 level=raid5 num-devices=3 metadata=0.90 spares=1 UUID=74a1ed87:c7567887:280dbe38:ef27c774
How to list the ARRAY line for a specific device:
[root@satish ~]# mdadm --detail --brief /dev/md5
ARRAY /dev/md5 level=raid5 num-devices=3 metadata=0.90 spares=1 UUID=74a1ed87:c7567887:280dbe38:ef27c774
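To make sure the array is assembled under the same name after a reboot, the ARRAY line shown above can be appended to mdadm's configuration file. A minimal sketch; the file location can vary by distribution, and /etc/mdadm.conf is assumed here:

# Record the array so it is auto-assembled as /dev/md5 at boot
[root@satish ~]# mdadm --detail --scan >> /etc/mdadm.conf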
Step 6: Create a physical volume on the RAID 5 array.
[root@satish ~]# pvcreate /dev/md5
  Physical volume "/dev/md5" successfully created
Check the physical volume attributes with pvs:
[root@satish ~]# pvs
  PV         VG    Fmt  Attr PSize   PFree
  /dev/md5         lvm2 --   203.75M 203.75M
Check detailed physical volume information with the pvdisplay command:
[root@satish ~]# pvdisplay
  "/dev/md5" is a new physical volume of "203.75 MB"
  --- NEW Physical volume ---
  PV Name               /dev/md5
  VG Name
  PV Size               203.75 MB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               e5YCQh-0IFd-MYv2-2WzC-KHEx-pys3-z8w2Ud
Step 7: Create a volume group named raid5 with the vgcreate command.
[root@satish ~]# vgcreate raid5 /dev/md5
  Volume group "raid5" successfully created
You have new mail in /var/spool/mail/root
See the volume group attributes with the vgs command:
[root@satish ~]# vgs
  VG    #PV #LV #SN Attr   VSize   VFree
  raid5   1   0   0 wz--n- 200.00M 200.00M
See detailed volume group information with vgdisplay:
[root@satish ~]# vgdisplay
  --- Volume group ---
  VG Name               raid5
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               200.00 MB
  PE Size               4.00 MB
  Total PE              50
  Alloc PE / Size       0 / 0
  Free  PE / Size       50 / 200.00 MB
  VG UUID               om3xvw-CGQX-mMwx-K03R-jf2p-zaqM-xjswMZ
Step 8: Create a logical volume with lvcreate.
[root@satish ~]# lvcreate -L 150M raid5 -n lvm0
  Rounding up size to full physical extent 152.00 MB
  Logical volume "lvm0" created
View the attributes of the logical volume:
[root@satish ~]# lvs
  LV   VG    Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  lvm0 raid5 -wi-a- 152.00M
View detailed logical volume information:
[root@satish ~]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/raid5/lvm0
  VG Name                raid5
  LV UUID                UCrVf9-3cJx-0TlU-aSl0-Glqg-igic-UHtVgg
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                152.00 MB
  Current LE             38
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:0
Step 9: Format the logical volume.
[root@satish ~]# mkfs.ext3 /dev/raid5/lvm0
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
38912 inodes, 155648 blocks
7782 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
19 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
Step 10: Configure the mount point.
[root@satish ~]# mkdir /raid5
[root@satish ~]# mount /dev/raid5/lvm0 /raid5
[root@satish ~]# df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/sda5                55G   22G   31G  41% /
tmpfs                   502M     0  502M   0% /dev/shm
/dev/mapper/raid5-lvm0  148M  5.6M  135M   4% /raid5
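One of the flexibility questions raised at the beginning can already be answered here: because the filesystem sits on a logical volume, it can be grown later without touching the RAID array underneath. A sketch, assuming the volume group still has free extents (in this lab 48 MB of the 200 MB VG remain free):

# Grow the LV by 40 MB out of the VG's free space...
[root@satish ~]# lvextend -L +40M /dev/raid5/lvm0
# ...then grow the ext3 filesystem to fill it (online, if the kernel supports it)
[root@satish ~]# resize2fs /dev/raid5/lvm0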
Now you can scan all devices:
[root@satish ~]# lvmdiskscan
  /dev/ramdisk    [       16.00 MB]
  /dev/raid5/lvm0 [      152.00 MB]
  /dev/ram        [       16.00 MB]
  /dev/sda1       [      100.00 MB]
  /dev/ram2       [       16.00 MB]
  /dev/sda2       [       29.20 GB]
  /dev/ram3       [       16.00 MB]
  /dev/sda3       [       58.59 GB]
  /dev/ram4       [       16.00 MB]
  /dev/ram5       [       16.00 MB]
  /dev/root       [       56.64 GB]
  /dev/md5        [      203.75 MB] LVM physical volume
  /dev/ram6       [       16.00 MB]
  /dev/ram7       [       16.00 MB]
  /dev/ram8       [       16.00 MB]
  /dev/ram9       [       16.00 MB]
  /dev/ram10      [       16.00 MB]
  /dev/ram11      [       16.00 MB]
  /dev/ram12      [       16.00 MB]
  /dev/ram13      [       16.00 MB]
  /dev/ram14      [       16.00 MB]
  /dev/ram15      [       16.00 MB]
  3 disks
  18 partitions
  0 LVM physical volume whole disks
  1 LVM physical volume
Note: LVM's configuration file is:
[root@satish ~]# vim /etc/lvm/lvm.conf
If we want detailed knowledge of a physical volume and the drives that participate in it, we can go through the archive files under /etc/lvm/archive. These files record how the volume group was created and are a great help when troubleshooting.
[root@satish ~]# vim /etc/lvm/archive/vg00_00000.vg
# Generated by LVM2 version 2.02.46-RHEL5 (2009-06-18): Sat Apr 27 12:45:46 2013

contents = "Text Format Volume Group"
version = 1

description = "Created *before* executing 'vgcreate vg00 /dev/sda6 /dev/sda7 /dev/sda8'"

creation_host = "localhost.localdomain" # Linux localhost.localdomain 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:54 EDT 2009 i686
creation_time = 1367081146      # Sat Apr 27 12:45:46 2013

vg00 {
        id = "H3FYcT-1u28-i8ln-ehNm-DbFM-nelQ-3UFSnw"
        seqno = 0
        status = ["RESIZEABLE", "READ", "WRITE"]
        flags = []
        extent_size = 8192      # 4 Megabytes
        max_lv = 0
        max_pv = 0

        physical_volumes {

                pv0 {
                        id = "cfz6P0-VVhD-fWUs-sbRj-0pgM-F0JM-76iVOg"
                        device = "/dev/sda6"    # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 208782       # 101.944 Megabytes
                        pe_start = 384
                        pe_count = 25   # 100 Megabytes
                }

                pv1 {
                        id = "FiouR5-VRUL-uoFp-6DCS-fJG0-cbUx-7S0gzk"
                        device = "/dev/sda7"    # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 208782       # 101.944 Megabytes
                        pe_start = 384
                        pe_count = 25   # 100 Megabytes
                }

                pv2 {
                        id = "oxIjRC-rQGQ-4kHH-K8xR-lJmn-lYOb-x3nYFR"
                        device = "/dev/sda8"    # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 208782       # 101.944 Megabytes
                        pe_start = 384
                        pe_count = 25   # 100 Megabytes
                }
        }

}
Step 11: For permanent mounting, make an entry in the /etc/fstab file.
Add the following line to /etc/fstab:
/dev/raid5/lvm0 /raid5 ext3 defaults 0 0
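It is worth verifying the entry before the next reboot, since a typo in /etc/fstab can prevent the system from booting cleanly. A quick check:

# Unmount, then let mount re-read /etc/fstab; errors show up immediately
[root@satish ~]# umount /raid5
[root@satish ~]# mount -a
[root@satish ~]# df -h /raid5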
What happens if one of the partitions involved in the RAID configuration fails?
For testing purposes I will fail the partition /dev/sda8 and watch the effect on the RAID array and on LVM.
[root@satish ~]# mdadm /dev/md5 --fail /dev/sda8
mdadm: set /dev/sda8 faulty in /dev/md5
Looking at the array information now, it clearly shows that the spare device we specified at creation time automatically replaces the faulty device, so we can see the spare rebuilding state.
[root@satish ~]# mdadm --detail /dev/md5
/dev/md5:
        Version : 0.90
  Creation Time : Mon Jun 3 01:22:14 2013
     Raid Level : raid5
     Array Size : 208640 (203.78 MiB 213.65 MB)
  Used Dev Size : 104320 (101.89 MiB 106.82 MB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 5
    Persistence : Superblock is persistent

    Update Time : Mon Jun 3 02:29:18 2013
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 1

         Layout : right-asymmetric
     Chunk Size : 128K

 Rebuild Status : 21% complete

           UUID : 74a1ed87:c7567887:280dbe38:ef27c774
         Events : 0.4

    Number   Major   Minor   RaidDevice State
       0       8        6        0      active sync   /dev/sda6
       1       8        7        1      active sync   /dev/sda7
       4       8        9        2      spare rebuilding   /dev/sda9

       3       8        8        -      faulty spare   /dev/sda8
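If you want to follow the rebuild as it happens, you can watch /proc/mdstat refresh periodically:

# Redisplay the rebuild status every 2 seconds (Ctrl+C to exit)
[root@satish ~]# watch -n 2 cat /proc/mdstat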
Rebuilding the data takes some time. That is why, when we look at the result again a little later, we find the spare partition now fully synchronized with the raid array.
[root@satish ~]# mdadm --detail /dev/md5
/dev/md5:
        Version : 0.90
  Creation Time : Mon Jun 3 01:22:14 2013
     Raid Level : raid5
     Array Size : 208640 (203.78 MiB 213.65 MB)
  Used Dev Size : 104320 (101.89 MiB 106.82 MB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 5
    Persistence : Superblock is persistent

    Update Time : Mon Jun 3 02:29:31 2013
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0

         Layout : right-asymmetric
     Chunk Size : 128K

           UUID : 74a1ed87:c7567887:280dbe38:ef27c774
         Events : 0.6

    Number   Major   Minor   RaidDevice State
       0       8        6        0      active sync   /dev/sda6
       1       8        7        1      active sync   /dev/sda7
       2       8        9        2      active sync   /dev/sda9

       3       8        8        -      faulty spare   /dev/sda8
Here you can see the list of active and failed devices:
[root@satish ~]# cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md5 : active raid5 sda8[3](F) sda9[2] sda7[1] sda6[0]
      208640 blocks level 5, 128k chunk, algorithm 1 [3/3] [UUU]

unused devices: <none>
There is no change in the logical volume.
[root@satish ~]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/raid5/lvm0
  VG Name                raid5
  LV UUID                UCrVf9-3cJx-0TlU-aSl0-Glqg-igic-UHtVgg
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                152.00 MB
  Current LE             38
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:0
[root@satish ~]# pvck /dev/md5
  Found label on /dev/md5, sector 1, type=LVM2 001
  Found text metadata area: offset=4096, size=258048
And there is no data loss.
[root@satish raid5]# df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/sda5                55G   22G   31G  41% /
tmpfs                   502M     0  502M   0% /dev/shm
/dev/mapper/raid5-lvm0  148M   56M   84M  40% /raid5
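Note that the faulty device is still listed as a member of the array. To finish the recovery you would normally pull it out of the array and, once the underlying disk is healthy again (here the failure was only simulated), add it back, where it becomes the new spare. A sketch:

# Remove the faulty member from the array
[root@satish ~]# mdadm /dev/md5 --remove /dev/sda8
# Re-add it (or a replacement partition); it joins as the new spare
[root@satish ~]# mdadm /dev/md5 --add /dev/sda8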
Linux software RAID and data recovery:
Configuring LVM on top of RAID is very easy, but after a crash or data loss we need to be able to recover, so this metadata becomes very important. Here we look at the backup file that LVM writes when the volume group is created on top of RAID, because it describes the LVM creation and allocation layout in detail.
[root@satish ~]# vim /etc/lvm/backup/raid5
# Generated by LVM2 version 2.02.46-RHEL5 (2009-06-18): Mon Jun 3 02:08:05 2013

contents = "Text Format Volume Group"
version = 1

description = "Created *after* executing 'lvcreate -L 150M raid5 -n lvm0'"

creation_host = "satish.com"    # Linux satish.com 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:54 EDT 2009 i686
creation_time = 1370239685      # Mon Jun 3 02:08:05 2013

raid5 {
        id = "om3xvw-CGQX-mMwx-K03R-jf2p-zaqM-xjswMZ"
        seqno = 2
        status = ["RESIZEABLE", "READ", "WRITE"]
        flags = []
        extent_size = 8192      # 4 Megabytes
        max_lv = 0
        max_pv = 0

        physical_volumes {

                pv0 {
                        id = "e5YCQh-0IFd-MYv2-2WzC-KHEx-pys3-z8w2Ud"
                        device = "/dev/md5"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 417280       # 203.75 Megabytes
                        pe_start = 512
                        pe_count = 50   # 200 Megabytes
                }
        }

        logical_volumes {

                lvm0 {
                        id = "UCrVf9-3cJx-0TlU-aSl0-Glqg-igic-UHtVgg"
                        status = ["READ", "WRITE", "VISIBLE"]
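If the LVM metadata on the physical volume is ever damaged, this backup file is what lets you rebuild it. A hedged sketch of the recovery, which restores metadata only (not file data) and assumes /dev/md5 itself is intact:

# Restore the volume group metadata from the automatic backup
[root@satish ~]# vgcfgrestore -f /etc/lvm/backup/raid5 raid5
# Reactivate the volume group and its logical volumes
[root@satish ~]# vgchange -ay raid5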
Now, how do we tear down the setup described above?
The removal steps are simple (the matching commands are sketched after this list):
Step 1: Remove the line from the /etc/fstab file.
Step 2: Unmount the logical volume.
Step 3: Remove the logical volume with the lvremove command.
Step 4: Remove the volume group with the vgremove command.
Step 5: Remove the physical volume with the pvremove command.
Step 6: Fail the partitions used in the raid array.
Step 7: Stop the array.
Step 8: Remove the array.
Step 9: Delete the partitions with the fdisk utility.
The whole procedure above can also be carried out with loop devices instead of real partitions or disks.
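A minimal sketch of that loop-device variant, with hypothetical image file names; each 100 MB backing file stands in for one of the partitions used above:

# Create four 100 MB backing files and attach them to loop devices
[root@satish ~]# for i in 0 1 2 3; do dd if=/dev/zero of=/tmp/disk$i.img bs=1M count=100; losetup /dev/loop$i /tmp/disk$i.img; done
# Build the same array from the loop devices
[root@satish ~]# mdadm --create /dev/md5 --level=5 --raid-devices=3 /dev/loop0 /dev/loop1 /dev/loop2 --spare-devices=1 /dev/loop3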