KVM Series: Disk Expansion and Adding Disks

source link: https://www.zsythink.net/archives/4253

When a virtual machine's disk fills up, how do we expand it?
This article covers two scenarios:
Scenario 1: the VM does not use logical volumes; expand the disk directly.
Scenario 2: the VM uses logical volumes; add a disk, then extend the logical volume.

Let's start with scenario 1: the VM does not use logical volumes, so we expand the disk itself and the filesystem inside the VM directly.
The overall steps are as follows, to give you the big picture:
1. Back up the disk image.
2. Increase the disk image's capacity.
3. Log in to the VM and grow the filesystem.

This example uses a VM with a qcow2 disk image. A qcow2 disk can only be grown, not shrunk, so my template machines are usually set up with a relatively small default disk size (100G or 200G) and no preallocated space; it is not too late to expand once the disk actually fills up. The exception is applications that are known to need a lot of space, in which case I finish the expansion before the VM goes into service. Also, in production, always back up the disk image beforehand to avoid data loss if the operation fails.
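A minimal backup sketch (the paths are from this example; adjust them to your environment, and make sure the VM is shut off or otherwise not writing to the image while you copy it):

# copy the disk image to a safe location before touching it
cp /var/lib/libvirt/images/kvm4.qcow2 /var/lib/libvirt/images/kvm4.qcow2.bak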

After the backup is done, confirm that the disk to be expanded has no snapshots; a qcow2 disk with snapshots cannot be resized. You can use virsh snapshot-list to check for VM snapshots, or qemu-img snapshot -l to check whether the disk image itself has standalone disk snapshots. For example, to list the snapshots of kvm1's disk image, run the following command:

qemu-img snapshot -l /var/lib/libvirt/images/kvm1.qcow2

If the disk image has snapshots, it cannot be resized.
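Similarly, a quick sketch of checking for libvirt-managed VM snapshots (kvm1 is just the example domain name):

# the list should be empty before resizing the disk
virsh snapshot-list kvm1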

Double-check that the disk to be expanded has been backed up, then begin. This example uses kvm4; from the host we can see that kvm4 is currently running:

[root@cos7 ~]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 4     kvm4                           running
 -     kvm1                           shut off
 -     kvm2                           shut off
 -     kvm3                           shut off
 -     kvm5                           shut off
 -     kvm6                           shut off

Log in to kvm4 and check the disk status inside the VM:

[root@kvm4 ~]# fdisk -l

Disk /dev/vda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000b9417

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048     2099199     1048576   83  Linux
/dev/vda2         2099200    10227711     4064256   82  Linux swap / Solaris
/dev/vda3        10227712   104857599    47314944   83  Linux
[root@kvm4 ~]# 
[root@kvm4 ~]# 
[root@kvm4 ~]# 
[root@kvm4 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G  8.5M  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/vda3        46G  1.3G   44G   3% /
/dev/vda1      1014M  142M  873M  14% /boot
tmpfs           379M     0  379M   0% /run/user/0

From the output above, the kvm4 VM currently has a single vda disk with three partitions: /dev/vda1 is the boot partition, /dev/vda2 is swap, and /dev/vda3 holds the root filesystem, which is 46G with 44G available. Suppose the root partition has filled up and we need to grow it. Although the final goal is to grow the root partition, we first have to expand kvm4's disk on the host; only after the disk has been expanded can we grow the partition and its filesystem.
On the host, confirm the path of kvm4's disk image:

[root@cos7 ~]# virsh domblklist kvm4
Target     Source
------------------------------------------------
vda        /var/lib/libvirt/images/kvm4.qcow2
hda        -

As shown, kvm4's disk image is /var/lib/libvirt/images/kvm4.qcow2, attached to kvm4 as the vda disk. First check the current size of the disk image:

[root@cos7 ~]# qemu-img info /var/lib/libvirt/images/kvm4.qcow2
image: /var/lib/libvirt/images/kvm4.qcow2
file format: qcow2
virtual size: 50G (53687091200 bytes)
disk size: 1.5G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: true

As shown above, the disk's virtual size is 50G, it currently occupies 1.5G on the host, and the image format is qcow2.
Now we want to grow this disk image by 20G. Before doing so, stop the virtual machine: the resize can succeed without stopping it, but it prints an error, and it is not clear whether that error affects the VM's later operation, so to be safe stop the VM first (a small sketch of doing this from the host follows).
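A minimal sketch of stopping the domain cleanly and confirming it is down (kvm4 is this example's domain):

virsh shutdown kvm4        # ask the guest OS to shut down cleanly
virsh domstate kvm4        # should eventually report "shut off"

With the VM shut off, run the following command to grow the disk image: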

[root@cos7 ~]# qemu-img resize /var/lib/libvirt/images/kvm4.qcow2 +20G
Image resized.

After the resize, check the disk image info again; the virtual size has grown from 50G to 70G:

[root@cos7 ~]# qemu-img info /var/lib/libvirt/images/kvm4.qcow2
image: /var/lib/libvirt/images/kvm4.qcow2
file format: qcow2
virtual size: 70G (75161927680 bytes)
disk size: 2.5G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: true

After the disk image has been expanded, start the kvm4 VM again (a one-line sketch follows), log in to it, and check the disk: it is now reported as 75.2 GB, which corresponds to the new 70G virtual size:
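Starting the domain again from the host (kvm4 is this example's domain):

virsh start kvm4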

[root@kvm4 ~]# fdisk -l

Disk /dev/vda: 75.2 GB, 75161927680 bytes, 146800640 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000b9417

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048     2099199     1048576   83  Linux
/dev/vda2         2099200    10227711     4064256   82  Linux swap / Solaris
/dev/vda3        10227712   104857599    47314944   83  Linux

However, the root partition and its filesystem have not grown yet. As shown below, the /dev/vda3 partition is still reported as 48.5 GB and the root filesystem as 46G.

[root@kvm4 ~]# fdisk -lu /dev/vda3

Disk /dev/vda3: 48.5 GB, 48450502656 bytes, 94629888 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@kvm4 ~]# 
[root@kvm4 ~]# 
[root@kvm4 ~]# 
[root@kvm4 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G  8.5M  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/vda3        46G  1.3G   44G   3% /
/dev/vda1      1014M  142M  873M  14% /boot
tmpfs           379M     0  379M   0% /run/user/0

Our goal now is to grow the root partition and its filesystem.
You can refer to Alibaba Cloud's documentation for the expansion procedure, linked here:
https://help.aliyun.com/document_detail/113316.html?spm=5176.21213303.J_6028563670.36.241a3edapvGQJG&scm=20140722.S_help

My VM runs CentOS 7, the disk uses an MBR partition table (not GPT), and the filesystem is xfs, so my expansion steps may differ from what you encounter; adapt the details to your own situation with the help of the document above. In production it is best to clone the machine, or rehearse the procedure on a test machine, first.
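If you are not sure which case applies, a quick sketch of checking the partition table type and the filesystem type from inside the guest:

fdisk -l /dev/vda | grep 'Disk label type'   # "dos" means MBR, "gpt" means GPT
lsblk -f                                     # shows the filesystem type of each partition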

First, install the cloud-utils-growpart tool in the VM (for GPT partitions, install gdisk first as well), which is used to grow the partition. On CentOS 7 the command is:

yum install -y cloud-utils-growpart

After installation, run the following command to grow the target partition. I need to grow vda3, so the arguments are vda 3, with a space between vda and 3:

[root@kvm4 ~]# growpart /dev/vda 3
CHANGED: partition=3 start=10227712 old: size=94629888 end=104857600 new: size=136572895 end=146800607

As you can see, the partition size has changed. Confirm it: the vda3 partition is now recognized as 69.9 GB:

[root@kvm4 ~]# fdisk -lu /dev/vda3

Disk /dev/vda3: 69.9 GB, 69925322240 bytes, 136572895 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

If you run the grow command again after the expansion has completed, it reports NOCHANGE ... it cannot be grown, because the partition has already been extended and there is no spare space left to grow into. There is actually another situation that produces the same error and prevents the partition from growing; we will get to it shortly, after finishing the full expansion procedure. Re-running the command here does not affect the expansion:

[root@kvm4 ~]# growpart /dev/vda 3
NOCHANGE: partition 3 is size 136572895. it cannot be grown

With the vda3 partition grown, the filesystem on it must be grown as well. The filesystem here is xfs, so running xfs_growfs is enough; for an ext filesystem you would use resize2fs instead (a sketch follows the xfs output below). The command run here is xfs_growfs /dev/vda3, as follows:

[root@kvm4 ~]# xfs_growfs /dev/vda3
meta-data=/dev/vda3              isize=512    agcount=4, agsize=2957184 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=11828736, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=5775, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 11828736 to 17071611
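For an ext2/3/4 root filesystem, a rough equivalent would be (hypothetical here, since this VM uses xfs):

resize2fs /dev/vda3        # grow an ext filesystem to fill the enlarged partition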

Check disk usage again: the root filesystem has grown from the previous 46G to 66G:

[root@kvm4 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G  8.5M  1.9G   1% /run
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/vda3        66G  1.4G   64G   3% /
/dev/vda1      1014M  142M  873M  14% /boot
tmpfs           379M     0  379M   0% /run/user/0

At this point, the whole expansion is complete.

There is a pitfall here, touched on earlier. We used the growpart command to grow the partition, and re-running it reports NOCHANGE ... it cannot be grown because there is no more space for the partition to grow into. However, in certain special cases the disk clearly still has plenty of free space, yet the very first run of growpart already reports it cannot be grown. Let's look at that case now.

If the disk has plenty of spare space for the partition to grow into, but every run of growpart reports it cannot be grown, it is most likely because the partition you want to grow is not the last partition on the disk. In the example above, the vda3 partition I grew was the last partition of vda, as the partition numbers and sector ranges in the fdisk output clearly show.
In that layout, the first run of growpart should work normally.
However, if your layout looks like the one below, and the partition you want to grow happens not to be the last partition on the disk, the expansion fails immediately with it cannot be grown:

[root@kvm1 ~]# fdisk -l

Disk /dev/vda: 536.9 GB, 536870912000 bytes, 1048576000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000ac5a2

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048     2099199     1048576   83  Linux
/dev/vda2         2099200  1044367359   521134080   83  Linux
/dev/vda3      1044367360  1048561663     2097152   82  Linux swap / Solaris
[root@kvm1 ~]# 
[root@kvm1 ~]# 
[root@kvm1 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        909M     0  909M   0% /dev
tmpfs           919M     0  919M   0% /dev/shm
tmpfs           919M  8.6M  911M   1% /run
tmpfs           919M     0  919M   0% /sys/fs/cgroup
/dev/vda2       497G  1.3G  496G   1% /
/dev/vda1      1014M  142M  873M  14% /boot
tmpfs           184M     0  184M   0% /run/user/0

As shown above, vda has three partitions and the root filesystem lives on vda2. Even if the vda disk has plenty of spare space, growing vda2 with growpart fails outright with it cannot be grown, because the vda3 partition sits after vda2. Fortunately this particular layout is easy to handle: the last partition is swap, so we simply delete it, grow vda2, and afterwards attach a separate disk for swap or use a swap file instead. So avoid this trap when building template machines; don't ask how I know.
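A rough sketch of that recovery, under the assumptions of this example (MBR disk, xfs root on vda2, swap on vda3). Deleting a partition is destructive if you pick the wrong one, so rehearse this on a clone first:

swapoff -a                     # stop using the swap partition
# remove the old swap entry from /etc/fstab by hand
fdisk /dev/vda                 # interactively: d -> 3 -> w  (delete partition 3)
partprobe /dev/vda             # re-read the partition table (or reboot if the device is busy)
growpart /dev/vda 2            # vda2 is now the last partition and can grow
xfs_growfs /                   # grow the root filesystem (xfs in this example)
dd if=/dev/zero of=/swapfile bs=1M count=2048   # recreate swap as a 2G swap file
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap defaults 0 0' >> /etc/fstab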

That covers expanding the disk and filesystem directly. Now let's talk about adding a disk and extending an LVM logical volume. The real focus here is how to attach a disk to a VM; whether the new disk is then used to extend a logical volume or as a standalone partition depends on your situation, so this part only demonstrates attaching and detaching a disk. The example uses kvm6, which is already running. To see which disks kvm6 currently has, run the following on the host:

[root@cos7 ~]# virsh domblklist kvm6
Target     Source
------------------------------------------------
vda        /var/lib/libvirt/images/kvm6.qcow2
vdb        /var/lib/libvirt/images/kvm6_vdb.qcow2
hda        -

As shown, kvm6 already has two disks, vda and vdb. Now create a new disk. I plan to attach it to kvm6 as vdc, so I name the image file kvm6_vdc.qcow2. Note: in the same directory, do not reuse the name of an existing image, or that image will be wiped. The command is:

qemu-img create -f qcow2 /var/lib/libvirt/images/kvm6_vdc.qcow2 5G

The disk image has been created but not yet attached to kvm6. Before attaching it, look at kvm6's configuration file; the disk-related portion is shown below (other settings omitted):

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/kvm6.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/kvm6_vdb.qcow2'/>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>

The configuration file confirms that kvm6 currently has two disks; once we attach the third disk to kvm6, a third disk stanza will appear.

With kvm6 powered on, use the following command to attach the disk to the kvm6 VM. The attach operation below can only be performed while the VM is running.

virsh attach-disk --domain kvm6 --source /var/lib/libvirt/images/kvm6_vdc.qcow2 --subdriver qcow2 --target vdc --targetbus virtio --persistent

As shown, virsh attach-disk attaches a disk to the specified VM. --domain names the VM, --source is the disk file to attach, and --subdriver gives the disk file's format. Since the disk is a qcow2 image, --subdriver must not be omitted here: without it the disk is treated as raw, and once attached the guest would see only the image's currently allocated size (the same size the host reports; the test qcow2 had no preallocated space), so --subdriver qcow2 states the format explicitly. --target specifies which disk device the image should appear as inside the VM. There is a trap here: if the disk has never been used before, the target you specify is usually honored; if the disk was previously used by another VM, the device the guest actually sees may differ from the target (with some probability). That may be hard to picture for now; we will set it aside and explain it in detail at the end of the article. --targetbus virtio selects the virtio bus; my guest is CentOS 7, which supports virtio by default, so I add it, but whether to use it depends on the guest OS. --persistent controls whether the change is permanent: without it the disk is attached only temporarily, and after the VM is powered off and restarted the new disk is gone, meaning the VM's XML configuration file is not modified; with --persistent the XML configuration is updated automatically to include the new disk.
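As a quick check from the host, you can also list the domain's block devices instead of reading the XML (a sketch; kvm6 is this example's domain):

virsh domblklist kvm6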

After running the command above, look at kvm6's XML configuration file again; a stanza for the new disk has appeared:

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/kvm6.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/kvm6_vdb.qcow2'/>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/kvm6_vdc.qcow2'/>
      <target dev='vdc' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </disk>

As shown, the last disk stanza is the new disk we just attached, because we used the --persistent parameter when attaching it. Inside the VM, the disk listing now shows the newly added disk:

[root@kvm6 ~]# fdisk -l

Disk /dev/vda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000ba397

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048     2099199     1048576   83  Linux
/dev/vda2         2099200   104857599    51379200   8e  Linux LVM

Disk /dev/vdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/centos-root: 53.8 GB, 53812920320 bytes, 105103360 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/centos-swap: 4160 MB, 4160749568 bytes, 8126464 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/vdc: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

What remains is to initialize the disk. What you do depends on your needs: for example, create a PV and add it to a volume group, or use the disk as a standalone partition (an LVM sketch follows below). For this demo, vdc is simply used as a plain mount point; remember to configure automatic mounting once everything is set up.
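If the goal were scenario 2, extending an LVM logical volume with the new disk, a minimal sketch would look like this (the volume group centos and the root LV match this VM's layout, but verify them with vgs/lvs first):

pvcreate /dev/vdc                          # initialize the new disk as a physical volume
vgextend centos /dev/vdc                   # add it to the existing volume group
lvextend -r -l +100%FREE /dev/centos/root  # grow the root LV; -r also grows its filesystem

Back to the demo: no partition is created here, and an xfs filesystem is made directly on vdc.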

[root@kvm6 ~]# mkfs.xfs /dev/vdc
meta-data=/dev/vdc               isize=512    agcount=4, agsize=327680 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1310720, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Create a test directory to use as the mount point and mount the new disk:

[root@kvm6 ~]# mkdir /testdisk
[root@kvm6 ~]# mount /dev/vdc /testdisk
[ 1010.692185] XFS (vdc): Mounting V5 Filesystem
[ 1010.711786] XFS (vdc): Ending clean mount
[root@kvm6 ~]# 
[root@kvm6 ~]# 
[root@kvm6 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 1.9G     0  1.9G   0% /dev
tmpfs                    1.9G     0  1.9G   0% /dev/shm
tmpfs                    1.9G  8.5M  1.9G   1% /run
tmpfs                    1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/centos-root   51G  1.3G   49G   3% /
/dev/vda1               1014M  150M  865M  15% /boot
tmpfs                    379M     0  379M   0% /run/user/0
/dev/vdc                 5.0G   33M  5.0G   1% /testdisk

In real use, remember to edit fstab so the disk is mounted automatically (logical volumes are a different story, of course). When setting up automatic mounting, use the filesystem's UUID; that is the most accurate and reliable approach.
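A small sketch of that (the UUID below is a placeholder; take the real one from blkid):

blkid /dev/vdc     # note the UUID of the new filesystem
# then add a line like this to /etc/fstab:
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /testdisk  xfs  defaults  0 0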

The disk is now mounted; next, let's look at how to detach a disk.
If you previously configured automatic mounting, comment out the corresponding fstab entry first, so that a missing device does not break the next boot after the disk is detached.
Once automatic mounting is disabled, unmount the corresponding mount point inside the guest; a small guest-side sketch is shown below.
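A minimal sketch of the guest-side cleanup (the mount point /testdisk is from this example):

umount /testdisk        # unmount inside the guest
# also make sure any /etc/fstab entry for this disk is commented out or removed

With the guest side cleaned up, and with the VM still running, execute the following command on the host: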

virsh detach-disk kvm6 /var/lib/libvirt/images/kvm6_vdc.qcow2 --persistent

After running the command above, the disk is detached immediately and permanently. As with attaching, the --persistent parameter modifies the configuration file, removing the corresponding disk stanza.

A detached disk can also be attached to another VM. For example, I just detached the kvm6_vdc.qcow2 disk from kvm6; now I want to attach it to kvm5. That works, but you may hit a small issue. First, check which disks kvm5 currently has:

[root@cos7 ~]# virsh domblklist kvm5
Target     Source
------------------------------------------------
vda        /var/lib/libvirt/images/kvm5.qcow2
hda        -

As shown, kvm5 has only one disk, corresponding to vda in the VM. Now run the following command to attach the disk detached from kvm6 to kvm5; this attaches it temporarily (no --persistent):

[root@cos7 ~]# virsh attach-disk --domain kvm5 --source /var/lib/libvirt/images/kvm6_vdc.qcow2 --subdriver qcow2 --target vde --targetbus virtio
Disk attached successfully

[root@cos7 ~]# virsh domblklist kvm5
Target     Source
------------------------------------------------
vda        /var/lib/libvirt/images/kvm5.qcow2
vde        /var/lib/libvirt/images/kvm6_vdc.qcow2
hda        -

As shown, kvm6_vdc.qcow2 has been attached to kvm5 with a requested target of vde, and from the host it does show up as vde. Inside kvm5, however, the newly added disk appears not as vde but as vdc:

[root@kvm5 /]# fdisk -l

Disk /dev/vda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000ba397

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048     2099199     1048576   83  Linux
/dev/vda2         2099200   104857599    51379200   8e  Linux LVM

Disk /dev/mapper/centos-root: 48.4 GB, 48444211200 bytes, 94617600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/centos-swap: 4160 MB, 4160749568 bytes, 8126464 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/vdc: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

In earlier experiments, the disk was sometimes recognized as the next device letter in sequence: for example, the VM had only a single vda, the target specified on the host was vdc, but the guest detected the disk as vdb, the second letter; at other times it was detected exactly as the specified target. I could not determine the cause experimentally, so treat this as a caveat. Just remember: if a disk has been used before, then after attaching it with --target the device letter actually seen inside the VM may differ from what the host shows, and the guest's view is what counts. It is best to assign targets in sequence without skipping unused letters: after vda comes vdb, so specify vdb rather than jumping to vdc or vdd, which is more likely to cause problems. If a mismatch does occur, you can try detaching and re-attaching the disk.
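A small sketch of identifying the newly attached disk inside the guest without relying on the device letter (and, as noted earlier, mounting by UUID sidesteps the letter entirely):

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT    # find the new disk by its size rather than its letter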

