
KVM Basics

2016-02-23
wilmosfang
kvm
Original post: http://soft.dog/2016/02/23/kvm-basic/

Preface

KVM (Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 platforms whose CPUs carry the hardware virtualization extensions (Intel VT or AMD-V).

  • KVM provides full virtualization (it requires hardware support: the CPU extensions, enabled in the BIOS)
  • KVM is open source
  • The core of KVM is a kernel module; the userspace component is provided by QEMU

KVM (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure and a processor specific module, kvm-intel.ko or kvm-amd.ko.

Using KVM, one can run multiple virtual machines running unmodified Linux or Windows images. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc.

KVM is open source software. The kernel component of KVM is included in mainline Linux, as of 2.6.20. The userspace component of KVM is included in mainline QEMU, as of 1.3.

This post briefly covers the basics of KVM; for the full details, see the official documentation.


Overview


Environment

[root@kvm-demo data]# cat /etc/redhat-release 
CentOS release 6.7 (Final)
[root@kvm-demo data]# uname -a 
Linux kvm-demo 2.6.32-573.8.1.el6.x86_64 #1 SMP Tue Nov 10 18:01:38 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@kvm-demo data]# egrep --color '(vmx|svm)' /proc/cpuinfo 
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid
[root@kvm-demo data]# 

Note: Make sure the CPU has the virtualization extensions (Intel VT or AMD-V), otherwise KVM cannot run. Also make sure virtualization is enabled in the motherboard BIOS.
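
Beyond the CPU flags, it can also be worth checking that the KVM kernel modules are loaded and that /dev/kvm exists. A quick sketch (Intel shown; use kvm-amd on AMD hardware):

# the kvm module plus kvm_intel or kvm_amd should be listed
lsmod | egrep 'kvm'

# load them by hand if they are missing
modprobe kvm
modprobe kvm-intel

# the device node that guests are run through
ls -l /dev/kvm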


Installation

Install with yum groupinstall 'Virtualization xxx'

Make sure the following package groups are installed (an install sketch follows the list):

  • Virtualization
  • Virtualization Client
  • Virtualization Tools
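If any of these are missing, they can be installed in one go. A minimal sketch (quote group names that contain spaces):

# install the three KVM-related package groups
yum -y groupinstall "Virtualization" "Virtualization Client" "Virtualization Tools"

# start the libvirt daemon and enable it at boot
# (assuming libvirt was pulled in as a dependency, as it normally is)
service libvirtd start
chkconfig libvirtd on

yum grouplist then confirms they appear under Installed Groups: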
[root@kvm-demo ~]# yum grouplist 
Loaded plugins: fastestmirror, refresh-packagekit, security
Setting up Group Process
Loading mirror speeds from cached hostfile
 * base: mirrors.163.com
 * epel: mirrors.yun-idc.com
 * extras: mirrors.skyshe.cn
 * updates: mirrors.163.com
Installed Groups:
   Additional Development
   Base
   CIFS file server
   ...
   ...
   Virtualization
   Virtualization Client
   Virtualization Tools
   Web Server
   X Window System
   iSCSI Storage Client
Installed Language Groups:
   Arabic Support [ar]
   Armenian Support [hy]
   ...
   ...
   Venda Support [ve]
Available Groups:
   Backup Client
   Backup Server
   ...
   ...
   Xfce
Available Language Groups:
   Afrikaans Support [af]
   ...
   ...
   Zulu Support [zu]
Done
[root@kvm-demo ~]# 

You can check exactly what each group contains:

[root@kvm-demo ~]# yum groupinfo Virtualization
Loaded plugins: fastestmirror, refresh-packagekit, security
Setting up Group Process
Loading mirror speeds from cached hostfile
 * base: mirrors.163.com
 * epel: mirrors.yun-idc.com
 * extras: mirrors.skyshe.cn
 * updates: mirrors.163.com

Group: Virtualization
 Description: Provides an environment for hosting virtualized guests.
 Mandatory Packages:
   qemu-kvm
 Optional Packages:
   qemu-kvm-tools
   vios-proxy
[root@kvm-demo ~]# yum groupinfo 'Virtualization Tools'
Loaded plugins: fastestmirror, refresh-packagekit, security
Setting up Group Process
Loading mirror speeds from cached hostfile
 * base: mirrors.pubyun.com
 * epel: mirrors.yun-idc.com
 * extras: mirrors.skyshe.cn
 * updates: mirrors.163.com

Group: Virtualization Tools
 Description: Tools for offline virtual image management.
 Default Packages:
   libguestfs
 Optional Packages:
   libguestfs-bash-completion
   libguestfs-gfs2
   libguestfs-java
   libguestfs-mount
   libguestfs-rescue
   libguestfs-rsync
   libguestfs-tools
   libguestfs-xfs
   virt-v2v
[root@kvm-demo ~]# yum groupinfo 'Virtualization Client'
Loaded plugins: fastestmirror, refresh-packagekit, security
Setting up Group Process
Loading mirror speeds from cached hostfile
 * base: mirrors.pubyun.com
 * epel: mirrors.yun-idc.com
 * extras: mirrors.skyshe.cn
 * updates: mirrors.163.com

Group: Virtualization Client
 Description: Clients for installing and managing virtualization instances.
 Mandatory Packages:
   python-virtinst
   virt-manager
   virt-viewer
 Default Packages:
   virt-top
[root@kvm-demo ~]# 
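
Once the groups are installed, a quick way to confirm that libvirt is up and can talk to KVM is virsh from libvirt-client (assuming both were pulled in, as they normally are):

# hypervisor and library versions; errors here usually mean libvirtd is not running
virsh version

# basic information about the virtualization host
virsh nodeinfo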

Tip: virt-manager is a very handy GUI tool that makes creating virtual machines easy (it is developed by Red Hat in Python).

virt_manager.jpg


Creating a Network Bridge

To give guests the same network standing as the host (so they can use the upstream DHCP), we can build a network bridge.

Configuration

[root@kvm-demo network-scripts]# grep -v "^#" ifcfg-eth0 


DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BRIDGE=br0
[root@kvm-demo network-scripts]# grep -v "^#" ifcfg-br0
DEVICE=br0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.234
NETMASK=255.255.255.0
GATEWAY=192.168.1.2
[root@kvm-demo network-scripts]# 

Restarting the network

Restart the network for the configuration to take effect:

[root@kvm-demo network-scripts]# /etc/init.d/network restart ; ifup br0 
Shutting down interface br0:                               [  OK  ]
Shutting down interface eth0:                              [  OK  ]
Shutting down loopback interface:                          [  OK  ]
Bringing up loopback interface:                            [  OK  ]
Bringing up interface br0:  Device br0 does not seem to be present, delaying initialization.
                                                           [FAILED]
Bringing up interface eth0:                                [  OK  ]
Determining if ip address 192.168.1.234 is already in use for device br0...
[root@kvm-demo network-scripts]# 

Note: It is best to run /etc/init.d/network restart ; ifup br0 as one command, because restarting the network will not necessarily bring br0 up with it. If you are working remotely, there is a real risk of losing contact with the host at this point: the physical NIC no longer carries the IP and br0 is not up. That is a serious mess, and the only way back is to recover through the server console in the machine room.
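
If you have to do this over SSH, one possible safety net (my own sketch, not part of the original procedure) is to detach the restart from the SSH session so a dropped connection cannot kill it halfway through:

# run the restart and bridge bring-up immune to SSH hangup;
# output goes to a log you can inspect once you reconnect
nohup sh -c '/etc/init.d/network restart; ifup br0' >/tmp/net-restart.log 2>&1 &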

A new bridge, br0, now appears:

[root@kvm-demo network-scripts]# ifconfig 
br0       Link encap:Ethernet  HWaddr E8:39:35:53:4B:E9  
          inet addr:192.168.1.234  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::ea39:35ff:fe53:4be9/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1399 errors:0 dropped:0 overruns:0 frame:0
          TX packets:882 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:250070 (244.2 KiB)  TX bytes:118534 (115.7 KiB)

eth0      Link encap:Ethernet  HWaddr E8:39:35:53:4B:E9  
          inet6 addr: fe80::ea39:35ff:fe53:4be9/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:67254 errors:0 dropped:0 overruns:0 frame:0
          TX packets:39584 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:13622758 (12.9 MiB)  TX bytes:5178583 (4.9 MiB)
          Interrupt:20 Memory:fb000000-fb020000 

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

virbr0    Link encap:Ethernet  HWaddr 52:54:00:83:A4:A0  
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

[root@kvm-demo network-scripts]# 
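
To confirm that eth0 is really enslaved to br0, brctl from the bridge-utils package shows the bridge membership (assuming bridge-utils is installed, which it normally is on such a host):

# eth0 should be listed in the interfaces column for br0
brctl show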

Creating a Virtual Machine

Start virt-manager:

[root@kvm-demo data]# virt-manager 
Xlib:  extension "RANDR" missing on display "localhost:11.0".
[root@kvm-demo data]# 

Open the virt-manager management interface

kvm1.png

Set the virtual machine name (this is not the hostname) and choose the installation source

kvm2.png

Specify the image location and the OS type

kvm3.png

Specify the memory size and the number of CPUs

kvm4.png

Specify the disk size, the allocation method, and the storage location

kvm5.png

Choose the network type; use the bridge br0 configured earlier

kvm6.png

kvm7.png

On this screen the hardware configuration can be revised once more

kvm8.png

Then configure the system environment (software selection, partition layout, time zone and security settings, kdump, user passwords, all as needed) and install the system

kvm9.png

Once installation finishes and the VM reboots, you can log in to the system

kvm10.png
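
For reference, roughly the same VM can be created non-interactively with virt-install, which ships in the python-virtinst package listed earlier. This is only a sketch: the VM name, ISO path, disk path, and sizes are made-up examples:

# 1 GiB RAM, 1 vCPU, 20 GiB disk, bridged to br0, installed from a local ISO
virt-install \
  --name demo-vm \
  --ram 1024 \
  --vcpus 1 \
  --cdrom /data/CentOS-6.7-x86_64-minimal.iso \
  --disk path=/data/demo-vm.img,size=20 \
  --network bridge=br0 \
  --graphics vnc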


Configuring the Network

Because the VM joins the LAN through the bridge and no IP was set by hand, it initially picks up an address via DHCP. A server should not run on a dynamic IP, though; it is best to assign a static one, so the network configuration needs a small change.

Current network status:

[root@docker network-scripts]# ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.115  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::5054:ff:febf:a922  prefixlen 64  scopeid 0x20<link>
        ether 52:54:00:bf:a9:22  txqueuelen 1000  (Ethernet)
        RX packets 185900  bytes 286675297 (273.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 107256  bytes 6279256 (5.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 2  bytes 96 (96.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 96 (96.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:f1:ae:c5  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@docker network-scripts]#

Modify the configuration and restart the network:

[root@docker ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
TYPE="Ethernet"
#BOOTPROTO="dhcp"
BOOTPROTO="static"
DEFROUTE="yes"
PEERDNS="yes"
PEERROUTES="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_PEERDNS="yes"
IPV6_PEERROUTES="yes"
IPV6_FAILURE_FATAL="no"
NAME="eth0"
UUID="ae8d4dad-49b5-47ef-857d-55ef2376646f"
DEVICE="eth0"
ONBOOT="yes"
IPADDR0=192.168.1.211
PREFIX0=24
GATEWAY0=192.168.1.2
DNS0=210.22.84.3
DNS1=114.114.114.114
[root@docker ~]# /etc/init.d/network restart 
Restarting network (via systemctl):                        [  OK  ]
[root@docker ~]# ip a 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:bf:a9:22 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.211/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:febf:a922/64 scope link 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 52:54:00:f1:ae:c5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 500
    link/ether 52:54:00:f1:ae:c5 brd ff:ff:ff:ff:ff:ff
[root@docker ~]#
[root@docker ~]# ping www.baidu.com
PING www.a.shifen.com (61.135.169.121) 56(84) bytes of data.
64 bytes from 61.135.169.121: icmp_seq=1 ttl=53 time=25.9 ms
64 bytes from 61.135.169.121: icmp_seq=2 ttl=53 time=25.9 ms
64 bytes from 61.135.169.121: icmp_seq=3 ttl=53 time=26.4 ms
^C
--- www.a.shifen.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 25.954/26.131/26.456/0.265 ms
[root@docker ~]# 

The VM can now be used like any ordinary server.

Later posts will walk through virtual machine management in more detail.


Changing the Runlevel

Check the current runlevel:

[root@docker ~]# cat /etc/inittab 
# inittab is no longer used when using systemd.
#
# ADDING CONFIGURATION HERE WILL HAVE NO EFFECT ON YOUR SYSTEM.
#
# Ctrl-Alt-Delete is handled by /usr/lib/systemd/system/ctrl-alt-del.target
#
# systemd uses 'targets' instead of runlevels. By default, there are two main targets:
#
# multi-user.target: analogous to runlevel 3
# graphical.target: analogous to runlevel 5
#
# To view current default target, run:
# systemctl get-default
#
# To set a default target, run:
# systemctl set-default TARGET.target
#
[root@docker ~]# systemctl get-default
graphical.target
[root@docker ~]# runlevel 
N 5
[root@docker ~]#

Tip: CentOS 7 no longer uses /etc/inittab to set the runlevel; this is managed by systemd instead.

[root@docker ~]# ll /lib/systemd/system/runlevel3.target
lrwxrwxrwx. 1 root root 17 Feb 23 10:08 /lib/systemd/system/runlevel3.target -> multi-user.target
[root@docker ~]# ll /lib/systemd/system/runlevel5.target
lrwxrwxrwx. 1 root root 16 Feb 23 10:08 /lib/systemd/system/runlevel5.target -> graphical.target
[root@docker ~]# 

Changing the default runlevel

systemctl set-default multi-user.target sets the default runlevel after boot to 3 (multi-user):

[root@docker ~]# ll /etc/systemd/system/default.target
lrwxrwxrwx. 1 root root 36 Feb 23 10:25 /etc/systemd/system/default.target -> /lib/systemd/system/graphical.target
[root@docker ~]# ll /lib/systemd/system/multi-user.target
-rw-r--r--. 1 root root 492 Nov 20 12:49 /lib/systemd/system/multi-user.target
[root@docker ~]# systemctl set-default multi-user.target
Removed symlink /etc/systemd/system/default.target.
Created symlink from /etc/systemd/system/default.target to /usr/lib/systemd/system/multi-user.target.
[root@docker ~]# ll /etc/systemd/system/default.target
lrwxrwxrwx. 1 root root 41 Feb 23 13:46 /etc/systemd/system/default.target -> /usr/lib/systemd/system/multi-user.target
[root@docker ~]# 
[root@docker ~]# runlevel 
N 5
[root@docker ~]# 

Switching the current runlevel

[root@docker ~]# runlevel 
N 5
[root@docker ~]# systemctl isolate multi-user.target
PolicyKit daemon disconnected from the bus.
We are no longer a registered authentication agent.
PolicyKit daemon reconnected to bus.
Attempting to re-register as an authentication agent.
We are now a registered authentication agent.
[root@docker ~]# 
[root@docker ~]# runlevel 
5 3
[root@docker ~]# systemctl get-default
multi-user.target
[root@docker ~]# 

After switching the current runlevel, the graphical interface gives way to a CLI.

kvm11.png
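
Switching back works the same way; a short sketch:

# return the running system to the graphical target
systemctl isolate graphical.target

# and make it the boot default again if desired
systemctl set-default graphical.target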


Command Summary

  • cat /etc/redhat-release
  • uname -a
  • egrep --color '(vmx|svm)' /proc/cpuinfo
  • yum grouplist
  • yum groupinfo Virtualization
  • yum groupinfo 'Virtualization Tools'
  • yum groupinfo 'Virtualization Client'
  • grep -v "^#" ifcfg-eth0
  • grep -v "^#" ifcfg-br0
  • /etc/init.d/network restart ; ifup br0
  • virt-manager
  • cat /etc/sysconfig/network-scripts/ifcfg-eth0
  • /etc/init.d/network restart
  • cat /etc/inittab
  • ll /lib/systemd/system/runlevel3.target
  • ll /lib/systemd/system/runlevel5.target
  • ll /lib/systemd/system/multi-user.target
  • systemctl set-default multi-user.target
  • ll /etc/systemd/system/default.target
  • systemctl isolate multi-user.target
  • runlevel
  • systemctl get-default
