 
Introduction
Debian 12 Bookworm is the latest stable release of the Debian operating system, featuring various improvements and updates. One of the features covered in this guide is Kernel-based Virtual Machine (KVM), the Linux kernel's built-in virtualization solution.
- Overview
- Install KVM
- Create Virtual Machine
- Create Virtual Machine (GUI)
- Basic Operation for VM
- VNC connection Setting
- VNC connection (Client) : Debian and Windows
- Management tools for VM
- Nested KVM
- Live Migration
- Storage Migration
- UEFI boot for Virtual Machine
- Enable TPM 2.0
- GPU Passthrough
- Use VirtualBMC
Overview
What
Debian 12 Bookworm is the twelfth major release of the Debian operating system, known for its stability and wide range of software packages. It includes updates to the Linux kernel and various software components, including KVM for virtualization.
Who
Debian 12 Bookworm is developed by the Debian Project, a community-driven project with contributors from around the world. It is intended for users who need a reliable and stable operating system, including individuals, developers, and organizations.
Where
Debian 12 Bookworm can be downloaded from the official Debian website and is available for installation on a variety of hardware platforms, including personal computers, servers, and embedded systems.
When
Debian 12 Bookworm was released in June 2023, following a period of extensive testing and development by the Debian community.
Why
Debian 12 Bookworm includes several new features and improvements, making it a compelling choice for users. Below are the pros and cons of using Debian 12 with KVM:
| Pros | Cons | 
|---|---|
| Highly stable and reliable | Requires familiarity with Linux | 
| Wide range of software packages | May lack the latest software versions | 
| Strong community support | Complex setup for beginners | 
How
Setting up Debian 12 Bookworm with KVM involves the following steps:
| Step | Description | 
|---|---|
| 1 | Install Debian 12 on your machine. | 
| 2 | Install KVM and related packages using the package manager. | 
| 3 | Configure KVM and create virtual machines as needed. | 
Consequences
Using Debian 12 Bookworm with KVM can have several consequences:
| Positive | Negative | 
|---|---|
| A stable, reliable virtualization platform backed by strong community support | Initial setup and ongoing management require some Linux expertise | 
Conclusion
Debian 12 Bookworm with KVM offers a stable and reliable platform for virtualization. While it may require some technical expertise to set up and manage, its strong community support and extensive software repository make it a valuable choice for various use cases. Users can benefit from its enhanced virtualization capabilities, though they should be aware of the potential complexities and performance considerations.
Install KVM
This section sets up virtualization with KVM (Kernel-based Virtual Machine) + QEMU. It requires a CPU that supports the Intel VT or AMD-V virtualization extensions.
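Before installing, you can confirm that the CPU exposes these extensions. This quick check is an addition to the original procedure and assumes the standard Linux [/proc/cpuinfo] interface:

# count CPU threads advertising vmx (Intel VT) or svm (AMD-V); a non-zero result means the feature is available
root@bizantum:~# egrep -c '(vmx|svm)' /proc/cpuinfo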
Step [1] Install required packages.
                     
root@bizantum:~# apt -y install qemu-kvm libvirt-daemon-system libvirt-daemon virtinst bridge-utils libosinfo-bin
# confirm modules are loaded
root@bizantum:~# lsmod | grep kvm
kvm_intel             380928  0
kvm                  1142784  1 kvm_intel
irqbypass              16384  1 kvm
                    
                
        Step [2] Configure bridge networking for KVM virtual machines.
                     
root@bizantum:~# ip address
1: lo:  <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0:  <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:88:7f:bc brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.30/24 brd 10.0.0.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe88:7fbc/64 scope link
       valid_lft forever preferred_lft forever
root@bizantum:~# vi /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
allow-hotplug enp1s0
# change existing setting like follows
iface enp1s0 inet manual
#address 10.0.0.30
#network 10.0.0.0
#netmask 255.255.255.0
#broadcast 10.0.0.255
#gateway 10.0.0.1
#dns-nameservers 10.0.0.10
# add bridge interface setting
# for the [hwaddress] line, specify the same MAC address as the physical interface ([enp1s0] in this example)
# set this parameter explicitly if a different MAC address is assigned to the bridge interface
auto br0
iface br0 inet static
address 10.0.0.30
network 10.0.0.0
netmask 255.255.255.0
broadcast 10.0.0.255
gateway 10.0.0.1
dns-nameservers 10.0.0.10
bridge_ports enp1s0
bridge_stp off
hwaddress ether 52:54:00:88:7f:bc
root@bizantum:~# reboot
root@bizantum:~# ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
    link/ether 52:54:00:88:7f:bc brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe88:7fbc/64 scope link
       valid_lft forever preferred_lft forever
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:88:7f:bc brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.30/24 brd 10.0.0.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::6420:42ff:fe81:e8fe/64 scope link
       valid_lft forever preferred_lft forever
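# (addition) optionally confirm that [enp1s0] is enslaved to [br0]; the output should show [master br0]
root@bizantum:~# bridge link show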
                    
                
      Create Virtual Machine
Install a guest OS and create a virtual machine. This example creates a VM by installing Debian 12.
Step [1] In this example, download a Debian 12 ISO file to a directory first and install the guest OS from that ISO in text mode. This works on the local console or over a remote connection such as SSH. Virtual machine images are placed under [/var/lib/libvirt/images] by default as a storage pool, but this example creates and uses a new storage pool. (Any location you like is fine.)
                     
# create a Storage Pool directory
root@bizantum:~# mkdir -p /var/kvm/images
root@bizantum:~# virt-install \
--name debian12 \
--ram 4096 \
--disk path=/var/kvm/images/debian12.img,size=20 \
--vcpus 2 \
--os-variant debian11 \
--network bridge=br0 \
--graphics none \
--console pty,target_type=serial \
--location /home/debian-12.0.0-amd64-DVD-1.iso \
--extra-args 'console=ttyS0,115200n8 serial' 
# installation starts
Starting install...
Retrieving file vmlinuz...                                  | 6.5 MB  00:00
Retrieving file initrd.gz...                                |  17 MB  00:00
Allocating 'debian12.img'                                   |  20 GB  00:00
.....
.....
# after this, installation proceeds with the common procedure
                    
                
        The options used above mean the following. There are many others; see [man virt-install] for the full list.
| Option | Description | 
|---|---|
| --name | specify the name of the virtual machine | 
| --ram | specify the amount of memory of the virtual machine (in MiB) | 
| --disk path=xxx,size=xxx | [path=xxx] : specify the location of the virtual machine's disk (default is [/var/lib/libvirt/images]) [size=xxx] : specify the disk size in GB | 
| --vcpus | specify the number of virtual CPUs | 
| --os-variant | specify the kind of guest OS; show the list of available values with [# osinfo-query os] | 
| --network | specify the network type of the virtual machine | 
| --graphics | specify the kind of graphics; possible values include spice, vnc, none and so on | 
| --console | specify the console type | 
| --location | specify the location of the installation source | 
| --extra-args | specify parameters passed to the kernel | 
Step [2] Installation in text mode follows the common installation procedure.
                     
Debian GNU/Linux 12 debian ttyS0
debian login: root
Password:
Linux debian 6.1.0-9-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.27-1 (2023-05-08) x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
root@debian:~#
                    
                
        Step [3] Switch from the guest OS back to the host OS with the Ctrl + ] key. Switch from the host OS to the guest OS with the command [virsh console (name of virtual machine)].
                    
root@debian:~#     # Ctrl + ] key
root@bizantum:~#        # Host's console
root@bizantum:~# virsh console debian12    # switch to Guest's console
Connected to domain 'debian12'
Escape character is ^]     # Enter key
root@debian:~#     # Guest's console
                    
                
        Step [4] It's easy to clone an existing VM with the command below.
                    
root@bizantum:~# virt-clone --original debian12 --name template --file /var/kvm/images/template.img
Allocating 'template.img'                                   | 1.6 GB  00:00 ...
Clone 'template' created successfully.
# disk image
root@bizantum:~# ll /var/kvm/images/template.img
-rw------- 1 root root 1979973632 Jun 19 18:57 /var/kvm/images/template.img
# configuration file
root@bizantum:~# ll /etc/libvirt/qemu/template.xml
-rw------- 1 root root 6685 Jun 19 18:57 /etc/libvirt/qemu/template.xml
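# (addition) if the clone will serve as a template for more copies, machine-specific state
# (SSH host keys, machine-id, logs) can be reset with [virt-sysprep] from the
# [libguestfs-tools] package covered in the [Management tools for VM] section below
root@bizantum:~# virt-sysprep -d template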
                    
                
      Create Virtual Machine (GUI)
If you have installed a desktop environment, it's possible to create a virtual machine from the GUI. This example shows how to install Windows Server 2022 via the GUI.
Step [1] Install required packages.
                     
root@bizantum:~# apt -y install virt-manager qemu-system
                    
                
        Step [2] Start the desktop and run [Virtual Machine Manager] with root privileges, then click the [New] button (the PC icon at the upper left) to open the wizard for creating a new virtual machine.
 
        Step [3] Specify the installation source. This example selects local install media.
 
        Step [4] Select the installation media or ISO image, and specify the OS type and version. Generally, they are detected automatically from the installation media.
 
        Step [5] Specify the amount of memory and the number of virtual CPUs.
 
        Step [6] Specify the disk size, and also its path if you use a custom location (default is [/var/lib/libvirt/images]).
 
        Step [7] Set the virtual machine's name and confirm your selections.
 
        Step [8] The Windows Server 2022 installer starts.
 
        Step [9] Installation has finished and Windows Server 2022 is now running.
 
      Basic Operation for VM
This is a basic operation example with the virsh command, which is included in the libvirt package.
Step [1] Start a virtual machine.
                     
# Start Virtual Machine [debian12]
root@bizantum:~# virsh start debian12
Domain debian12 started
# start and connect to console of [debian12]
root@bizantum:~# virsh start debian12 --console
Domain debian12 started
Connected to domain debian12
                    
                
        Step [2] Stop a virtual machine.
                     
# Stop Virtual Machine [debian12]
root@bizantum:~# virsh shutdown debian12
Domain debian12 is being shutdown
# Stop Virtual Machine [debian12] forcibly
root@bizantum:~# virsh destroy debian12
Domain debian12 destroyed
                    
                
        Step [3] Set auto-start for virtual machines.
                    
# Enable auto-start for [debian12]
root@bizantum:~# virsh autostart debian12
Domain debian12 marked as autostarted
# Disable auto-start for [debian12]
root@bizantum:~# virsh autostart --disable debian12
Domain debian12 unmarked as autostarted
                    
                
        Step [4] List all virtual machines.
                    
# List all active Virtual Machines
root@bizantum:~# virsh list
 Id    Name             State
----------------------------------------
 2     debian12         running
# List all Virtual Machines including inactive ones
root@bizantum:~# virsh list --all
 Id    Name             State
----------------------------------------
 -     debian12         shut off
 -     debian12_org     shut off
 -     Win2k22          shut off
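# (addition) show detailed information for a single VM, including its state and autostart flag
root@bizantum:~# virsh dominfo debian12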
                    
                
        Step [5] Switch consoles: move from the guest OS to the host OS with the Ctrl + ] key, and from the host OS to the guest OS with the command [virsh console (name of virtual machine)].
                    
# connect to [debian12]
root@bizantum:~# virsh console debian12 
Connected to domain debian12
Escape character is ^]    # Enter key
Debian GNU/Linux 12 debian ttyS0
debian login::            # switched on Guest
Password:
root@debian:~#            # Ctrl + ] key
root@bizantum:~#               # switched on Host
                    
                
        Step [6] There are many other options, shown below; try them as needed.
                    
root@bizantum:~# virsh --help
virsh [options]... [<command_string>]
virsh [options]... <command> [args...]
  options:
    -c | --connect=URI      hypervisor connection URI
    -d | --debug=NUM        debug level [0-4]
    -e | --escape <char>    set escape sequence for console
    -h | --help             this help
    -k | --keepalive-interval=NUM
                            keepalive interval in seconds, 0 for disable
    -K | --keepalive-count=NUM
                            number of possible missed keepalive messages
    -l | --log=FILE         output logging to file
    -q | --quiet            quiet mode
    -r | --readonly         connect readonly
    -t | --timing           print timing information
    -v                      short version
    -V                      long version
         --version[=TYPE]   version, TYPE is short or long (default short)
  commands (non interactive mode):
 Domain Management (help keyword 'domain')
    attach-device                  attach device from an XML file
    attach-disk                    attach disk device
    attach-interface               attach network interface
    autostart                      autostart a domain
    blkdeviotune                   Set or query a block device I/O tuning parameters.
    blkiotune                      Get or set blkio parameters
    blockcommit                    Start a block commit operation.
    blockcopy                      Start a block copy operation.
    blockjob                       Manage active block operations
    blockpull                      Populate a disk from its backing image.
    blockresize                    Resize block device of domain.
    change-media                   Change media of CD or floppy drive
    console                        connect to the guest console
    cpu-baseline                   compute baseline CPU
    cpu-compare                    compare host CPU with a CPU described by an XML file
    cpu-stats                      show domain cpu statistics
    create                         create a domain from an XML file
    define                         define (but don't start) a domain from an XML file
    desc                           show or set domain's description or title
    destroy                        destroy (stop) a domain
    detach-device                  detach device from an XML file
    detach-disk                    detach disk device
    detach-interface               detach network interface
    domdisplay                     domain display connection URI
    domfsfreeze                    Freeze domain's mounted filesystems.
    domfsthaw                      Thaw domain's mounted filesystems.
    domfsinfo                      Get information of domain's mounted filesystems.
    domfstrim                      Invoke fstrim on domain's mounted filesystems.
    domhostname                    print the domain's hostname
    domid                          convert a domain name or UUID to domain id
    domif-setlink                  set link state of a virtual interface
    domiftune                      get/set parameters of a virtual interface
    domjobabort                    abort active domain job
    domjobinfo                     domain job information
    domname                        convert a domain id or UUID to domain name
    domrename                      rename a domain
    dompmsuspend                   suspend a domain gracefully using power management functions
    dompmwakeup                    wakeup a domain from pmsuspended state
    domuuid                        convert a domain name or id to domain UUID
    domxml-from-native             Convert native config to domain XML
    domxml-to-native               Convert domain XML to native config
    dump                           dump the core of a domain to a file for analysis
    dumpxml                        domain information in XML
    edit                           edit XML configuration for a domain
    event                          Domain Events
    inject-nmi                     Inject NMI to the guest
    iothreadinfo                   view domain IOThreads
    iothreadpin                    control domain IOThread affinity
    iothreadadd                    add an IOThread to the guest domain
    iothreaddel                    delete an IOThread from the guest domain
    send-key                       Send keycodes to the guest
    send-process-signal            Send signals to processes
    lxc-enter-namespace            LXC Guest Enter Namespace
    managedsave                    managed save of a domain state
    managedsave-remove             Remove managed save of a domain
    managedsave-edit               edit XML for a domain's managed save state file
    managedsave-dumpxml            Domain information of managed save state file in XML
    managedsave-define             redefine the XML for a domain's managed save state file
    memtune                        Get or set memory parameters
    perf                           Get or set perf event
    metadata                       show or set domain's custom XML metadata
    migrate                        migrate domain to another host
    migrate-setmaxdowntime         set maximum tolerable downtime
    migrate-getmaxdowntime         get maximum tolerable downtime
    migrate-compcache              get/set compression cache size
    migrate-setspeed               Set the maximum migration bandwidth
    migrate-getspeed               Get the maximum migration bandwidth
    migrate-postcopy               Switch running migration from pre-copy to post-copy
    numatune                       Get or set numa parameters
    qemu-attach                    QEMU Attach
    qemu-monitor-command           QEMU Monitor Command
    qemu-monitor-event             QEMU Monitor Events
    qemu-agent-command             QEMU Guest Agent Command
    reboot                         reboot a domain
    reset                          reset a domain
    restore                        restore a domain from a saved state in a file
    resume                         resume a domain
    save                           save a domain state to a file
    save-image-define              redefine the XML for a domain's saved state file
    save-image-dumpxml             saved state domain information in XML
    save-image-edit                edit XML for a domain's saved state file
    schedinfo                      show/set scheduler parameters
    screenshot                     take a screenshot of a current domain console and store it into a file
    set-lifecycle-action           change lifecycle actions
    set-user-password              set the user password inside the domain
    setmaxmem                      change maximum memory limit
    setmem                         change memory allocation
    setvcpus                       change number of virtual CPUs
    shutdown                       gracefully shutdown a domain
    start                          start a (previously defined) inactive domain
    suspend                        suspend a domain
    ttyconsole                     tty console
    undefine                       undefine a domain
    update-device                  update device from an XML file
    vcpucount                      domain vcpu counts
    vcpuinfo                       detailed domain vcpu information
    vcpupin                        control or query domain vcpu affinity
    emulatorpin                    control or query domain emulator affinity
    vncdisplay                     vnc display
    guestvcpus                     query or modify state of vcpu in the guest (via agent)
    setvcpu                        attach/detach vcpu or groups of threads
    domblkthreshold                set the threshold for block-threshold event for a given block device or it's backing chain element
 Domain Monitoring (help keyword 'monitor')
    domblkerror                    Show errors on block devices
    domblkinfo                     domain block device size information
    domblklist                     list all domain blocks
    domblkstat                     get device block stats for a domain
    domcontrol                     domain control interface state
    domif-getlink                  get link state of a virtual interface
    domifaddr                      Get network interfaces' addresses for a running domain
    domiflist                      list all domain virtual interfaces
    domifstat                      get network interface stats for a domain
    dominfo                        domain information
    dommemstat                     get memory statistics for a domain
    domstate                       domain state
    domstats                       get statistics about one or multiple domains
    domtime                        domain time
    list                           list domains
 Host and Hypervisor (help keyword 'host')
    allocpages                     Manipulate pages pool size
    capabilities                   capabilities
    cpu-models                     CPU models
    domcapabilities                domain capabilities
    freecell                       NUMA free memory
    freepages                      NUMA free pages
    hostname                       print the hypervisor hostname
    maxvcpus                       connection vcpu maximum
    node-memory-tune               Get or set node memory parameters
    nodecpumap                     node cpu map
    nodecpustats                   Prints cpu stats of the node.
    nodeinfo                       node information
    nodememstats                   Prints memory stats of the node.
    nodesuspend                    suspend the host node for a given time duration
    sysinfo                        print the hypervisor sysinfo
    uri                            print the hypervisor canonical URI
    version                        show version
 Interface (help keyword 'interface')
    iface-begin                    create a snapshot of current interfaces settings, which can be later committed (iface-commit) or restored (iface-rollback)
    iface-bridge                   create a bridge device and attach an existing network device to it
    iface-commit                   commit changes made since iface-begin and free restore point
    iface-define                   define an inactive persistent physical host interface or modify an existing persistent one from an XML file
    iface-destroy                  destroy a physical host interface (disable it / "if-down")
    iface-dumpxml                  interface information in XML
    iface-edit                     edit XML configuration for a physical host interface
    iface-list                     list physical host interfaces
    iface-mac                      convert an interface name to interface MAC address
    iface-name                     convert an interface MAC address to interface name
    iface-rollback                 rollback to previous saved configuration created via iface-begin
    iface-start                    start a physical host interface (enable it / "if-up")
    iface-unbridge                 undefine a bridge device after detaching its slave device
    iface-undefine                 undefine a physical host interface (remove it from configuration)
 Network Filter (help keyword 'filter')
    nwfilter-define                define or update a network filter from an XML file
    nwfilter-dumpxml               network filter information in XML
    nwfilter-edit                  edit XML configuration for a network filter
    nwfilter-list                  list network filters
    nwfilter-undefine              undefine a network filter
 Networking (help keyword 'network')
    net-autostart                  autostart a network
    net-create                     create a network from an XML file
    net-define                     define an inactive persistent virtual network or modify an existing persistent one from an XML file
    net-destroy                    destroy (stop) a network
    net-dhcp-leases                print lease info for a given network
    net-dumpxml                    network information in XML
    net-edit                       edit XML configuration for a network
    net-event                      Network Events
    net-info                       network information
    net-list                       list networks
    net-name                       convert a network UUID to network name
    net-start                      start a (previously defined) inactive network
    net-undefine                   undefine a persistent network
    net-update                     update parts of an existing network's configuration
    net-uuid                       convert a network name to network UUID
 Node Device (help keyword 'nodedev')
    nodedev-create                 create a device defined by an XML file on the node
    nodedev-destroy                destroy (stop) a device on the node
    nodedev-detach                 detach node device from its device driver
    nodedev-dumpxml                node device details in XML
    nodedev-list                   enumerate devices on this host
    nodedev-reattach               reattach node device to its device driver
    nodedev-reset                  reset node device
    nodedev-event                  Node Device Events
 Secret (help keyword 'secret')
    secret-define                  define or modify a secret from an XML file
    secret-dumpxml                 secret attributes in XML
    secret-event                   Secret Events
    secret-get-value               Output a secret value
    secret-list                    list secrets
    secret-set-value               set a secret value
    secret-undefine                undefine a secret
 Snapshot (help keyword 'snapshot')
    snapshot-create                Create a snapshot from XML
    snapshot-create-as             Create a snapshot from a set of args
    snapshot-current               Get or set the current snapshot
    snapshot-delete                Delete a domain snapshot
    snapshot-dumpxml               Dump XML for a domain snapshot
    snapshot-edit                  edit XML for a snapshot
    snapshot-info                  snapshot information
    snapshot-list                  List snapshots for a domain
    snapshot-parent                Get the name of the parent of a snapshot
    snapshot-revert                Revert a domain to a snapshot
 Storage Pool (help keyword 'pool')
    find-storage-pool-sources-as   find potential storage pool sources
    find-storage-pool-sources      discover potential storage pool sources
    pool-autostart                 autostart a pool
    pool-build                     build a pool
    pool-create-as                 create a pool from a set of args
    pool-create                    create a pool from an XML file
    pool-define-as                 define a pool from a set of args
    pool-define                    define an inactive persistent storage pool or modify an existing persistent one from an XML file
    pool-delete                    delete a pool
    pool-destroy                   destroy (stop) a pool
    pool-dumpxml                   pool information in XML
    pool-edit                      edit XML configuration for a storage pool
    pool-info                      storage pool information
    pool-list                      list pools
    pool-name                      convert a pool UUID to pool name
    pool-refresh                   refresh a pool
    pool-start                     start a (previously defined) inactive pool
    pool-undefine                  undefine an inactive pool
    pool-uuid                      convert a pool name to pool UUID
    pool-event                     Storage Pool Events
 Storage Volume (help keyword 'volume')
    vol-clone                      clone a volume.
    vol-create-as                  create a volume from a set of args
    vol-create                     create a vol from an XML file
    vol-create-from                create a vol, using another volume as input
    vol-delete                     delete a vol
    vol-download                   download volume contents to a file
    vol-dumpxml                    vol information in XML
    vol-info                       storage vol information
    vol-key                        returns the volume key for a given volume name or path
    vol-list                       list vols
    vol-name                       returns the volume name for a given volume key or path
    vol-path                       returns the volume path for a given volume name or key
    vol-pool                       returns the storage pool for a given volume key or path
    vol-resize                     resize a vol
    vol-upload                     upload file contents to a volume
    vol-wipe                       wipe a vol
 Virsh itself (help keyword 'virsh')
    cd                             change the current directory
    echo                           echo arguments
    exit                           quit this interactive terminal
    help                           print help
    pwd                            print the current directory
    quit                           quit this interactive terminal
    connect                        (re)connect to hypervisor
  (specify help <group> for details about the commands in the group)
  (specify help <command> for details about the command)
                    
                
      VNC connection Setting
Configure VNC so that you can connect to virtual machines with a VNC client.
Step [1] Edit an existing virtual machine's configuration and start the virtual machine with VNC as follows. The examples on this site create virtual machines without graphics, so it's fine to add the settings below; however, if you created the virtual machine with graphics, replace the existing [<graphics>***] and [<video>***] sections in the configuration file.
                     
# edit configuration of VM
root@bizantum:~# virsh edit debian12
<domain type='kvm'>
  <name>debian12</name>
  <uuid>675bbee5-76d4-4611-b07e-08bb83740e1a</uuid>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://debian.org/debian/11"/>
    </libosinfo:libosinfo>
  .....
  .....
    # add the following
    # set any password in the [passwd=***] attribute for VNC connection
    <graphics type='vnc' port='5900' autoport='no' listen='0.0.0.0' passwd='password'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='virtio' vram='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </rng>
  </devices>
</domain>
Domain 'debian12' XML configuration edited.
root@bizantum:~# virsh start debian12
Domain debian12 started
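# (addition) confirm the VNC display assigned to the VM with [vncdisplay];
# display [:0] corresponds to TCP port 5900
root@bizantum:~# virsh vncdisplay debian12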
                    
                
        Step [2] That's all. Refer to the next section to connect to virtual machines from a VNC client.
Step [3] By the way, if you'd like to enable VNC when first creating a virtual machine, specify options as follows. This makes it possible to install a guest OS that requires GUI installation, like Windows, over VNC without installing a desktop environment on the KVM host computer.
                    
root@bizantum:~# virt-install \
--name Win2k22 \
--ram 6144 \
--disk path=/var/kvm/images/Win2k22.img,size=100 \
--vcpus 4 \
--os-variant win2k22 \
--network bridge=br0 \
--graphics vnc,listen=0.0.0.0,password=password \
--video virtio \
--cdrom /home/Win2022_EN-US_20348.169.210806-2348.fe.iso
                    
                
      Management tools for VM
Install useful tools for virtual machine management.
Step [1] Install required packages.
                     
root@bizantum:~# apt -y install libguestfs-tools
                    
                
        Step [2] Get an official OS image and create a virtual machine from it. (If you'd like to create a VM by running an OS installer instead, refer to Step [1] of the [Create Virtual Machine] section.)
                     
# display available OS template
root@bizantum:~# virt-builder -l
.....
.....
debian-6                 x86_64     Debian 6 (Squeeze)
debian-7                 sparc64    Debian 7 (Wheezy) (sparc64)
debian-7                 x86_64     Debian 7 (wheezy)
debian-8                 x86_64     Debian 8 (jessie)
debian-9                 x86_64     Debian 9 (stretch)
debian-10                x86_64     Debian 10 (buster)
debian-11                x86_64     Debian 11 (bullseye)
debian-12                x86_64     Debian 12 (bookworm)
.....
.....
# create an image of Debian 12
root@bizantum:~# virt-builder debian-12 --format qcow2 --size 10G -o /var/kvm/images/debian-12.qcow2 --root-password password:myrootpassword
[   9.2] Downloading: http://builder.libguestfs.org/debian-12.xz
[  67.6] Planning how to build this image
[  67.6] Uncompressing
[  71.0] Resizing (using virt-resize) to expand the disk to 10.0G
[  90.1] Opening the new disk
[  93.1] Setting a random seed
[  93.2] Setting passwords
[  93.8] Finishing off
                   Output file: /var/kvm/images/debian-12.qcow2
                   Output size: 10.0G
                 Output format: qcow2
            Total usable space: 9.8G
                    Free space: 8.6G (87%)
# to configure VM with the image above, run virt-install
root@bizantum:~# virt-install \
--name debian-12 \
--ram 4096 \
--disk path=/var/kvm/images/debian-12.qcow2 \
--vcpus 2 \
--os-variant debian11 \
--network bridge=br0 \
--graphics none \
--noautoconsole \
--boot hd \
--noreboot \
--import 
Starting install...
Creating domain... 
Domain creation completed.
You can restart your domain by running:
  virsh --connect qemu:///system start debian-12
                    
                
        Step [3] [ls] a directory in a virtual machine.
                    
root@bizantum:~# virt-ls -l -d debian12 /root
total 24
drwx------  3 0 0 4096 Jun 19 23:57 .
drwxr-xr-x 18 0 0 4096 Jun 19 23:52 ..
-rw-------  1 0 0   16 Jun 19 23:57 .bash_history
-rw-r--r--  1 0 0  571 Apr 10  2021 .bashrc
-rw-r--r--  1 0 0  161 Jul  9  2019 .profile
drwx------  2 0 0 4096 Jun 19 23:52 .ssh
                    
                
        Step [4] [cat] a file in a virtual machine.
                    
root@bizantum:~# virt-cat -d debian12 /etc/passwd
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
.....
.....
                    
                
        Step [5] Edit a file in a virtual machine.
                    
root@bizantum:~# virt-edit -d debian12 /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# systemd generates mount units based on this file, see systemd.mount(5).
# Please run 'systemctl daemon-reload' after making changes here.
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
/dev/mapper/debian--vg-root /               ext4    errors=remount-ro 0       1
# /boot was on /dev/vda1 during installation
UUID=8e8d5f88-16cd-473a-88de-1900abeff03c /boot ext2 defaults  0       2
/dev/mapper/debian--vg-swap_1 none  swap    sw              0       0
/dev/sr0        /media/cdrom0   udf,iso9660 user,noauto     0       0
                    
                
        Step [6] Display disk usage of a virtual machine.
                    
root@bizantum:~# virt-df -d debian12
Filesystem                           1K-blocks       Used  Available  Use%
debian12:/dev/sda1                      465124      58923     381267   13%
debian12:/dev/debian-vg/root          18982140    1010616   16981932    6%
                    
                
        Step [7] Mount the disk image of a virtual machine.
                    
root@bizantum:~# guestmount -d debian12 -i /mnt
root@bizantum:~# ll /mnt
total 73
lrwxrwxrwx  1 root root     7 Jun 19 18:51 bin -> usr/bin
drwxr-xr-x  4 root root  1024 Jun 19 18:55 boot
drwxr-xr-x  4 root root  4096 Jun 19 18:51 dev
drwxr-xr-x 67 root root  4096 Jun 19 18:56 etc
drwxr-xr-x  3 root root  4096 Jun 19 18:55 home
.....
.....
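# (addition) when finished, unmount the image with [guestunmount] before starting the VM again
root@bizantum:~# guestunmount /mnt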
                    
                
      VNC connection (Client) : Debian
Connect to a virtual machine that is running with VNC enabled.
Step [1] On a Debian client with a desktop environment, run [apt -y install virt-viewer] to install Virt Viewer, then start [Remote Viewer] as follows.
 
        Step [2] Input [vnc://(server's hostname or IP address):(port you set)] and click the [Connect] button.
 
        Step [3] Input the password you set and click the [OK] button.
 
        Step [4] After passing authentication, you are connected as follows.
 
        VNC connection (Client) : Windows
It's possible to connect to VMs over VNC from Windows clients.
Step [5] Install any VNC viewer on the Windows client. This example uses UltraVNC Viewer (http://www.uvnc.com/downloads/ultravnc.html). After installing the application, input [(server's hostname or IP address):(port you set)] as follows and connect.
 
        Step [6] Input the VNC password when prompted for authentication.
 
        Step [7] After passing authentication, you can connect to the VM over VNC as follows.
 
      Nested KVM
Configure nested KVM. This makes it possible to install the KVM hypervisor and create virtual machines inside a guest on the KVM host.
Step [1] Enable the setting for nested KVM.
                     
# show the current setting ( if the result is [Y], it's OK )
root@bizantum:~# cat /sys/module/kvm_intel/parameters/nested
Y
# if the result is [N], add the setting below and reboot the system
# specify [kvm_intel] for Intel CPU
# specify [kvm_amd] for AMD CPU
root@bizantum:~# echo 'options kvm_intel nested=1' >> /etc/modprobe.d/qemu-system-x86.conf
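# (addition) instead of rebooting, the module can usually be reloaded if no VM is running
root@bizantum:~# modprobe -r kvm_intel
root@bizantum:~# modprobe kvm_intel
root@bizantum:~# cat /sys/module/kvm_intel/parameters/nested
Y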
                    
                
        Step [2] Edit the configuration of the existing virtual machine you'd like to enable nesting on, as follows. After that, it's possible to create virtual machines inside the guest OS.
                     
# edit config of a VM [debian12]
root@bizantum:~# virsh edit debian12
# change [cpu mode] like follows
<cpu mode='host-passthrough' check='none' migratable='on'/>
                    
                
      Live Migration
This is an example of using the live migration feature for virtual machines. It requires two KVM host servers and a storage server, as shown below. First, configure DNS or hosts files so that names and IP addresses resolve correctly.
                      +----------------------+
                      |   [  NFS Server   ]  |
                      |    nfs.bizantum.lab  |
                      |                      |
                      +-----------+----------+
                                  |10.0.0.35
                                  |
+----------------------+          |          +----------------------+
|  [   KVM Host #1  ]  |10.0.0.21 | 10.0.0.22|  [  KVM Host #2   ]  |
|                      +----------+----------+                      |
|  kvm01.bizantum.lab  |                     |  kvm02.bizantum.lab  |
+----------------------+                     +----------------------+
        
        Step [1] Configure the storage server where virtual machine images are placed. Any of NFS, iSCSI, GlusterFS and so on will do; this example uses NFS.
Step [2] Configure the two KVM host servers and mount the directory provided by the storage server at the same mount point on both. This example uses [/var/kvm/images] as the mount point.
Step [3] Create and start a virtual machine on one KVM host and run live migration as follows. The KVM hosts connect to each other over SSH, so it's best to set up an SSH key-pair for the root account before running live migration.
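A minimal key-pair setup might look like the following sketch (the IP address comes from the diagram above; adjust it to your environment):

# generate a key-pair and copy the public key to the other KVM host
root@kvm01:~# ssh-keygen -q -N ""
root@kvm01:~# ssh-copy-id root@10.0.0.22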
                    
# edit the setting of a VM you'd like to run live migration
root@kvm01:~# virsh edit debian12
 <disk type='file' device='disk'>
      # add : change cache mode to [none]
      <driver name='qemu' type='qcow2' discard='unmap' cache='none'/>
      <source file='/var/kvm/images/debian12.img'/>
root@kvm01:~# virsh start debian12
root@kvm01:~# virsh list
 Id   Name       State
--------------------------
 1    debian12   running
root@kvm01:~# virsh migrate --live debian12 qemu+ssh://10.0.0.22/system
root@kvm01:~# virsh list
 Id   Name       State
--------------------------
# VM migrated
### on another KVM Host ###
root@kvm02:~# virsh list
 Id   Name       State
--------------------------
 1    debian12   running
# back to the KVM Host again like follows
root@kvm02:~# virsh migrate --live debian12 qemu+ssh://10.0.0.21/system
root@kvm02:~# virsh list
 Id   Name       State
--------------------------
                    
                
      Storage Migration
This is an example of using the storage migration feature for virtual machines. Unlike common live migration, storage migration does not need a storage server holding the virtual machine images: when storage migration is executed, the virtual machine image on one KVM host is migrated to another KVM host, as shown below.
                          Storage Migration
                        <------------------->
                        
+----------------------+                     +----------------------+
|  [   KVM Host #1  ]  |10.0.0.21   10.0.0.22|  [  KVM Host #2   ]  |
|                      +---------------------+                      |
|  kvm01.bizantum.lab  |                     |  kvm02.bizantum.lab  |
+----------------------+                     +----------------------+
        
        Step [1] Configure the two KVM host servers and create a virtual machine on one of them. First, configure DNS or hosts files so that names and IP addresses resolve correctly.
Step [2] Confirm the size of the virtual machine image on the source KVM host as follows; next, move to the other KVM host and create an empty disk image of the same size as follows.
                     
# show the size of Virtual machine
root@kvm01:~# qemu-img info /var/kvm/images/debian12.img
image: /var/kvm/images/debian12.img
file format: qcow2
virtual size: 20 GiB (21474836480 bytes)
disk size: 6.37 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: true
    refcount bits: 16
    corrupt: false
    extended l2: false
### on another KVM host ###
# create a disk which is the same size of a Virtual Machine
root@kvm02:~# qemu-img create -f qcow2 -o preallocation=falloc /var/kvm/images/debian12.img 21474836480
                    
                
        Step [3] That's all the preparation; run storage migration as follows.
                    
root@kvm01:~# virsh list
 Id   Name       State
--------------------------
 1    debian12   running
root@kvm01:~# virsh migrate --live --copy-storage-all debian12 qemu+ssh://10.0.0.22/system
root@kvm01:~# virsh list
 Id   Name       State
--------------------------
# VM migrated
### on another KVM host ###
root@kvm02:~# virsh list
 Id   Name       State
--------------------------
 1    debian12   running
                    
                
      UEFI boot for Virtual Machine
Boot Virtual Machines with UEFI (Unified Extensible Firmware Interface).
Step [1] Install the UEFI firmware for virtual machines.
                     
root@bizantum:~# apt -y install ovmf
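# (addition) the firmware images are installed under [/usr/share/OVMF]
root@bizantum:~# ls /usr/share/OVMF/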
                    
                
        Step [2] To enable UEFI, specify [--boot uefi] when creating the virtual machine.
                     
root@bizantum:~# virt-install \
--name Win2k22 \
--ram 6144 \
--disk path=/var/kvm/images/Win2k22.img,size=40 \
--vcpus 4 \
--os-variant win2k22 \
--network bridge=br0 \
--graphics vnc,listen=0.0.0.0,password=password \
--video virtio \
--cdrom /home/Win2022_EN-US_20348.169.210806-2348.fe.iso \
--boot uefi
                    
                
        Step [3] The virtual machine starts in UEFI mode.
 
        Step [4] After installation, you can see [UEFI] under [BIOS Mode].
 
      Enable TPM 2.0
Create a virtual machine with TPM 2.0 enabled. This example shows how to install Windows 11.
Step [1] Install required packages.
                     
root@bizantum:~# apt -y install ovmf swtpm swtpm-tools
                    
                
        Step [2] Create a Windows 11 virtual machine. Enable TPM 2.0 and Secure Boot, which are required to install Windows 11.
                     
root@bizantum:~# virt-install \
--name Windows_11 \
--ram 6144 \
--disk path=/var/kvm/images/Windows_11.img,size=50 \
--cpu host-passthrough \
--vcpus=4 \
--os-variant=win10 \
--network bridge=br0 \
--graphics vnc,listen=0.0.0.0,password=password \
--video virtio \
--cdrom /home/Win11_22H2_English_x64v1.iso \
--features kvm_hidden=on,smm=on \
--tpm backend.type=emulator,backend.version=2.0,model=tpm-tis \
--boot loader=/usr/share/OVMF/OVMF_CODE.secboot.fd,loader_ro=yes,loader_type=pflash,nvram_template=/usr/share/OVMF/OVMF_VARS.ms.fd 
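# (addition) libvirt typically keeps the emulated TPM state per domain under [/var/lib/libvirt/swtpm]
root@bizantum:~# ls /var/lib/libvirt/swtpm/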
                    
                
        Step [3] The Windows 11 installer starts.
 
        Step [4] Installation has finished and Windows 11 is running.
 
         
      GPU Passthrough
Configure GPU passthrough for virtual machines. With this configuration, it's possible to use a GPU inside virtual machines and run GPU computing. Before configuring, enable VT-d (Intel) or AMD IOMMU (AMD) in the BIOS settings.
Step [1] Enable IOMMU feature on KVM Host.
                     
root@bizantum:~# vi /etc/default/grub
# line 10 : add
# for AMD CPU, set [amd_iommu=on]
# for Intel CPU, set [intel_iommu=on]
GRUB_CMDLINE_LINUX="intel_iommu=on iommu=pt"
root@bizantum:~# update-grub
# show PCI identification number and [vendor-ID:device-ID] of Graphic card
# PCI number ⇒ it matches [03:00.*] below
# vendor-ID:device-ID ⇒ it matches [10de:***] below
root@bizantum:~# lspci -nn | grep -i nvidia
03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104 [GeForce GTX 770] [10de:1184] (rev a1)
03:00.1 Audio device [0403]: NVIDIA Corporation GK104 HDMI Audio Controller [10de:0e0a] (rev a1)
root@bizantum:~# vi /etc/modprobe.d/vfio.conf
# create new : for [ids=***], specify [Vendor-ID : Device-ID]
options vfio-pci ids=10de:1184,10de:0e0a
root@bizantum:~# echo 'vfio-pci' > /etc/modules-load.d/vfio-pci.conf
root@bizantum:~# reboot
# confirm IOMMU is enabled
root@bizantum:~# dmesg | grep -E "DMAR|IOMMU"
[    0.011924] ACPI: DMAR 0x000000005C6CEB70 0000D4 (v01 ALASKA A M I    00000001 INTL 20091013)
[    0.011947] ACPI: Reserving DMAR table memory at [mem 0x5c6ceb70-0x5c6cec43]
[    0.022477] DMAR: IOMMU enabled
[    0.096903] DMAR: Host address width 46
[    0.096904] DMAR: DRHD base: 0x000000fbffd000 flags: 0x0
[    0.096911] DMAR: dmar0: reg_base_addr fbffd000 ver 1:0 cap 8d2008c10ef0466 ecap f0205b
[    0.096914] DMAR: DRHD base: 0x000000fbffc000 flags: 0x1
[    0.096918] DMAR: dmar1: reg_base_addr fbffc000 ver 1:0 cap 8d2078c106f0466 ecap f020df
[    0.096920] DMAR: RMRR base: 0x0000005ce2b000 end: 0x0000005ce3afff
[    0.096921] DMAR: ATSR flags: 0x0
[    0.096923] DMAR: RHSA base: 0x000000fbffc000 proximity domain: 0x0
[    0.096925] DMAR-IR: IOAPIC id 1 under DRHD base  0xfbffc000 IOMMU 1
[    0.096927] DMAR-IR: IOAPIC id 2 under DRHD base  0xfbffc000 IOMMU 1
[    0.096928] DMAR-IR: HPET id 0 under DRHD base 0xfbffc000
[    0.096929] DMAR-IR: x2apic is disabled because BIOS sets x2apic opt out bit.
[    0.096930] DMAR-IR: Use 'intremap=no_x2apic_optout' to override the BIOS setting.
[    0.097541] DMAR-IR: Enabled IRQ remapping in xapic mode
[    0.381331] DMAR: No SATC found
[    0.381333] DMAR: IOMMU feature sc_support inconsistent
[    0.381334] DMAR: IOMMU feature dev_iotlb_support inconsistent
[    0.381336] DMAR: dmar0: Using Queued invalidation
[    0.381339] DMAR: dmar1: Using Queued invalidation
[    0.384032] DMAR: Intel(R) Virtualization Technology for Directed I/O
[    0.687176] AMD-Vi: AMD IOMMUv2 functionality not available on this system - This is not a bug.
# confirm vfio_pci is enabled
root@bizantum:~# dmesg | grep -i vfio
[    3.065228] VFIO - User Level meta-driver version: 0.3
[    3.095618] vfio-pci 0000:03:00.0: vgaarb: deactivate vga console
[    3.100120] vfio-pci 0000:03:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
[    3.100314] vfio_pci: add [10de:1184[ffffffff:ffffffff]] class 0x000000/00000000
[    3.148678] vfio_pci: add [10de:0e0a[ffffffff:ffffffff]] class 0x000000/00000000
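# (addition) optionally list IOMMU group links to confirm the GPU functions are isolated in their own group
root@bizantum:~# find /sys/kernel/iommu_groups/ -type l | grep '03:00'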
                    
                
        Step [2] That's it for host setup. For example, create a Debian 12 virtual machine with the GPU attached; specify the PCI identification number of the GPU with [--host-device].
                     
root@bizantum:~# virt-install \
--name debian12 \
--ram 8192 \
--disk path=/var/kvm/images/debian12.img,size=30 \
--vcpus 4 \
--os-variant debian11 \
--network bridge=br0 \
--graphics none \
--console pty,target_type=serial \
--location /home/debian-12.0.0-amd64-DVD-1.iso \
--extra-args 'console=ttyS0,115200n8' \
--host-device 03:00.0 \
--features kvm_hidden=on \
--machine q35 
                    
                
        Step [3] After creating the virtual machine, confirm the GPU is visible inside it as follows.
                    
root@bizantum:~# lspci | grep -i nvidia
05:00.0 VGA compatible controller: NVIDIA Corporation GK104 [GeForce GTX 770] (rev a1)
                    
                
      Use VirtualBMC
Install VirtualBMC to enable IPMI commands against virtual machines. VirtualBMC supports only a small subset of IPMI commands, such as power on/off operations, but it is sometimes useful.
Step [1] Install VirtualBMC on the KVM host.
                     
root@bizantum:~# apt -y install python3-pip python3-venv ipmitool
# create Python virtual environment under the [/opt/virtualbmc]
root@bizantum:~# python3 -m venv --system-site-packages /opt/virtualbmc
# install VirtualBMC
root@bizantum:~# /opt/virtualbmc/bin/pip3 install virtualbmc
# create a systemd setting file
root@bizantum:~# vi /usr/lib/systemd/system/virtualbmc.service
# create new
[Unit]
Description=Virtual BMC Service
After=network.target libvirtd.service
[Service]
Type=simple
ExecStart=/opt/virtualbmc/bin/vbmcd --foreground
ExecStop=/bin/kill -HUP $MAINPID
User=root
Group=root
[Install]
WantedBy=multi-user.target
root@bizantum:~# systemctl daemon-reload
root@bizantum:~# systemctl enable --now virtualbmc.service
# show status (OK if no error is shown)
root@bizantum:~# /opt/virtualbmc/bin/vbmc list
                    
                
        Step [2] Set up VirtualBMC for virtual machines.
                     
root@bizantum:~# virsh list --all
 Id   Name   State
-----------------------
 -    rx-7   shut off
 -    rx-8   shut off
# set VirtualBMC to a VM [rx-7]
# for [port], [username], [password], it's OK to set any values you like
root@bizantum:~# /opt/virtualbmc/bin/vbmc add rx-7 --port 6230 --username vbmcadmin --password adminpassword
root@bizantum:~# /opt/virtualbmc/bin/vbmc list
+-------------+--------+---------+------+
| Domain name | Status | Address | Port |
+-------------+--------+---------+------+
| rx-7        | down   | ::      | 6230 |
+-------------+--------+---------+------+
# start VirtualBMC
root@bizantum:~# /opt/virtualbmc/bin/vbmc start rx-7
root@bizantum:~# /opt/virtualbmc/bin/vbmc list
+-------------+---------+---------+------+
| Domain name | Status  | Address | Port |
+-------------+---------+---------+------+
| rx-7        | running | ::      | 6230 |
+-------------+---------+---------+------+
root@bizantum:~# /opt/virtualbmc/bin/vbmc show rx-7
+-----------------------+----------------+
| Property              | Value          |
+-----------------------+----------------+
| active                | True           |
| address               | ::             |
| domain_name           | rx-7           |
| libvirt_sasl_password | ***            |
| libvirt_sasl_username | None           |
| libvirt_uri           | qemu:///system |
| password              | ***            |
| port                  | 6230           |
| status                | running        |
| username              | vbmcadmin      |
+-----------------------+----------------+
# show status of power on [rx-7] via VirtualBMC
root@bizantum:~# ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U vbmcadmin -P adminpassword power status
Chassis Power is off
# power on via VirtualBMC
root@bizantum:~# ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U vbmcadmin -P adminpassword power on
Chassis Power Control: Up/On
root@bizantum:~# virsh list --all
 Id   Name   State
-----------------------
 3    rx-7   running
 -    rx-8   shut off
# power off via VirtualBMC
root@bizantum:~# ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U vbmcadmin -P adminpassword power off
Chassis Power Control: Down/Off
root@bizantum:~# virsh list --all
 Id   Name   State
-----------------------
 -    rx-7   shut off
 -    rx-8   shut off
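# (addition) other ipmitool operations, such as reset and boot-device selection, may also be supported
root@bizantum:~# ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U vbmcadmin -P adminpassword power reset
root@bizantum:~# ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U vbmcadmin -P adminpassword chassis bootdev pxe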
                    
                
        Step [3] If you'd like to use VirtualBMC not on the KVM host but on other hosts, configure as follows. For the SSH key-pair settings, it's best to change the sshd setting to [PermitRootLogin prohibit-password] after setting up the key-pair.
                    
# generate SSH key-pair and set to own host
root@bizantum:~# ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa):
root@bizantum:~# mv ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
root@bizantum:~# ssh 10.0.0.30 hostname
bizantum.bizantum.lab
root@bizantum:~# virsh list --all
 Id   Name   State
-----------------------
 -    rx-7   shut off
 -    rx-8   shut off
# set VirtualBMC to [rx-8]
# for [--libvirt-uri], specify KVM Host
root@bizantum:~# /opt/virtualbmc/bin/vbmc add rx-8 --port 6231 --username vbmcadmin --password adminpassword --libvirt-uri qemu+ssh://root@10.0.0.30/system
root@bizantum:~# /opt/virtualbmc/bin/vbmc start rx-8
root@bizantum:~# /opt/virtualbmc/bin/vbmc list
+-------------+---------+---------+------+
| Domain name | Status  | Address | Port |
+-------------+---------+---------+------+
| rx-7        | running | ::      | 6230 |
| rx-8        | running | ::      | 6231 |
+-------------+---------+---------+------+
root@bizantum:~# /opt/virtualbmc/bin/vbmc show rx-8
+-----------------------+----------------------------------+
| Property              | Value                            |
+-----------------------+----------------------------------+
| active                | True                             |
| address               | ::                               |
| domain_name           | rx-8                             |
| libvirt_sasl_password | ***                              |
| libvirt_sasl_username | None                             |
| libvirt_uri           | qemu+ssh://root@10.0.0.30/system |
| password              | ***                              |
| port                  | 6231                             |
| status                | running                          |
| username              | vbmcadmin                        |
+-----------------------+----------------------------------+
# that's OK
# for SSH key-pair generated on KVM Host,
# it needs to transfer private-key [id_rsa] to the Hosts you'd like to use VirtualBMC
# for example, execute ipmitool on [rx-7] host to [rx-8] host
root@rx-7:~# ll .ssh
total 20
drwx------ 2 root root 4096 Jun 20 07:16 ./
drwx------ 5 root root 4096 Jun 20 07:13 ../
-rw------- 1 root root    0 Jun 20 07:06 authorized_keys
-rw------- 1 root root 2602 Jun 20 07:15 id_rsa
-rw------- 1 root root  978 Jun 20 07:14 known_hosts
root@rx-7:~# ssh 10.0.0.30 hostname
bizantum.bizantum.lab
root@rx-7:~# ipmitool -I lanplus -H 10.0.0.30 -p 6231 -U vbmcadmin -P adminpassword power status
Chassis Power is off
root@rx-7:~# ipmitool -I lanplus -H 10.0.0.30 -p 6231 -U vbmcadmin -P adminpassword power on
Chassis Power Control: Up/On
root@rx-7:~# ssh 10.0.0.30 "virsh list"
 Id   Name   State
----------------------
 4    rx-7   running
 5    rx-8   running
                    
                