=Basic libvirt usage=
'libvirt' provides a common, consistent interface to various back-end virtualization platforms. If we use libvirt, the same commands/functions that work for Xen on CentOS 5.5 also work for KVM on Fedora 12. Hence one should preferably use 'virsh' commands to operate on virtual machines when using Linux. We can use the command
virsh help
to see detailed help on how to use virsh.
The lab on Xen networking hosted at http://www.sbarjatiya.com/website/courses/2010/monsoon/system_and_resource_virtualization/labs/06-networking_of_xen_VMs.pdf uses only virsh commands and can be used to learn about Xen networking, libvirt and 'virsh' commands.
==Cloning a VM using virt-clone==
To clone a VM using virt-clone:
* If the VM has many snapshots, first revert to the desired snapshot using the 'virsh snapshot-revert <domain> <snapshot-name>' command.
* Then clone using the syntax:
virt-clone --original <source-domain> --name <destination-domain> --file <destination-vm-disk-path> --prompt
* For example:
virt-clone --original ubuntu-12.04 --name deploy-2013-11-10 --file /mnt/data1/vms/deploy-2013-11-10/deploy-2013-11-10.qcow2 --prompt
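After cloning, the new domain should appear in the domain list in the shut-off state. A quick check, using the example domain name from above:
virsh list --all
virsh dominfo deploy-2013-11-10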
==Cloning a VM with snapshots==
Note that for VMs cloned in this manner, the snapshots of the source domain are not available in the destination domain, even though the destination uses the same amount of disk space as the source. So the space used for storing those snapshots is wasted in the destination. To clone a VM along with its snapshots, the following process can be used:
* Clone the VM using virt-clone as described above.
* Use the 'qemu-img snapshot -l <qcow2-image>' command to list the snapshots stored in the destination VM file.
* Use the 'qemu-img snapshot -a <snapshot-name> <qcow2-image>' command to apply a snapshot to the qcow2 file. Start with the first snapshot, ID 1 (either the ID or the name can be used).
* Now create a snapshot of the VM using 'virsh snapshot-create-as <domain-name> <snapshot-name> <description>'.
* Apply the next snapshot using 'qemu-img snapshot -a' and create a snapshot again using 'virsh snapshot-create-as'.
* Once this is done for all snapshots, use 'qemu-img snapshot -l <qcow2-image>' to see the available snapshots. Note that each snapshot handled in the above manner will appear twice, with different IDs and timestamps.
* Delete all the older snapshots that have already been applied and re-created under new names using 'qemu-img snapshot -d <snapshot-ID> <qcow2-image>'.
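The apply-and-recreate loop above can be scripted roughly as follows. This is only a sketch: the domain name and image path are the example values from the previous section, the domain is assumed to be shut off, and the snapshot names are assumed to contain no spaces.
#!/bin/bash
# Re-create the source domain's internal snapshots as libvirt snapshots in the clone.
DOMAIN=deploy-2013-11-10
IMAGE=/mnt/data1/vms/deploy-2013-11-10/deploy-2013-11-10.qcow2
# In 'qemu-img snapshot -l' output the snapshot name is the second column,
# after the two header lines.
for SNAP in $(qemu-img snapshot -l "$IMAGE" | awk 'NR>2 {print $2}'); do
    qemu-img snapshot -a "$SNAP" "$IMAGE"                      # roll the disk back to the old snapshot
    virsh snapshot-create-as "$DOMAIN" "$SNAP" "re-created after cloning"
done
# The old entries can then be removed by ID with 'qemu-img snapshot -d <ID> <image>'.
qemu-img snapshot -l "$IMAGE"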
==DHCP and DNS masquerading for VMs==
Even when we create a 'host-only' network using libvirt, the VM is still able to resolve DNS queries, even though ideally that would require contacting DNS servers outside the base machine. libvirt runs the 'dnsmasq' service, which listens on port 53 of the virtual bridge interfaces and on port 67 of all interfaces of the base machine, to answer DNS and DHCP requests from guests. This service forwards DNS requests to the actual DNS servers configured on the base machine and assigns DHCP IP addresses to clients. Hence even on host-only networks VMs are able to resolve domain names.
Note that we should not stop the 'dnsmasq' service on the base machine; even restarting it may cause problems for the virtual networks. If we have stopped the 'dnsmasq' service by mistake, then we should use
virsh net-destroy <network_name>
cd /etc/libvirt/qemu/networks
virsh net-create <network_name>.xml
to recreate the network. After this, if some VMs are already running, then we have to manually add their virtual interfaces to the bridges using something like
brctl addif <bridge_name> <virtual_interface_name>
To find which virtual interfaces should be added to which bridges, use:
virsh dumpxml <vm_name>
to see the list of virtual interfaces for the VM and the virtual bridge each one should ideally be connected to.
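In the dumped XML of a running guest, each NIC appears as an <interface> element; the bridge named in its <source> element and the host-side device in <target dev='...'/> show which interface belongs on which bridge. A rough example (all values here are illustrative, not from any particular guest):
<interface type='network'>
  <mac address='52:54:00:12:34:56'/>
  <source network='default' bridge='virbr0'/>
  <target dev='vnet0'/>
</interface>
For this example the fix would be 'brctl addif virbr0 vnet0'.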
We can verify that the 'dnsmasq' service is running and listening on the proper ports using:
netstat -alnp | grep dnsmasq
service dnsmasq status
==Communication among multiple VM networks==
The default configuration of NATted networks may cause packets going to other virtual networks to also be source-NATted (MASQUERADEd). Hence if we are using multiple virtual networks then we should preferably change
-A POSTROUTING -s <range> ! -d <range> -j MASQUERADE
to
-A POSTROUTING -s <range> -o <outgoing_bridge_name> -j MASQUERADE
Here 'outgoing_bridge_name' refers to the bridge name for the physical interface of the base machine, such as 'br0' or 'xenbr0'. If guests are configured to use this bridge then they can also get a LAN IP, and outside machines can initiate direct connections to the guests.
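As a concrete sketch, assuming the default libvirt network 192.168.122.0/24 and a host bridge named 'br0' (both values are only examples), the change amounts to:
iptables -t nat -D POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE
iptables -t nat -A POSTROUTING -s 192.168.122.0/24 -o br0 -j MASQUERADE
With the replacement rule, traffic from one virtual network to another is no longer masqueraded; only traffic leaving through 'br0' is.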
==Checksum offloading==
A modern OS may offload checksum calculation to the NIC. In that case, if we capture packets as they are being sent, they may be captured with wrong checksums. Note that the OS won't even bother to initialize the checksum field to all zeros, so some garbage, incorrect checksum will always be present in the capture.
The virtual network driver provided by the KVM/libvirtd combination does not calculate checksums on outgoing packets. The OS assumes the NIC will fill in the proper checksum, but that does not happen, and the receiving machine discards the packets because the checksum is incorrect.
This problem is very hard to diagnose: the firewall rules allow the packets/requests to be delivered to the application listening on the specific port, but the application does not show the requests/queries in its query logs, as if it had never received them. 'tcpdump/wireshark' would show that the request packets have reached the machine. Both 'tcpdump' and 'wireshark' output would indicate that the checksum is wrong and that the OS may have offloaded checksum calculation to the NIC. To really verify whether the virtual NIC is putting a proper checksum on packets, we can capture them on the destination machine, where they should arrive with a proper checksum.
To check if checksum offloading is being used, we can use:
ethtool --show-offload eth<n>
To disable offloading we can use:
ethtool --offload eth<n> tx off
ethtool --offload eth<n> rx off
To ensure that offloading is disabled by default when the guest boots, we need to make the change permanent by adding the above checksum-disabling commands to the '/etc/rc.d/rc.local' file.
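For example, something along these lines could be appended (a sketch; 'eth0' is only an example interface name inside the guest, and the rc.local path differs between distributions):
cat >> /etc/rc.d/rc.local <<'EOF'
# Disable checksum offloading so outgoing packets carry valid checksums
ethtool --offload eth0 tx off
ethtool --offload eth0 rx off
EOF
chmod +x /etc/rc.d/rc.local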
==Disabling cache for guest disks==
Disabling the host cache for guest disks can improve performance significantly, as per http://www.mail-archive.com/libvir-list@redhat.com/msg11017.html and a number of other sites. We can disable host caching of a guest disk by setting cache="none" on the disk's driver element in the domain XML:
<disk type='file'>
  ...
  <driver ... cache="none"/>
  ...
</disk>
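The change can be made with 'virsh edit', which opens the persistent domain XML in an editor; the guest then has to be shut down and started again for the new cache mode to take effect (the domain name below is just the earlier example):
virsh edit deploy-2013-11-10
virsh shutdown deploy-2013-11-10
virsh start deploy-2013-11-10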
* The libvirt domain file format is documented at http://libvirt.org/formatdomain.html