
Saturday, January 17, 2009

Running Windows on Sun xVM with Solaris Express Community Edition

Introduction

I'm waiting to see Sun xVM Server. It's been a long wait, and I hope to see it soon. I've been using Sun xVM VirtualBox for quite a while and I'm really happy with it. The underlying technology is completely different, but it makes me hope that Sun xVM Server will be of at least the same quality.

Recently, I needed to set up an Active Directory for development purposes. While part of the team is happily running its .NET development stack on top of a Sun xVM VirtualBox Windows guest, a quick test drive showed us that running a server on top of Sun xVM VirtualBox wasn't practical at all. That's why I set up a Solaris Express Community Edition machine: not only did I want to test Sun xVM, I actually needed it.

All the commands shown in this post were executed on Solaris Express Community Edition build 103. Be aware that the output and the commands themselves are not stable yet.

The feeling of running unsupported software

It's unpleasant. Solaris Express Community Edition is rock solid: I'm using it on many machines and it has never let me down. But running a critical component on such experimental technology was something I wanted to avoid. That's the rationale behind using Solaris 10 even on our development machines, where I'm sure Solaris Express Community Edition (or even OpenSolaris now) would do the job just fine, and the developers would probably be happier too.

I waited months hoping that Sun would release Sun xVM Server just in time for us to stay on schedule, but project deadlines pushed me to deploy Solaris Express Community Edition instead. Documentation is not as up-to-date or as easy to find as it is for Solaris 10, but with a little help from Google and especially from a Sun white paper, Install Sun xVM Hypervisor and Use It to Configure Domains, setting up Windows 2003 Server guests was not that hard.

Setting up Windows

Setting up Windows 2003 Server was not as straightforward as I thought. The first few times I tried it, I got stuck on a CD-related problem, and that's where the Sun white paper cited above really helped me a lot.

Checking up the system

The first thing to check is whether xVM is installed. The following command should produce this output:

# /usr/bin/pkginfo | grep SUNWxvm
system SUNWxvmdomr Hypervisor Domain Tools (Root)
system SUNWxvmdomu Hypervisor Domain Tools (Usr)
system SUNWxvmh Hypervisor Header Files
system SUNWxvmhvm Hypervisor HVM
system SUNWxvmipar xVM PV IP address agent (Root)
system SUNWxvmipau xVM PV IP address agent (Usr)
system SUNWxvmpv xVM Paravirtualized Drivers
system SUNWxvmr Hypervisor (Root)
system SUNWxvmu Hypervisor (Usr)

Once logged in to the system running the hypervisor, make sure the relevant services are running:

# /usr/bin/svcs | grep xvm
[...]
online 0:15:07 svc:/system/xvm/console:default
online 0:15:08 svc:/system/xvm/xend:default
online 0:15:08 svc:/system/xvm/domains:default
[...]
online 0:15:09 svc:/system/xvm/store:default

The default network interface

The hypervisor, unless instructed otherwise, will use the first available NIC when setting up networking for its guests:

# dladm show-link
LINK CLASS MTU STATE OVER
e1000g0 phys 1500 up --

To specify the desired NIC for the guests, you can set the xend service's config/default-nic property:

# /usr/sbin/svccfg -s xend 'setprop config/default-nic = astring: "yourNIC"'

and then restart the service with svcadm.
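Something along these lines should do; here the short name xend resolves to the svc:/system/xvm/xend service listed above:

# /usr/sbin/svcadm refresh xend
# /usr/sbin/svcadm restart xend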

Assigning space on the disk

You can give your guest space either from a dedicated ZFS pool or from a regular file. Using ZFS is undoubtedly easier, but I had no pool available on that machine so I had to set up a regular file on UFS:

# mkfile 20g file-path

I created two 20 GB files which I used as disks for the new guests.
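For reference, with a ZFS pool at hand a ZFS volume would probably have been the simpler route; something like this should create an equivalent 20 GB backing device (the pool and volume names are just placeholders):

# zfs create -V 20g tank/winsrv2003-disk    # "tank" is a placeholder pool name

The resulting volume would then be available under /dev/zvol/dsk/tank/winsrv2003-disk.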

Installing the guest OS

From here on, I just suggest you follow the instructions in the white paper linked above. Everything went as planned, including the glitches during the Windows 2003 Server installation described in that document.

Running the guest

Basic commands

In this xVM version, you still have to use both virsh and xm to get a complete set of administrative commands when managing your guests. In the future virsh should completely replace xm, but that's not the case yet.

Booting and shutting down a domain

To boot and to shut down a domain you can use the following commands:

# virsh start [domain-name]
# virsh shutdown [domain-name]

In the case of Windows, I still prefer to connect to it and shut it down from its GUI.

Rebooting a guest

A guest may also be rebooted directly with the following command:

# virsh reboot [domain-name]

Suspending and resuming a guest

To suspend and subsequently resume a guest you can use the following commands:

# virsh suspend [domain-name]
# [...]
# virsh resume [domain-name]

Dumping a domain configuration

To dump a domain's configuration, in case you need to examine or modify it, you can use the following command (virsh writes it to standard output):

# virsh dumpxml [domain-name]

Loading a domain configuration

If you previously dumped and modified a domain configuration, you can redefine the domain using this command:

# virsh define [domain-configuration]
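Putting the two commands together, a typical edit cycle could look like this, using the winsrv2003 domain that appears later in this post (the file name is arbitrary):

# virsh dumpxml winsrv2003 > /tmp/winsrv2003.xml    # file name is arbitrary
# vi /tmp/winsrv2003.xml
# virsh define /tmp/winsrv2003.xml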

Determining the VNC display

To determine the display that VNC is using for a particular domain, you can use the following command:

# virsh vncdisplay [domain-name]
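If, say, the command printed :1, you could then reach the guest's console with a VNC viewer pointed at display 1 of the dom0 (that is, TCP port 5901), for example:

# vncviewer dom0-hostname:1    # dom0-hostname is a placeholder for your dom0's address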

Examining existing domains

To determine the status of every existing domain, you can use the following command:

# virsh list --all
Id Name State
---------------------------------
0 Domain-0 running
2 winsrv2003 blocked

Domain winsrv2003 is listed in the blocked state. This usually means that the domain is not currently running on a CPU because it is idle or waiting for I/O.

Block device related commands

The following commands are used to manage guests' block devices.

Mounting a CDROM

To mount a CD-ROM on a guest you can either attach the physical device directly or attach an ISO image of the medium. The general form of the command is:

# xm block-attach [domain-name] [backend-device] [frontend-device] [mode]

For example:

# xm block-attach [domain-name] phy:[path-to-device] hdb:cdrom r
# xm block-attach [domain-name] file:[path-to-device] hdb:cdrom r
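As a concrete illustration, attaching an ISO image read-only to the winsrv2003 domain as its secondary IDE CD-ROM would look like this (the ISO path is hypothetical):

# xm block-attach winsrv2003 file:/export/isos/win2003.iso hdb:cdrom r    # hypothetical ISO path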

Checking for device status

To check the block device status you can use the following command:

# xm block-list [domain-name] --long
(768
    ((backend-id 0)
     (virtual-device 768)
     (device-type disk)
     (state 1)
     (backend /local/domain/0/backend/vbd/11/768)
    )
)
(5632
    ((backend-id 0)
     (virtual-device 5632)
     (device-type cdrom)
     (state 1)
     (backend /local/domain/0/backend/vbd/11/5632)
    )
)

Unmounting a device

After detecting the device ID using the block-list command described in the previous section, you can use the following command to unmount a device:

# xm block-detach [domain-name] [device-id] -f

Before detaching a block device, the device should be unmounted and ejected in the guest OS first.
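For instance, given the block-list output above, detaching the CD-ROM (virtual device 5632) from winsrv2003 would be:

# xm block-detach winsrv2003 5632 -f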

Impressions

The Windows guest runs pretty well, even if the machine seems slower when running the xVM kernel. I also suspect some sort of memory leak: as time passes, vmstat shows that free memory and free swap approach zero and the machine starts swapping to disk.

For example, I've been running two Windows 2003 Server guests for a couple of SXCE builds now, and on build 103 I still see the same problem. The machine is a Sun Ultra 20 M2 with 8 GB of RAM. Both domUs were given 1 GB by setting both mem-set and mem-max, and the problem shows up in exactly the same way even if I boot only one domU. When I boot the Windows domU everything is fine and memory usage seems reasonable: dom0 has roughly 6 GB of dedicated memory and an unlimited mem-max. As time goes by, free memory and free swap shrink, the machine begins swapping to disk, and eventually I have to reboot. The effect is pretty clear with vmstat: free memory drops to roughly 100 MB, free swap drops too, and the machine slows down.
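To watch the phenomenon, it's enough to keep an eye on the swap and free columns of vmstat's memory section, for instance sampling every five seconds:

# vmstat 5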

I found the description of this bug on Solaris Express release notes:

xVM Hypervisor Running Out of Memory

When running some non-Solaris domUs, you could encounter an issue where xVM hypervisor runs out of memory. This will generally be reflected by error messages generated to the dom0 console, in some cases in such high quantities that a reboot of the dom0 might be required to recover.

To avoid this, it is suggested that when running a non-Solaris domU, you manually balloon the amount of memory used by dom0 down to a smaller amount before booting the domU.

For example, if the dom0 is using 3500Mb, which can be determined via the xm list command, you would issue the following command to reduce its memory usage to 3000Mb:

xm mem-set Domain-0 3000

This should not be necessary when using a build-81 based dom0, or later.

This bug seems to explain the behavior I'm experiencing, but it shouldn't apply: I'm running build 103, while the bug relates to builds earlier than 81.

Other glitches

I experienced problems mounting and ejecting ISO images on the Windows 2003 Server CD-ROM. Up to Solaris Express Community Edition build 103, I was hitting bug 6749195: an empty CD-ROM disappears from HVM domains. And when it disappeared, you had to reboot the domain. This made xVM unusable in production, at least if you needed the CD-ROM even only from time to time.

On builds 103 and 104 I'm still noticing instabilities in xm block-attach behavior, and I prefer using xm block-configure even when mounting an ISO image on an empty CD-ROM drive.
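A sketch of what I do instead, reusing the hypothetical ISO path from the earlier example, is:

# xm block-configure winsrv2003 file:/export/isos/win2003.iso hdb:cdrom r    # hypothetical ISO path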

Conclusions

Overall, I must say I'm pretty happy with xVM as I've used it on Solaris Express Community Edition since build 103. Nevertheless, as I said in the opening, the feeling of running unsupported software isn't great when part of your business relies on it. I still miss the performance, stability and ease of use of Sun xVM VirtualBox, but I hope they'll find their way into Sun xVM Server.
