Thursday, June 17, 2010

Autoreset Differencing Disks Bug in VirtualBox 3.2 Up to 3.2.4

Since in my latest posts I've been suggesting you give VirtualBox a try (including as a small server virtualization platform), I think this is a bug worth knowing about: VirtualBox 3.2 up to (at least) 3.2.4 ignores the autoreset property of differencing disks. You can easily verify that the

$ VBoxManage modifyhd name|uuid --autoreset on|off

command has no effect. What's worse, the affected versions of VirtualBox seem to be stuck with an autoreset value of off.

This basically means that, after stopping and restarting a virtual machine, the differencing disks won't be wiped automatically, so changes made inside your guest instance will persist across reboots. This defeats one of the common use cases where VirtualBox immutable images and differencing disks are most useful.

If you don't know what immutable images and differencing disks are, stay tuned: a future blog post about these features has already been scheduled.

Update: I verified that VirtualBox 3.2.8 is no longer affected by this bug.

Friday, June 11, 2010

SSH Hangs On Exit When Using nohup

I recently discovered that ssh sometimes hangs on exit when I've launched a process with nohup. The first suspect was standard output, and it cost me some time to realize that the problem was related to the input stream of the process I had launched with nohup.

Indeed, I solved the problem (for scripts) by launching commands this way:

$ nohup my-command </dev/null &

After finding a workaround I discovered that this is a well-known "problem", but it's not a bug. As far as I could check, ssh is respecting the POSIX standard by not closing the session while a process is still attached to the tty. Other programs that don't show this behavior, such as some telnet implementations, are the ones behaving in a non-compliant way.

Anyway, the previous workaround is fine for me.
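The workaround is easy to reproduce with any long-running command. In this sketch, sleep stands in for the real process, and the log path is an arbitrary choice of mine:

```shell
# Redirecting stdin from /dev/null ensures nothing keeps the session's
# tty open on exit; 'sleep 30' and the log path are just placeholders
# for your real long-running command.
nohup sleep 30 </dev/null >/tmp/nohup-demo.log 2>&1 &
pid=$!
echo "detached pid: $pid"
kill "$pid"   # stop the demo job; a real daemon would keep running
```

With stdin, stdout and stderr all detached from the terminal, the ssh session has nothing left to wait for when you exit.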

A small tip: should your (Open)SSH session "hang" in such a situation, you can use the ~. escape sequence to disconnect it. Don't worry, your nohup-ed process will keep running anyway.

VirtualBox as an Enterprise Server Virtualization Solution

Introduction

A few posts ago I briefly argued that, in some scenarios, VirtualBox might be used as a server virtualization platform instead of relying on more complex enterprise solutions such as Oracle VM or VMware. VirtualBox is a great Type 2 hypervisor that has been growing rapidly in the past few years and now supports a wide range of both host and guest operating systems. Although VirtualBox is the heart of Sun/Oracle's desktop virtualization offering, and although Solaris comes with Xen as a Type 1 hypervisor, I argue that VirtualBox is a solution to seriously take into consideration, especially when using Solaris as the host operating system, since VirtualBox can leverage Solaris features such as:
  • ZFS.
  • Crossbow (network virtualization and resource control).
  • RBAC, Projects and Resource control.

Solaris comes with other virtualization technologies such as Zones and Containers. If you need a Solaris instance, the quickest way to virtualize one is to create a zone. If you're using Solaris, then, you might want to consider Zones instead of a Type 1 hypervisor. That said, VirtualBox can help when you're running Zones alongside other guests: instead of dedicating some physical machines to zones and others to a Type 1 hypervisor such as Oracle VM or VMware, you might want to consider OpenSolaris' Xen or VirtualBox.

OpenSolaris' Xen is a Type 1 hypervisor built on the Solaris kernel: as such, it virtualizes guest OSs alongside Solaris Zones on the same physical machine. VirtualBox, being a Type 2 hypervisor, can be executed on a Solaris host alongside Zones as well.

In this post we'll take a quick walkthrough of how VirtualBox can be used in a Solaris environment as a server virtualization platform.

Installing VirtualBox

Installing VirtualBox on the Solaris Operating System is very easy. Download VirtualBox, gunzip and untar the distribution (please substitute [virtualbox] with the actual file name of the software bundle you downloaded):

$ gunzip [virtualbox].tar.gz
$ tar xf [virtualbox].tar

If you're upgrading VirtualBox, you have to remove the previous version before installing the new one:

# pkgrm SUNWvbox

Install the VirtualBox native packages:

# pkgadd -d ./[virtualbox].pkg

Cloning a Virtual Machine with Solaris ZFS

After installing an OS instance, ZFS can help you save time and space with snapshots and clones. ZFS allows you to instantly snapshot a file system and, optionally, clone the snapshot and promote it to a full-fledged ZFS file system. This way, for example, you could:
  • Install a guest instance (such as a Debian Linux.)
  • Take a snapshot of the virtual machine.
  • Clone it as many times as you need it.

Not only will you save precious storage space: you'll also be spinning up a set of identical virtual machines in practically no time. If you need to upgrade your guest OS, you upgrade the initial image and then snapshot and clone it again. If you carefully plan and analyze your requirements in advance, ZFS snapshots and clones can be a real asset for your virtual machine deployments.
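The install-snapshot-clone cycle described above can be sketched with a handful of ZFS commands. The block below is a dry run (the commands are echoed into a plan file rather than executed, so it can be read or run without a ZFS pool); the pool and dataset names are hypothetical:

```shell
# Dry-run sketch of the snapshot/clone cycle; 'tank/vm/debian' is a
# hypothetical dataset holding the guest's disk images.
: >/tmp/zfs-plan.txt
run() { printf '+ %s\n' "$*" >>/tmp/zfs-plan.txt; }   # echo instead of executing
run zfs snapshot tank/vm/debian@golden                # freeze the installed image
run zfs clone tank/vm/debian@golden tank/vm/debian-clone1
run zfs clone tank/vm/debian@golden tank/vm/debian-clone2
cat /tmp/zfs-plan.txt
```

On a real Solaris host you would drop the run wrapper and execute the zfs commands directly (as root or with the appropriate ZFS delegated permissions).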

In an older post I published a quick walkthrough of ZFS snapshots and clones.

Solaris Network Virtualization

One of the showstoppers that, years ago, prevented me from using VirtualBox in a server environment was the lack of a network virtualization layer. Basically, you were left with choices unsuitable for configuring your guests' networks in a server environment:
  • NAT: NAT was neither flexible nor easy to administer. Since you were NAT-ting many guests on the same physical cards, you would quickly find yourself in a "port hell."
  • Dedicated adapter: this is the most flexible option, obviously, but it has a major problem: the number of physical network adapters is finite. You would run into the same problem when configuring Solaris Zones.

The solution to all of these problems is called "Crossbow." You can read a previous blog post to discover Solaris Network Virtualization and get started with it.

VirtualBox introduced a feature, called Bridged Networking, that lets guests use NICs (both physical and virtual) through a "net filter" driver. When using VirtualBox Bridged Networking with Crossbow, take the following into account:
  • A Crossbow VNIC cannot be shared between guest instances.
  • A Crossbow VNIC and the guest network interface must have the same MAC address.

Since Crossbow lets you easily create as many virtual NICs as you need, the previous points aren't a real issue anyway.

After creating a VNIC for the exclusive use of a VirtualBox guest, you won't even need to plumb it and bring it up: VirtualBox will do that for you.

Configuring Bridged Networking

To configure bridged networking over a VNIC for a VirtualBox guest, you can use the VirtualBox UI or the command line utilities, such as VBoxManage:

$ VBoxManage modifyvm <uid|name>
  --nic<1-N> bridged
  --bridgeadapter<1-N> devicename 
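As a concrete example, wiring a hypothetical guest to a dedicated VNIC could look like the following. This is a dry run (the commands are only echoed, so it can be run anywhere); "debian-guest", "e1000g0" and "vnic1" are assumed names:

```shell
# Dry run: echo the commands into a plan file instead of executing them.
: >/tmp/bridge-plan.txt
run() { printf '+ %s\n' "$*" >>/tmp/bridge-plan.txt; }
run dladm create-vnic -l e1000g0 vnic1       # a VNIC dedicated to this guest
run VBoxManage modifyvm debian-guest --nic1 bridged --bridgeadapter1 vnic1
cat /tmp/bridge-plan.txt
```

On a real host, drop the run wrapper: create the VNIC as root, then point the guest's first NIC at it with VBoxManage.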

Configuring SMP

VirtualBox introduced SMP support a few versions ago. That was a huge step forward: guests can now be assigned a number of CPUs to execute on. As usual, you can use either the VirtualBox UI or the CLIs to configure your guests. The VBoxManage options related to CPU management are summarized below:

$ VBoxManage modifyvm <uid|name>
  --cpus <number>
  --cpuidset <leaf> <eax> <ebx> <ecx> <edx>
  --cpuidremove <leaf>
  --cpuidremoveall
  --cpuhotplugging <on|off>
  --plugcpu <id>
  --unplugcpu <id>
  
The option names are self-explanatory. Nevertheless, if you need further information, please check the official VirtualBox documentation.
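For example, giving a guest two CPUs and enabling CPU hot-plugging could look like this. The block is a dry run (commands are echoed, not executed) and the guest name is an assumption:

```shell
# Dry run: echo the commands into a plan file instead of executing them.
: >/tmp/smp-plan.txt
run() { printf '+ %s\n' "$*" >>/tmp/smp-plan.txt; }
run VBoxManage modifyvm debian-guest --cpus 2 --cpuhotplugging on
run VBoxManage modifyvm debian-guest --plugcpu 1   # hot-add the second CPU later
cat /tmp/smp-plan.txt
```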

Controlling VirtualBox Guest Resources with Solaris Resource Control

An interesting feature of Solaris is its Resource Control facility. You can, in fact, execute VirtualBox guests in a Solaris Project and apply fine-grained resource control policies to each of your running guests. That means, for example, that a VirtualBox guest with two CPUs can be executed in the context of a Solaris Project whose resource control policy limits its CPU usage (project.cpu-cap) to 150%. Although the guest may use two CPUs concurrently, the total CPU it may consume is capped at 150%.

To apply resource control policies to your guest one strategy could be the following:
  • Create a user for every set of guests that will be subject to a resource control policy.
  • Create a default project for each of these users and define the resource control policies that you need to apply.
  • Execute VirtualBox guests with the defined users.

This way, Solaris will automatically apply the default resource control policies to every process owned by those users, including the VirtualBox guest instances themselves.
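The strategy above could be sketched as follows. This is a dry run (the commands are echoed, not executed), and the user, project and guest names are hypothetical:

```shell
# Dry run: echo the commands into a plan file instead of executing them.
: >/tmp/rctl-plan.txt
run() { printf '+ %s\n' "$*" >>/tmp/rctl-plan.txt; }
run useradd -m vboxops                                   # owner of a set of guests
# 'user.vboxops' becomes the default project for user vboxops.
run projadd -U vboxops -K 'project.cpu-cap=(privileged,150,deny)' user.vboxops
run su - vboxops -c 'VBoxHeadless --startvm debian-guest'
cat /tmp/rctl-plan.txt
```

Any guest started by vboxops now runs inside the capped project without further configuration.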

For a walkthrough to get started with Solaris Projects and Resource Controls, you can read a previous blog post.

Controlling a VirtualBox Guest Remotely

To control a VirtualBox guest remotely you can use VirtualBox command line interfaces, such as VBoxManage. With VBoxManage, for example, you will be able to:
  • Create guest instances.
  • Modify guest instances.
  • Start, pause and shutdown guest instances.
  • Control the status of your instances.
  • Teleport instances on demand to another machine.

Starting a VirtualBox Guest

To start a VirtualBox guest remotely you can use VBoxManage or the specific backend command. VBoxManage will start an instance with the following syntax:

$ VBoxManage startvm <uid|name>
  [--type gui|sdl|vrdp|headless]

VBoxManage startvm has been deprecated in favor of the specific backend commands. Since in a server environment you will probably launch guests with the headless backend, the suggested command is:

$ VBoxHeadless --startvm <uid|name>

Please take into account that VBoxHeadless will not return until the guest has terminated its execution. To start a guest with VBoxHeadless over a remote connection to your server, then, you should use nohup so that it doesn't terminate when the shell does:

$ nohup VBoxHeadless --startvm <uid|name> &

What if my ssh session won't exit?

You might experience a strange issue with VBoxHeadless that is related to (Open)SSH behavior. After issuing the previous command, the ssh session will seem to hang on exit until the guest terminates. This issue is not caused by VBoxHeadless but by (Open)SSH's behavior; please read this post for an explanation. Meanwhile, the workarounds I'm aware of are the following: either invoke VBoxHeadless using /dev/null as standard input:

$ nohup VBoxHeadless --startvm <uid|name> < /dev/null &

or manually terminate the ssh session with the ~. escape sequence after issuing the exit command.

Accessing a Remote VirtualBox Guest with RDP

VirtualBox has a built-in RDP facility that lets you access a guest console remotely using the RDP protocol. If you start a headless guest, the VirtualBox RDP server is enabled by default. To access the instance remotely, then, a suitable client such as rdesktop (for UNIX systems) is sufficient.

Stopping a VirtualBox Guest

To stop a VirtualBox guest you could either:
  • Launch the shutdown sequence from within the guest itself, which is the procedure I recommend.
  • Use VBoxManage controlvm to send a suitable signal to the guest, such as acpipowerbutton, acpisleepbutton or a hard poweroff:

$ VBoxManage controlvm <uid|name>
  pause|resume|poweroff|savestate|
    acpipowerbutton|acpisleepbutton

As outlined in the syntax of the preceding example, VirtualBox also lets you pause a virtual machine or even save its state to disk for a later quick resume.
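For instance, requesting a graceful ACPI shutdown of a guest and then checking its state could look like this. The block is a dry run (commands are echoed, not executed) and the guest name is an assumption:

```shell
# Dry run: echo the commands into a plan file instead of executing them.
: >/tmp/ctl-plan.txt
run() { printf '+ %s\n' "$*" >>/tmp/ctl-plan.txt; }
run VBoxManage controlvm debian-guest acpipowerbutton      # ask the guest OS to shut down
run VBoxManage showvminfo debian-guest --machinereadable   # VMState=... shows progress
cat /tmp/ctl-plan.txt
```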

Teleporting a VirtualBox Guest to Another Server

VirtualBox now supports guest instance teleporting. Teleporting lets you move a running instance to another server with minimal service disruption. To teleport a (teleport-enabled) guest to another (VirtualBox-enabled) machine, you can just issue the following command:

$ VBoxManage controlvm <uid|name> \
  teleport --host <name> --port <port> \
  [--maxdowntime <ms>] \
  [--password <passwd>]
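Note that the target machine must be prepared first: the guest on the target side has to be configured as a teleport target and started, at which point it waits for the incoming instance. A dry-run sketch (the commands are echoed, not executed; the guest name and port are assumptions):

```shell
# Dry run: echo the commands into a plan file instead of executing them.
: >/tmp/tp-plan.txt
run() { printf '+ %s\n' "$*" >>/tmp/tp-plan.txt; }
# On the target host: enable teleporting and wait for the source.
run VBoxManage modifyvm debian-guest --teleporter on --teleporterport 6000
run VBoxHeadless --startvm debian-guest   # blocks, waiting for the teleport
cat /tmp/tp-plan.txt
```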

Flush Requests

VirtualBox, by default, might ignore IDE/SATA FLUSH requests issued by its guests. This is an issue if you're using Solaris ZFS, which, by design, assumes that FLUSH requests are never ignored. In that case, just configure your virtual machine not to ignore such requests.

For IDE disks:

$ VBoxManage setextradata "name" \
  "VBoxInternal/Devices/piix3ide/0/LUN#[x]/Config/IgnoreFlush" 0

The x parameter selects the disk:

Value  Description
0      Primary master
1      Primary slave
2      Secondary master
3      Secondary slave


For SATA disks:

$ VBoxManage setextradata "name" \
  "VBoxInternal/Devices/ahci/0/LUN#[x]/Config/IgnoreFlush" 0

In this case the x parameter is just the disk number.

Next Step

As you can see, VirtualBox is a sophisticated piece of software that is now ready for basic enterprise server virtualization. This post just shows you the beginning, though: VirtualBox offers many other services I haven't covered (yet), and the Solaris operating system offers rock-solid enterprise services that will enhance your overall VirtualBox experience when used as a host.

If you're planning to virtualize guest operating systems in your environment and your requirements fit the picture, I suggest you strongly consider using VirtualBox on a Solaris host.

If you already use Solaris, VirtualBox will live alongside other Solaris virtualization facilities such as Solaris Zones.



Getting Started with Solaris Network Virtualization ("Crossbow")

Solaris Network Virtualization

The aim of the OpenSolaris Crossbow project is to bring a flexible network virtualization and resource control layer to Solaris. A Crossbow-enabled version of Solaris lets the administrator create virtual NICs (and virtual switches) which, from the standpoint of a guest operating system or Zone, are indistinguishable from physical NICs. You can create as many virtual NICs as your guests need and configure them independently. More information on Crossbow and official documentation can be found on the project's homepage.

This post is just a quick walkthrough to get started with Solaris Network Virtualization capabilities.

Creating a VNIC

To create a VNIC on a Solaris host, you can use the procedure described below. First, show the physical links and decide which one you'll use:

$ dladm show-link
LINK        CLASS     MTU    STATE    BRIDGE     OVER
e1000g0     phys      1500   up       --         --
vboxnet0    phys      1500   unknown  --         --

On this machine I only have one physical link, e1000g0. Create a VNIC using the physical NIC you chose:

# dladm create-vnic -l e1000g0 vnic1

Your VNIC is now created and you can use it with Solaris network monitoring and management tools:

$ dladm show-link
LINK        CLASS     MTU    STATE    BRIDGE     OVER
e1000g0     phys      1500   up       --         --
vboxnet0    phys      1500   unknown  --         --
vnic1       vnic      1500   up       --         e1000g0

Note that a random MAC address has been chosen for your VNIC:

$ dladm show-vnic
LINK         OVER         SPEED  MACADDRESS        MACADDRTYPE         VID
vnic1        e1000g0      100    2:8:20:a8:af:ce   random              0

You can now use your VNIC as if it were a "classical" physical link: plumb it and bring it up with the usual Solaris procedures, such as ifconfig and the Solaris network configuration files.
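For example, plumbing the VNIC and giving it an address could look like this. The block is a dry run (the commands are echoed, not executed) and the address is hypothetical:

```shell
# Dry run: echo the commands into a plan file instead of executing them.
: >/tmp/vnic-plan.txt
run() { printf '+ %s\n' "$*" >>/tmp/vnic-plan.txt; }
run ifconfig vnic1 plumb                     # make the VNIC known to the IP layer
run ifconfig vnic1 192.168.56.10/24 up       # hypothetical address
cat /tmp/vnic-plan.txt
```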

Resource Control

Solaris network virtualization is tightly integrated with Solaris Resource Control. After a VNIC is created you can attach resource control parameters to it such as a control for maximum bandwidth consumption or CPU usage.

Bandwidth Management

As if it were a physical link, you can use the dladm command to establish a maximum bandwidth limit on a whole VNIC:

# dladm set-linkprop -p maxbw=300 vnic4
# dladm show-linkprop vnic4
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
vnic4        autopush        --   --             --             -- 
vnic4        zone            rw   --             --             -- 
vnic4        state           r-   unknown        up             up,down 
vnic4        mtu             r-   1500           1500           1500 
vnic4        maxbw           rw     300          --             -- 
vnic4        cpus            rw   --             --             -- 
vnic4        priority        rw   high           high           low,medium,high 
vnic4        tagmode         rw   vlanonly       vlanonly       normal,vlanonly 
vnic4        protection      rw   --             --             mac-nospoof,
                                                                ip-nospoof,
                                                                restricted 
vnic4        allowed-ips     rw   --             --             -- 

vnic4's maximum bandwidth limit is now set to 300 Mbps (dladm interprets unitless maxbw values as Mbps).

If you want to read an introduction to Solaris Projects and Resource Control you can read this blog post.

Using VNICs

VNICs are useful in a variety of use cases. They are one of the building blocks of the full-fledged network virtualization layer offered by Solaris, and the ability to create VNICs on the fly opens the door to complex network setups and resource control policies.

VNICs are especially useful when used in conjunction with other virtualization technologies such as:
  • Solaris Zones.
  • Oracle VM.
  • Oracle VM VirtualBox.

Using VNICs with Solaris Zones

Solaris Zones can use a shared or an exclusive IP stack. An exclusive IP stack has its own instances of the TCP/IP stack variables, which are not shared with the global zone. This basically means that a Solaris Zone with an exclusive IP stack can have:
  • Its own routing table.
  • Its own ARP table.

and its own value for whatever other parameter Solaris lets you set on an IP stack.

Before Crossbow, the limited number of physical links on a server was a serious problem when you needed to set up a large number of Solaris Zones with exclusive IP stacks. Crossbow removes that limit: having a large number of exclusive-IP non-global Zones is no longer an issue.
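Dedicating a VNIC to an exclusive-IP zone takes just a few zonecfg commands. The sketch below is a dry run (the commands are echoed, not executed) and the zone name is hypothetical:

```shell
# Dry run: echo the commands into a plan file instead of executing them.
: >/tmp/zone-plan.txt
run() { printf '+ %s\n' "$*" >>/tmp/zone-plan.txt; }
# Give the zone its own IP stack and dedicate vnic1 to it.
run zonecfg -z webzone 'set ip-type=exclusive; add net; set physical=vnic1; end'
cat /tmp/zone-plan.txt
```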

Other Virtualization Software

The same reasoning applies to other virtualization software such as Oracle VM or Oracle VM VirtualBox. For every guest instance, you create the VNICs you need for the exclusive use of that guest operating system.

In another post I'll focus on VirtualBox and describe how VNICs can be used with its guests.

Next Steps

There's more to Solaris Network Virtualization; these are just the basics. For instance, you can fully virtualize a network topology by using:
  • VNICs.
  • Virtual Switches.
  • Etherstubs.
  • VLANs.

As far as resource control is concerned, the bandwidth limit is just the beginning. Solaris Network Virtualization lets you finely control your VNIC usage on a:
  • Per-transport basis.
  • Per-protocol basis.
  • CPU consumption per VNIC basis.

To discover what else Solaris Network Virtualization can do for you, keep on reading this blog and check out the official project documentation. You could also install an OpenSolaris guest with VirtualBox and experiment for yourself: there's nothing like a hands-on session.






Sunday, June 6, 2010

Registering JIRA as a Solaris SMF-Managed Service

My company and many of its clients use Atlassian JIRA as a bug or project tracking solution. JIRA is distributed in two bundles:
  • As a Java EE application to deploy on a compliant application server.
  • As a standalone distribution bundled with Apache Tomcat.

Unless you already have a Java EE application server to leverage, or some other requirement to take into account, I usually recommend installing the JIRA standalone distribution. It's very easy to install, configure and deploy. I would also recommend deploying JIRA on Solaris.

Solaris has differential advantages when compared with other operating systems. Just to name a few (so as not to repeat myself over and over):
  • ZFS and ZFS snapshots and clones.
  • Zones.
  • DTrace.
  • RBAC, Projects and Resource Caps.
  • SMF.

A Typical Deployment Scenario

My typical deployment scenario is the following:
  • I deploy JIRA standalone (once for each JIRA version) on a dedicated ZFS filesystem.
  • I snapshot and clone the corresponding filesystem whenever I need a new JIRA instance.
  • Every JIRA instance is configured and executed on a dedicated Solaris 10 Sparse Zone.
  • I define a resource cap for each zone that runs JIRA.
  • I configure JIRA as a Solaris SMF-Managed Service.

Most of the previous tasks have been described earlier in this blog. An overview of a JIRA installation on Solaris was given in this post.

The purpose of this entry is to describe how to configure JIRA as a Solaris SMF-managed service.

Configuring JIRA with Solaris SMF

The often-praised advantages of the Solaris 10 Service Management Facility (SMF) include the ability for Solaris to check the health and dependencies of your services and take corrective action in case something goes wrong. Registering a service with SMF isn't strictly necessary, but I encourage you to do so. Although JIRA is pretty easy to start and stop and has very few dependencies (or none) on other services, preparing a service manifest will enable you to install and deploy JIRA on Solaris in a matter of minutes.

Dependencies

As stated above, JIRA's dependencies really depend on your deployment. As a minimum, JIRA's global dependencies will include the following two (obvious) services:
  • svc:/milestone/network:default
  • svc:/system/filesystem/local:default

Because of its very nature, there's probably no reason to have JIRA instances running when network services are not available. The dependency on the local filesystem service is obvious, too.

Any other dependency will probably be a service instance dependency. Since JIRA, in the simplest deployment scenario, just depends on a database, we will assume that our JIRA instance depends on a specific Solaris PostgreSQL service:
  • svc:/application/database/postgresql:version_82

Our first manifest fragment will be the following:

<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='export'>
  <service name='application/atlassian/jira' type='service' version='0'>
    <dependency name='network' grouping='require_all' restart_on='none' type='service'>
      <service_fmri value='svc:/milestone/network:default'/>
    </dependency>
    <dependency name='filesystem-local' grouping='require_all' restart_on='none' type='service'>
      <service_fmri value='svc:/system/filesystem/local:default'/>
    </dependency>
    
    <instance name='version_411' enabled='true'>
      <dependency name='postgresql' grouping='require_all' restart_on='none' type='service'>
        <service_fmri value='svc:/application/database/postgresql:version_82'/>
      </dependency>
      
    </instance>
    <stability value='Unstable'/>
    <template>
      <common_name>
        <loctext xml:lang='C'>Atlassian JIRA</loctext>
      </common_name>
    </template>
  </service>
</service_bundle>

Beware that in this example we're using PostgreSQL services as defined in Solaris 10 05/10. Please check your operating system to define correct dependencies. A missing dependency will make Solaris put the service into the offline state until the dependency can be resolved.

Execution Methods

The next thing to do is to define JIRA's execution methods. We could directly use JIRA's startup.sh and shutdown.sh scripts, but we want to avoid coupling the SMF service definition with JIRA implementation details. Rather, we will take the common approach of writing a script that implements the execution methods. This approach, moreover, lets us parametrize distinct JIRA instances in the SMF manifest and thus manage multiple JIRA instances with a single SMF service manifest and one execution methods script.

A draft script is the following:

#!/sbin/sh
#
#
#ident  "@(#)JIRA 4.1.1   06/05/10 SMI"

. /lib/svc/share/smf_include.sh

# SMF_FMRI is the name of the target service.
# This allows multiple instances 
# to use the same script.

getproparg() {
  val=`svcprop -p $1 $SMF_FMRI`
  [ -n "$val" ] && echo $val
}

# ATLBIN is the JIRA installation directory
# of each JIRA instance.
ATLBIN=`getproparg atlassian/bin`

# JIRASTARTUP is JIRA startup script
JIRASTARTUP=$ATLBIN/bin/startup.sh

# JIRASHUTDOWN is JIRA shutdown script
JIRASHUTDOWN=$ATLBIN/bin/shutdown.sh

# Check if the SMF framework is correctly initialized.
if [ -z "$SMF_FMRI" ]; then
  echo "Error: SMF framework variables are not initialized."
  exit $SMF_EXIT_ERR
fi

# check if JIRA scripts are available
if [ ! -x "$JIRASTARTUP" ]; then
  echo "Error: JIRA startup script cannot be found."
  exit $SMF_EXIT_ERR
fi

if [ ! -x "$JIRASHUTDOWN" ]; then
  echo "Error: JIRA shutdown script cannot be found."
  exit $SMF_EXIT_ERR
fi

case "$1" in
'start')
  $JIRASTARTUP
  ;;

'stop')
  $JIRASHUTDOWN
  ;;

*)
  echo "Usage: $0 {start|stop}"
  exit 1
  ;;

esac
exit $SMF_EXIT_OK
 
The previous script just defines the start and stop service methods. As far as I know there's no easy JIRA refresh method, so we will skip it in our script.

Service Instance Parameters

As briefly explained in the previous section, SMF services can be parametrized by virtue of an easy syntax to declare parameters in the service manifest and use them in the service method script. In the previous example we're using a property, called atlassian/bin, to declare the installation directory of each of our JIRA instances. Let's declare it in our service manifest for a fictional JIRA 4.1.1 instance:

<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='export'>
  <service name='application/atlassian/jira' type='service' version='0'>
    <dependency name='network' grouping='require_all' restart_on='none' type='service'>
      <service_fmri value='svc:/milestone/network:default'/>
    </dependency>
    <dependency name='filesystem-local' grouping='require_all' restart_on='none' type='service'>
      <service_fmri value='svc:/system/filesystem/local:default'/>
    </dependency>
    <exec_method name='start' type='method' exec='/root/bin/svc/method/atlassian/jira start' timeout_seconds='60'/>
    <exec_method name='stop' type='method' exec='/root/bin/svc/method/atlassian/jira stop' timeout_seconds='60'/>
    <instance name='version_411' enabled='true'>
      <dependency name='postgresql' grouping='require_all' restart_on='none' type='service'>
        <service_fmri value='svc:/application/database/postgresql:version_82'/>
      </dependency>
      <property_group name='atlassian' type='application'>
        <propval name='bin' type='astring'
          value='/opt/atlassian/atlassian-jira-enterprise-4.1.1-standalone'/>
      </property_group>
    </instance>
    <stability value='Unstable'/>
    <template>
      <common_name>
        <loctext xml:lang='C'>Atlassian JIRA</loctext>
      </common_name>
    </template>
  </service>
</service_bundle>

Environment Variables

JIRA needs at least the JAVA_HOME environment variable to be set for its startup and shutdown scripts to work correctly. Thus we need to modify our manifest so that SMF sets it before launching the execution methods:

<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='export'>
  <service name='application/atlassian/jira' type='service' version='0'>
    <dependency name='network' grouping='require_all' restart_on='none' type='service'>
      <service_fmri value='svc:/milestone/network:default'/>
    </dependency>
    <dependency name='filesystem-local' grouping='require_all' restart_on='none' type='service'>
      <service_fmri value='svc:/system/filesystem/local:default'/>
    </dependency>
    <exec_method name='start' type='method' exec='/root/bin/svc/method/atlassian/jira start' timeout_seconds='60'>
      <method_context>
        <method_environment>
          <envvar name='JAVA_HOME' value='/opt/sun/jdk/latest'/>
        </method_environment>
      </method_context>
    </exec_method>
    <exec_method name='stop' type='method' exec='/root/bin/svc/method/atlassian/jira stop' timeout_seconds='60'>
      <method_context>
        <method_environment>
          <envvar name='JAVA_HOME' value='/opt/sun/jdk/latest'/>
        </method_environment>
      </method_context>
    </exec_method>
    <instance name='version_411' enabled='true'>
      <dependency name='postgresql' grouping='require_all' restart_on='none' type='service'>
        <service_fmri value='svc:/application/database/postgresql:version_82'/>
      </dependency>
      <property_group name='atlassian' type='application'>
        <propval name='bin' type='astring' value='/opt/atlassian/atlassian-jira-enterprise-4.1.1-standalone'/>
      </property_group>
    </instance>
    <stability value='Unstable'/>
    <template>
      <common_name>
        <loctext xml:lang='C'>Atlassian JIRA</loctext>
      </common_name>
    </template>
  </service>
</service_bundle>

Credentials, Projects and Resources

As explained in another post, SMF service manifests let the administrator specify advanced parameters for a service instance, such as:
  • The credentials under which the service instance will be executed.
  • The Solaris project in which the service instance will be executed.
  • The resource pool assigned to a service instance.

In this case, since this is a multi-instance service manifest, we will let the administrator declare such parameters in every service instance. This way, the maximum degree of service parametrization is achieved:

<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='export'>
  <service name='application/atlassian/jira' type='service' version='0'>
    <dependency name='network' grouping='require_all' restart_on='none' type='service'>
      <service_fmri value='svc:/milestone/network:default'/>
    </dependency>
    <dependency name='filesystem-local' grouping='require_all' restart_on='none' type='service'>
      <service_fmri value='svc:/system/filesystem/local:default'/>
    </dependency>
    <exec_method name='start' type='method' exec='/root/bin/svc/method/atlassian/jira start' timeout_seconds='60'>
      <method_context>
        <method_environment>
          <envvar name='JAVA_HOME' value='/opt/sun/jdk/latest'/>
        </method_environment>
      </method_context>
    </exec_method>
    <exec_method name='stop' type='method' exec='/root/bin/svc/method/atlassian/jira stop' timeout_seconds='60'>
      <method_context>
        <method_environment>
          <envvar name='JAVA_HOME' value='/opt/sun/jdk/latest'/>
        </method_environment>
      </method_context>
    </exec_method>
    <instance name='version_411' enabled='true'>
      <dependency name='postgresql' grouping='require_all' restart_on='none' type='service'>
        <service_fmri value='svc:/application/database/postgresql:version_82'/>
      </dependency>
      <property_group name='method_context' type='framework'>
        <propval name='project' type='astring' value=':default'/>
        <propval name='resource_pool' type='astring' value=':default'/>
        <propval name='working_directory' type='astring' value=':default'/>
      </property_group>
      <property_group name='atlassian' type='application'>
        <propval name='bin' type='astring' value='/opt/atlassian/atlassian-jira-enterprise-4.1.1-standalone'/>
      </property_group>
    </instance>
    <stability value='Unstable'/>
    <template>
      <common_name>
        <loctext xml:lang='C'>Atlassian JIRA</loctext>
      </common_name>
    </template>
  </service>
</service_bundle>

Please note the slightly different syntax for the method_context attributes with respect to the example in my previous post; this gives you an idea of the flexibility of SMF manifests. In that example, distinct execution methods were specified for every service instance. In this case, we specified the same execution methods for every JIRA instance and parametrization occurs in the service instance property groups. The same script, then, can be properly parametrized and reused for any of the instances you'll control with SMF.
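As a quick check after importing such a manifest, you can verify that a per-instance property is visible to the execution method script with svcprop (the FMRI and property below are the ones defined in the example above):

$ svcprop -p atlassian/bin svc:/application/atlassian/jira:version_411
/opt/atlassian/atlassian-jira-enterprise-4.1.1-standalone

This is exactly the lookup that the getproparg function in the method script performs at service start time.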

Privileges Required to Open a Privileged Network Port

If you're running JIRA as a non-root user (which is considered a best practice, although arguably less critical if you use a dedicated Solaris 10 zone) you should be aware that a specific privilege is required for Tomcat to open a privileged network port (< 1024) such as port 80 (the default HTTP port.) If you plan to run JIRA as a non-root user, then assign the net_privaddr privilege to the JIRA user as shown in the following manifest fragment:

[...snip...]
<exec_method name='start' type='method' exec='/root/bin/svc/method/atlassian/jira start' timeout_seconds='60'>
  <method_context>
    <method_credential
      user='jira'
      group='jira'
      privileges='basic,net_privaddr' />

[...snip...]


Conclusion

The following manifest and script can be used as a starting point for a working JIRA SMF service. It's up to you, the system administrator, to apply the modifications needed to fit them into your execution environment.

Service Manifest

<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='export'>
  <service name='application/atlassian/jira' type='service' version='0'>
    <dependency name='network' grouping='require_all' restart_on='none' type='service'>
      <service_fmri value='svc:/milestone/network:default'/>
    </dependency>
    <dependency name='filesystem-local' grouping='require_all' restart_on='none' type='service'>
      <service_fmri value='svc:/system/filesystem/local:default'/>
    </dependency>
    <exec_method name='start' type='method' exec='/root/bin/svc/method/atlassian/jira start' timeout_seconds='60'>
      <method_context>
        <method_environment>
          <envvar name='JAVA_HOME' value='/opt/sun/jdk/latest'/>
        </method_environment>
      </method_context>
    </exec_method>
    <exec_method name='stop' type='method' exec='/root/bin/svc/method/atlassian/jira stop' timeout_seconds='60'>
      <method_context>
        <method_environment>
          <envvar name='JAVA_HOME' value='/opt/sun/jdk/latest'/>
        </method_environment>
      </method_context>
    </exec_method>
    <instance name='version_411' enabled='true'>
      <dependency name='postgresql' grouping='require_all' restart_on='none' type='service'>
        <service_fmri value='svc:/application/database/postgresql:version_82'/>
      </dependency>
      <property_group name='method_context' type='framework'>
        <propval name='project' type='astring' value=':default'/>
        <propval name='resource_pool' type='astring' value=':default'/>
        <propval name='working_directory' type='astring' value=':default'/>
      </property_group>
      <property_group name='atlassian' type='application'>
        <propval name='bin' type='astring' value='/opt/atlassian/atlassian-jira-enterprise-4.1.1-standalone'/>
      </property_group>
    </instance>
    <stability value='Unstable'/>
    <template>
      <common_name>
        <loctext xml:lang='C'>Atlassian JIRA</loctext>
      </common_name>
    </template>
  </service>
</service_bundle>

Execution Methods Script

#!/sbin/sh
#
#
#ident  "@(#)JIRA 4.1.1   06/05/10 SMI"

. /lib/svc/share/smf_include.sh

# SMF_FMRI is the name of the target service.
# This allows multiple instances 
# to use the same script.

getproparg() {
  val=`svcprop -p "$1" "$SMF_FMRI"`
  [ -n "$val" ] && echo "$val"
}

# ATLBIN is the JIRA installation directory
# of each JIRA instance.
ATLBIN=`getproparg atlassian/bin`

# JIRASTARTUP is the JIRA startup script
JIRASTARTUP=$ATLBIN/bin/startup.sh

# JIRASHUTDOWN is the JIRA shutdown script
JIRASHUTDOWN=$ATLBIN/bin/shutdown.sh

# Check if the SMF framework is correctly initialized.
if [ -z "$SMF_FMRI" ]; then
  echo "Error: SMF framework variables are not initialized."
  exit $SMF_EXIT_ERR
fi

# Check if the JIRA scripts are available.
if [ ! -x "$JIRASTARTUP" ]; then
  echo "Error: JIRA startup script cannot be found."
  exit $SMF_EXIT_ERR
fi

if [ ! -x "$JIRASHUTDOWN" ]; then
  echo "Error: JIRA shutdown script cannot be found."
  exit $SMF_EXIT_ERR
fi

case "$1" in
'start')
  $JIRASTARTUP
  ;;

'stop')
  $JIRASHUTDOWN
  ;;

*)
  echo "Usage: $0 {start|stop}"
  exit 1
  ;;

esac
exit $SMF_EXIT_OK
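Once you have saved the manifest and the method script, a typical deployment sequence looks like the following (the file name jira.xml is just an example):

# svccfg validate jira.xml
# svccfg import jira.xml
# svcadm enable svc:/application/atlassian/jira:version_411
$ svcs -l svc:/application/atlassian/jira:version_411

Validating the manifest before importing it will save you from most XML and DTD mistakes.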

Credentials and Projects for a Solaris 10 SMF-Managed Service

Introduction

A few posts ago we learnt about basic Solaris 10 Projects and Resource Caps administration. In that article we examined the Solaris 10 Projects facility and how it enables the administrator to group and organize running processes into tasks running in a project context. Since projects may define resource caps, processes running inside a project will be subject to such caps.

In the previous post we discovered how easy it is for an administrator to define Solaris 10 projects and assign processes to them. We also said that there was a little gotcha: how would an administrator define the credentials (and/or the Solaris 10 project) under which an SMF-managed service will run? In this article we'll explore the basic Solaris 10 SMF service manifest DTD to discover how to define user credentials, privileges and projects for a specific SMF service instance.

The SMF Service Manifest DTD

Solaris 10 SMF uses manifests to describe the characteristics of a managed service. The manifest DTD (Solaris 10, version 1) can be found at:

/usr/share/lib/xml/dtd/service_bundle.dtd.1

Although in some cases there's no need for an administrator to write a manifest for an SMF service (as in the case of inetd-managed services, because Solaris 10 will build a manifest for you), it's worth taking a quick glance at the service DTD, at least to discover what can be done.

Method Context

The method context, defined by:

<!ELEMENT method_context
  ( (method_profile | method_credential)?,
    method_environment? ) >

<!ATTLIST method_context
  working_directory CDATA ":default"
  project           CDATA ":default"
  resource_pool     CDATA ":default" >

lets the administrator define credentials and resource management attributes for an execution method. The method_context element defines the following three attributes:

working_directory: The working directory to launch the execution method from. The :default token indicates the home directory of the user specified for the method (by means of the method_profile or the method_credential element.)

project: The project under which to run the current execution method. The project can be specified in either the numeric or the text form. The :default token indicates the default project of the user specified (see the working_directory description above.)

resource_pool: The resource pool name to launch the method on. The :default token indicates the default pool of the specified project (see the project description above.)
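Putting it together, a method_context that pins the execution method to a specific project while leaving the other attributes at their defaults would look like the following fragment (the project name user.jira is just a hypothetical example):

<method_context project='user.jira' resource_pool=':default' working_directory=':default'>
  <method_credential user='jira' group='jira'/>
</method_context>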

Method Credentials

The method credentials, as outlined in the previous section, can be specified with the optional method_credential subelement of the method_context element. The method_credential element is defined as follows:

<!ELEMENT method_credential EMPTY>

<!ATTLIST method_credential
  user             CDATA #REQUIRED
  group            CDATA ":default"
  supp_groups      CDATA ":default"
  privileges       CDATA ":default"
  limit_privileges CDATA ":default" >

The attributes of the method_credential element are defined as follows:

user: The user id for the current execution method. The user id can be specified in either the numeric or the text form.

group: The group id for the current execution method. The group id can be specified in either the numeric or the text form. The :default token can be used to specify the default group of the specified user (see the user attribute above.)

supp_groups: Optional supplemental groups to associate with the current execution method. A list of group ids can be specified using a space as a separator. If absent, or when the :default token is specified, initgroups(3C) will be used.

privileges: An optional privilege set.

limit_privileges: An optional limit privilege set.

If you're wondering about privileges, please check out the official Solaris RBAC documentation.

An Example Manifest

Many Solaris 10 bundled SMF services use RBAC, hence their manifests make extensive use of such elements. You can, for example, check the default PostgreSQL service manifest:

$ svccfg export postgresql
[...snip...]
<instance name='version_81' enabled='false'>
  <method_context project=':default' resource_pool=':default' working_directory=':default'>
    <method_credential group='postgres' limit_privileges=':default' privileges=':default' supp_groups=':default' user='postgres'/>
  </method_context>
[...snip...]

In the service manifest fragment above you can see how the postgresql service definition leverages RBAC and projects to define credentials, privileges, resource pools and caps for the default PostgreSQL service.

Flexibility and Simplicity

This mechanism is indeed simple and flexible: since Solaris 10 RBAC and Solaris 10 Projects use loosely coupled layers to define privileges and resource caps, it's easy to establish a relation between a user and a project (or a privilege set, or whatever), and then Solaris will do the rest for you. As you can see in the previous manifest fragment, most of the tokens are :default. The PostgreSQL processes will run under the specified user credentials (postgres) and all of the other parameters (resource pool, project, working directory, privileges, etc.) will be resolved at runtime by Solaris.

Should you need to cap PostgreSQL resources, just define a suitable project and establish it as the default project for the postgres user: you won't (almost) ever need to change the service manifest.
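As a sketch of that approach (the cap value is hypothetical), you could define a project named user.postgres, which Solaris will pick up as the default project for the postgres user, and make sure the resource capping daemon is running:

# projadd -c "PostgreSQL" -U postgres -K "rcap.max-rss=2GB" user.postgres
# svcadm enable svc:/system/rcap:default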

Inetd-Managed Services

As explained in another post, configuring an inetd-managed service on Solaris 10 is almost a one-liner. There was a gotcha, though. Although the user specified with the legacy inetd syntax was assigned to a default project with a resource cap, I noticed that Solaris 10 wasn't honoring it. Let's check the manifest that Solaris 10 generated for us:

$ svccfg export svc:/network/svn/tcp
[...snip...]
<exec_method name='inetd_start' type='method' exec='/opt/csw/bin/svnserve -i -r /home/svnuser/svnrepos' timeout_seconds='0'>
  <method_context>
    <method_credential user='svnuser' group='svngroup'/>
  </method_context>
</exec_method>
[...snip...]

There's no default project specified indeed! Note that a missing project attribute does not tell Solaris 10 to use the default project of the specified user. To solve this, just dump the service definition to a file, modify it and reimport it:

$ svccfg export svc:/network/svn/tcp > svn.xml
$ [add project=':default' attribute]
# svccfg -v import svn.xml

Just check that the service manifest has been correctly imported and restart the service.
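For example, you can export the service again to make sure the attribute is in place, and then restart the instance:

$ svccfg export svc:/network/svn/tcp | grep method_context
# svcadm restart svc:/network/svn/tcp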

Conclusion

Solaris 10 offers great and sophisticated tools to ease the life of the system administrator and enable the deployment of flexible and controlled execution environments. But beware: sophisticated does not imply complicated in the Solaris 10 universe. On the contrary: the more I discover and use Solaris 10, the more I admire and appreciate its clean and usable administration tools. As a minimum, I never have to modify a descriptor by hand: there always is a command line interface tool to protect me from errors and to ease my scripting experience. Just look for the corresponding *adm or *cfg command to do your job. Solaris 10 documentation and the community around OpenSolaris, moreover, are great places to go and find the help you need.

For an alternate example that emphasizes the flexibility of SMF service manifests, have a look at this blog post.

Enjoy. 



Saturday, June 5, 2010

Valentino Rossi Breaks His Leg at the Mugello


Nine-time MotoGP champion Valentino Rossi severely broke his leg during Saturday's MotoGP session at Mugello, Italy. The Italian champion is on his way to the hospital right now. According to the latest press releases, Rossi will be out for at least three months.

Good luck, Valentino.

Friday, June 4, 2010

VirtualBox as a Server Virtualization Platform

I've been a faithful VirtualBox user since its first versions for the Solaris platform. Its simplicity and its Solaris support made it a perfect choice for me, a casual user of some Windows virtual machines. I could not imagine that VirtualBox would grow so rapidly into the kind of platform it is today. I'm really grateful to the VirtualBox community and to Sun for this great piece of software.

I'm using VirtualBox even on my MacBook: it's especially useful for me to build demos. A few days ago I built a Sun Ray Server demo with a virtualized OpenSolaris (2009.06 upgraded to b134) on my Mac in a matter of minutes.


VirtualBox as a Server Virtualization Platform

VirtualBox, as many of you will know, is also used as a server virtualization platform. Sun Virtual Desktop Infrastructure itself can use VirtualBox behind the scenes. Years ago, the major stopper for such a use, in my opinion, was the lack of a functional virtualized network stack. Since the introduction of Crossbow in Solaris and the Bridged Adapter functionality in VirtualBox, there's no reason I wouldn't run VirtualBox on a server (except resource consumption considerations that are off topic here.)

Having said that, in my office I get an enormous advantage from using VirtualBox on some Sun Fire servers with ZFS. As I mentioned in older posts (see here and here), I take advantage of ZFS snapshots and clones to build a set of preconfigured virtual machines and clone them whenever I register a new instance. Making a ZFS clone and registering a new VirtualBox virtual machine is a no-brainer. You can (and should...) even script it using the VirtualBox command line interfaces (such as VBoxManage.) Horizontal scalability of such an environment is very good and, moreover, the infrastructure is very cheap. Some Sun Fire X2270 M2 servers and OpenStorage appliances have lowered the TCO of our desktop and server platforms. Nowadays, we estimate that the typical virtual machine we're running costs us about $60 per month (including server housing fees in a datacenter.)
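A minimal sketch of such a cloning sequence could look like the following (dataset and virtual machine names are hypothetical, and the exact VBoxManage subcommands may vary between VirtualBox versions):

# zfs snapshot tank/vbox/golden@base
# zfs clone tank/vbox/golden@base tank/vbox/vm01
# VBoxManage createvm --name vm01 --register
# VBoxManage storagectl vm01 --name "SATA" --add sata
# VBoxManage storageattach vm01 --storagectl "SATA" --port 0 --device 0 \
    --type hdd --medium /tank/vbox/vm01/disk.vdi

Since the clone shares unmodified blocks with its snapshot, every additional virtual machine initially costs you almost no storage at all.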

If the command line interface falls short and you need more advanced administration tools, you should be aware of the VirtualBox Web Console project. Honestly, I don't use it because our environment is relatively small, but it surely has a place in a large corporate environment.

Remote Virtual Machines and Desktop Virtualization

For the reasons stated above, I no longer execute virtual machines on my own laptop except when the virtual machine will be used in an external environment (as in the case of demos for our clients.) Whenever I need to use a virtual machine, unless it's already running, I just start it remotely:

$ VBoxManage startvm your-vm-name --type vrdp

The --type vrdp flag is necessary if you want to remotely connect to the virtual machine using the internal VirtualBox RDP implementation. You can even launch the command over ssh (in my case, certificate authentication is configured so that I can safely script it):

$ ssh your-server VBoxManage [...]

The last thing you need is an RDP client for your client platform. Solaris has got a couple of clients bundled with it; in the case of OS X I just use Microsoft Remote Desktop Connection (yes, I know...)


One virtual machine per user obviously is not the way to go to provide a virtualized desktop experience to your users. If you want to do that, you can take a step forward, use a VirtualBox-powered Sun Virtual Desktop Infrastructure and give your users access through Sun Ray clients. This way, we reduced the TCO of desktop computers for a wide range of user profiles. When feasible, we later moved such users to a Solaris desktop and spared Windows license fees.

Is It Ready for the Enterprise?

I would say that it depends on your requirements. I consider VirtualBox a mature and very stable product. Together with Solaris, VirtualBox can bring you plenty of advantages:
  • Reduced storage consumption: Solaris has got two (yes, two) built-in technologies that can help you deduplicate your data and spare costly space on your storage infrastructure: ZFS snapshots and clones, and ZFS deduplication. ZFS snapshots and clones alone have helped us incredibly reduce the footprint of our virtual machine disk images in our storage arrays.
  • Observability: DTrace. Just read the documentation and learn what DTrace can do for you, even on a production system, with no downtime and no performance penalty.
  • Network Virtualization: Project Crossbow has given Solaris a very flexible network virtualization stack. Creating virtual entities such as etherstubs, virtual network adapters and so on, is easy.
  • Predictive Self-Healing: Solaris has got a set of built-in features to improve observability, diagnostics and self-healing.
  • Teleportation: VirtualBox virtual machines can be teleported between a source and a target host while running.
  • Desktop virtualization with Sun Ray appliances.

These are just a few of the reasons why I consider that VirtualBox can be used in an enterprise environment. Undoubtedly, other server virtualization platforms offer characteristics that VirtualBox does not implement and you should review your requirements on a case by case basis. Sometimes tradeoffs are acceptable and more profitable than overkill solutions. Anyway, where VirtualBox isn't sufficient, there always is an Oracle VM out there.

Conclusions

VirtualBox is a very flexible product and, for the average user, is much simpler than full featured server virtualization platforms such as Sun xVM Server (now dead), Oracle VM, other Xen-based platforms or VMware ESX. If your requirements allow it, VirtualBox is a piece of software to take seriously into account. If you use Solaris, moreover, you will benefit from some of Solaris' great technologies such as DTrace, ZFS snapshots and clones, ZFS deduplication and the Solaris virtual network stack (Crossbow.)

In another blog post I will describe how VirtualBox, together with Solaris, can be an easy to manage and powerful solution for small scale deployments of an enterprise virtualization platform.


Thursday, June 3, 2010

Installing Sun Ray Server Software on OpenSolaris 2009.06

Overview

Yesterday I received my first Sun Ray client and was looking forward to trying it. The Sun Ray client is a display device which can be connected to a remote OS instance in basically two ways:
  • Using the Sun Ray Server Software.
  • Using Sun Virtual Desktop Infrastructure.

The Sun Ray client I'm using is a Sun Ray 2, which is a very low power device (about 4W), equipped with the following ports:
  • 1 DVI port
  • 1 serial port
  • 1 Ethernet RJ45 port

Sun Virtual Desktop Infrastructure (VDI) is a connection broker which gives Sun Ray devices access to a supported operating system such as Sun Microsystems' Solaris, Linux or Microsoft Windows, which must be executed by one of the virtualization technologies supported by VDI:
  • VirtualBox.
  • VMWare.
  • Microsoft Hyper-V.

Sun Ray Server Software (SRSS), on the other hand, is server software available for Solaris and Linux which gives Sun Ray clients remote access to a UNIX desktop session. Since we're only running Solaris on our machines, Sun Ray Server Software and Sun Rays are the quickest way to provide a low cost and effective desktop to all of our users.

Prerequisites

SRSS's prerequisites on Solaris are very simple: Solaris 10 05/09 or newer. That's where the fun begins. Sun Ray Server Software is supported on Solaris 10 but not (yet) on OpenSolaris. We'll be running Solaris 10 in our production environment but, for this proof of concept, I tried to use an already existing OpenSolaris virtual machine (OSOL 2009.06 upgraded to b134 from /dev) running on VirtualBox on a Mac OS X host. Taking into account the problems I've had in the past trying to run software supported on Solaris 10 (such as Sun Java Enterprise System) on OpenSolaris, I seriously considered installing a Solaris 10 VM and getting rid of all those problems that are a direct consequence of the great job the OpenSolaris developers are doing (especially package refactoring and changes to the Xorg installation paths.) In the end, curiosity killed the cat and now SRSS is running on OpenSolaris.

Installation Steps

First of all, download SRSS. You'll just need the following files:
  • Sun Ray Server Software 4.2.
  • Sun Ray Connector for Windows Operating System (only if you want to connect your Sun Ray client to a Windows operating system instance.)

Unzip SRSS in a temporary location on your Sun Ray server:

# unzip srss_4.2_solaris.zip

I will refer to this path as $SRSS from now on.

Bundled Apache Tomcat for the Sun Ray Admin GUI

If you plan to use the Sun Ray Admin GUI you should install a suitable web container (Servlet 2.4 and JSP 2.0 are required.) SRSS is bundled with Apache Tomcat which you can use to run the Admin GUI:

# cd $SRSS/Supplemental/Apache_Tomcat
# gtar -xvv -C /opt -f apache-tomcat-5.5.20.tar.gz

Please note that GNU tar is required to extract the Apache Tomcat bundle.

Since the default Apache Tomcat installation path used by SRSS is /opt/apache-tomcat you'd better make a symlink to your Apache Tomcat installation path:

# ln -s /opt/apache-tomcat-5.5.20 /opt/apache-tomcat

Installing SRSS

To launch the installation script for SRSS just run:

# $SRSS/utinstall

The script will just ask you a few questions. Be ready to provide the following:
  • JRE installation path.
  • Apache Tomcat installation path.

From now on, the SRSS installation path (/opt/SUNWut) will be referred to as SRSS_INST.

Upon script termination you are required to restart your Sun Ray server:

# init 6

Planning Your SRSS Network Topology

The first thing you've got to do is define your network topology. SRSS can be configured with or without a separate DHCP server, on private and shared networks, etc. The official SRSS documentation can give you hints on how to configure your server if you're in doubt. For the sake of this proof of concept, I'll choose the simplest network topology: a shared network with an existing DHCP server. For alternate configurations, please have a look at the official SRSS documentation.

To configure SRSS on a shared network using an external DHCP server all you've got to do is:

# $SRSS_INST/sbin/utadm -L on
# $SRSS_INST/sbin/utrestart

On OpenSolaris some required Solaris 10 packages were missing and the installation scripts correctly informed me about the situation. The missing packages can be installed with pkg:

# pkg install SUNWdhcs SUNWdhcsb SUNWdhcm

Configuring SRSS

SRSS has got an interactive configuration script which can be run to establish the initial SRSS configuration:

# $SRSS_INST/sbin/utconfig

Please take into account that the script will ask, amongst others, the following questions:
  • SRSS admin password.
  • Configuration parameters for the Admin GUI:
    • Tomcat path.
    • HTTP port.
    • HTTP admin port.
  • Whether you want to enable remote administration.
  • Whether you want to configure kiosk mode:
    • Kiosk user prefix.
    • Kiosk users' group.
    • Number of users.
  • Whether you want to configure a failover group.

To enable the use of GDM by SRSS you'll need to touch the following file:

# touch /etc/opt/SUNWut/ut_enable_gdm

Synchronize the Sun Ray DTU Firmware

The last step in the configuration process is synchronizing the Sun Ray DTU firmware:

# $SRSS_INST/sbin/utfwsync

SRSS Up and Running on Solaris 10

Solaris 10 configuration ends here and SRSS should now be up and running. In the next section I'll detail the workarounds needed to fix the quirks I've found while configuring SRSS on OpenSolaris.

Additional Configuration for OpenSolaris 2009.06 or >b134

As soon as I configured SRSS, I plugged my Sun Ray client in to see if it would work correctly. The Sun Ray client was discovering the SRSS server correctly but then hung with a 26 B error code. The SRSS logs were reporting that the GDM session was dying almost upon startup. So, there was a problem with GDM.

Fixing Bug 6803899

There's a known bug that affects $SRSS_INST/lib/utdtsession. Open it with vi and replace awk with nawk:

< tid=$(awk -F= '$1 == "TOKEN" {print $2;exit}' ${DISPDIR}/${dpyparm})
> tid=$(nawk -F= '$1 == "TOKEN" {print $2;exit}' ${DISPDIR}/${dpyparm})

NWAM

OpenSolaris has got a new SMF managed service to autoconfigure the network physical layer called NWAM. Using SRSS with NWAM (and with other server software as well) can be quirky. I suggest you disable NWAM and fall back to manual network configuration. More details on this can be found on official OpenSolaris documentation.
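Switching from NWAM to manual network configuration amounts to swapping the two network/physical service instances (interface configuration files such as /etc/hostname.<interface> must be in place beforehand):

# svcadm disable svc:/network/physical:nwam
# svcadm enable svc:/network/physical:default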

Motif

OpenSolaris is not shipped with the Motif libraries (and dependencies) required by SRSS. You can either ignore them and set up a new policy accordingly:

# $SRSS_INST/sbin/utpolicy -a -g -z both -D
# $SRSS_INST/sbin/utrestart -c

or proceed and install the missing packages:

# pkg install SUNWmfrun SUNWtltk SUNWdtbas

Since this is a proof of concept I'm not going to use features such as mobility. Nevertheless, I wanted to try and install Motif to see if additional problems would come out.

Fixing GDM

As I said at the beginning of this section, the SRSS logs were indicating some kind of problem with GDM. If you're following OpenSolaris evolution, you'll know that, indeed, Xorg as well as GDM have undergone major changes and now notably differ from their Solaris 10 "parents". The first error showing up in the GDM logs, which can be found in /var/log/gdm, was a complaint about missing fonts:

Fatal server error:
could not open default font 'fixed'

Font locations, indeed, have changed considerably in the latest OpenSolaris builds. To fix this you have to create a file called /etc/opt/SUNWut/X11/fontpath that reflects the correct font paths on your system. On OpenSolaris b134 such paths are the following:

/usr/share/fonts/X11/100dpi
/usr/share/fonts/X11/100dpi-ISO8859-1
/usr/share/fonts/X11/100dpi-ISO8859-15
/usr/share/fonts/X11/75dpi
/usr/share/fonts/X11/75dpi-ISO8859-1
/usr/share/fonts/X11/75dpi-ISO8859-15
/usr/share/fonts/X11/encodings
/usr/share/fonts/X11/isas
/usr/share/fonts/X11/misc
/usr/share/fonts/X11/misc-ISO8859-1
/usr/share/fonts/X11/misc-ISO8859-15
/usr/share/fonts/X11/Type1

After fixing the font paths, GDM complained about missing dependencies for the following libraries: libXfont and libfontenc. Although this is not the "Solaris way" of doing things, a quick and dirty solution was making symlinks to the missing libraries in /usr/lib:

# cd /usr/lib
# ln -s xorg/libXfont.so
# ln -s xorg/libXfont.so.1
# cd amd64
# ln -s ../xorg/amd64/libXfont.so
# ln -s ../xorg/amd64/libXfont.so.1
# cd /usr/lib
# ln -s xorg/libfontenc.so
# ln -s xorg/libfontenc.so.1
# cd amd64
# ln -s ../xorg/amd64/libfontenc.so
# ln -s ../xorg/amd64/libfontenc.so.1

The last thing to do is fix a problem with the gdmdynamic syntax in $SRSS_INST/lib/xmgr/gdm/remove-dpy:

< gdmglue="; gdmdynamic -b -d "'$UT_DPY'
> gdmglue="; gdmdynamic -d "'$UT_DPY'

Done.

Your Sun Ray clients should now be able to connect to your SRSS instance running on OpenSolaris (at least on b134.) As you can see in the following picture, there's my MacBook with a virtualized OpenSolaris (b134) acting as a Sun Ray server, the Sun Ray 2 client, and the virtualized desktop on the screen behind the MacBook.


Have fun!