
Sunday, February 28, 2010

Adding Google Analytics tracking code to Confluence

If you're using Atlassian Confluence as your content management system and you'd like to collect statistical information about its web traffic, Google Analytics is probably the tool you're looking for.

Google Analytics is an extremely powerful and flexible tool: I'm using it to monitor the web traffic to the sites I own and I'm very happy with it. Getting started with Analytics is very simple: just install the tracking code in the pages whose web traffic you want to monitor.

If you're using Confluence, it's pretty easy to do: Confluence uses a flexible templating engine, and modifying the page layout for a given space is indeed straightforward.

If you want to monitor all of the web traffic to your Confluence instance, though, the best option is the Custom HTML feature in the Confluence Administration Console. Custom HTML lets you define fragments of HTML code to be inserted at the following positions in the generated page:

  • At the end of the HEAD tag.
  • At the beginning of the BODY tag.
  • At the end of the BODY tag.


Just insert your Google Analytics tracking code in the appropriate place, which is usually at the end of the BODY tag, and you're done! Your Confluence web traffic statistics are now being collected by Analytics.


If you're willing to experiment, Google has recently launched an asynchronous version of its Analytics tracking code which improves load times and accuracy, amongst other benefits. More information can be found here.
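
For reference, the asynchronous tracking code looks roughly like this (a sketch: UA-XXXXXXX-X is a placeholder for your own web property ID, and Google suggests placing the asynchronous snippet at the end of the HEAD rather than the BODY):

<script type="text/javascript">
  var _gaq = _gaq || [];
  _gaq.push(['_setAccount', 'UA-XXXXXXX-X']);  // placeholder: your web property ID
  _gaq.push(['_trackPageview']);

  (function() {
    // load ga.js asynchronously so it doesn't block page rendering
    var ga = document.createElement('script');
    ga.type = 'text/javascript';
    ga.async = true;
    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
    var s = document.getElementsByTagName('script')[0];
    s.parentNode.insertBefore(ga, s);
  })();
</script>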


Wednesday, February 17, 2010

Mac OS X as an iSCSI initiator: Time Machine on ZFS

As I described in previous posts, I set up an iSCSI target with Solaris COMSTAR backed by a ZFS volume. I want to use this volume as a disk for Mac OS X's Time Machine. This way, I get the best of both technologies: a good-looking, easy-to-manage Time Machine for backing up my MacBook, backed by an enterprise-level, redundant and scalable ZFS volume published as an iSCSI target over my private LAN.

No more consumer disks sitting on a table, no more poor hardware-implemented file sharing protocols. No more worrying about losing a disk. Just Solaris, ZFS, COMSTAR and a LAN.

Mac OS X as an iSCSI initiator

Although the subject has been widely discussed, Apple has not yet released the components needed for Mac OS X to act as an iSCSI initiator. Fortunately, there is a solid and free solution by Studio Network Solutions: the globalSAN iSCSI Initiator for OS X. Just download it, install it, restart OS X, and a new panel will appear in System Preferences:


Connecting to a target

Connecting to a target is really easy: just use the globalSAN iSCSI GUI to add the target:


The target name is obviously retrieved from your target configuration.

Using the disk

If you read the previous post, you'll know that this target is backed by a ZFS volume, which must be formatted before being used. You can format the new disk with Disk Utility:



Using the disk with the Time Machine

To use the new disk with Time Machine, just follow the usual procedure:


Conclusion

That's it. Using a ZFS volume as a disk for Mac OS X's Time Machine is just a few clicks away. Next time you plan to buy a new external hard disk, wait a moment and consider that a robust, enterprise-level solution is available for not much more than the budget needed to buy a couple of consumer disks.

Setting up Solaris COMSTAR and an iSCSI target for a ZFS volume

COMSTAR stands for Common Multiprotocol SCSI Target: it is basically a framework which can turn a Solaris host into a SCSI target. Before COMSTAR made its appearance, there was a very simple way to share a ZFS file system via iSCSI: setting the shareiscsi property on it was sufficient, just as you set the sharenfs and sharesmb properties to share it via NFS or CIFS.
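
For comparison, the legacy approach was roughly the following (a sketch: the pool and volume names are hypothetical, and shareiscsi only works on releases that still ship the old iSCSI target daemon):

# zfs set shareiscsi=on tank/myvolume    # legacy: publish the volume as an iSCSI target
# zfs get shareiscsi tank/myvolume       # check the property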

COMSTAR brings a more flexible and better-engineered solution: it's not as easy as flipping those ZFS properties, but it is not that hard, either. Should you need a more complex setup, COMSTAR also offers a wide set of advanced features, such as:
  • Scalability.
  • Compatibility with generic host adapters.
  • Multipathing.
  • LUN masking and mapping functions.

The official COMSTAR documentation is very detailed and it's the only source of information about COMSTAR I use. If you want to read more about it, please check it out.

Enabling the COMSTAR service

COMSTAR runs as an SMF-managed service, and enabling it is no different from enabling any other service. First of all, check whether the service is running:

# svcs \*stmf\*
STATE          STIME    FMRI
disabled       11:12:50 svc:/system/stmf:default

If the service is disabled, enable it:

# svcadm enable svc:/system/stmf:default

After that, check that the service is up and running:

# svcs \*stmf\*
STATE          STIME    FMRI
online         11:12:50 svc:/system/stmf:default

# stmfadm list-state
Operational Status: online
Config Status     : initialized
ALUA Status       : disabled
ALUA Node         : 0

Creating SCSI Logical Units

You're not required to master the SCSI protocols to set up COMSTAR, but knowing the basics will help you understand the next steps you'll go through. Oversimplifying, a SCSI target is the endpoint that waits for client (initiator) connections. For example, a data storage device is a target and your laptop may be an initiator. Each target can provide multiple logical units: each logical unit is the entity that performs "classical" storage operations, such as reading from and writing to the disk.

Each logical unit, then, is backed by some sort of storage device; Solaris and COMSTAR will let you create logical units backed by one of the following storage technologies:
  • A file.
  • A thin-provisioned file.
  • A disk partition.
  • A ZFS volume.

In this case, we'll choose the ZFS volume as our favorite backing storage technology.

Why ZFS volumes?

One of the wonders of ZFS is that it isn't just another file system: ZFS combines the volume manager and the file system, providing you with best-of-breed services from both worlds. With ZFS you can create a pool out of your drives and enjoy services such as mirroring and redundancy. In my case, I'll be using a RAID-Z pool made up of three eSATA drives for this test:

enrico@solaris:~$ zpool status tank-esata
  pool: tank-esata
 state: ONLINE
 scrub: scrub completed after 1h15m with 0 errors on Sun Feb 14 06:15:16 2010
config:

        NAME        STATE     READ WRITE CKSUM
        tank-esata  ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c7t0d0  ONLINE       0     0     0
            c8t0d0  ONLINE       0     0     0
            c8t1d0  ONLINE       0     0     0

errors: No known data errors

Inside pools, you can create file systems or volumes, the latter being the equivalent of a raw drive connected to your machine. File systems and volumes use the storage of the pool without any need for further partitioning or slicing, and you can create them almost instantly. No more repartitioning hell or space estimation errors: file systems and volumes will use the space in the pool according to the optional policies you may have established (such as quotas, space reservations, etc.).
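
A minimal sketch of what this looks like in practice (the file system name and the limits are hypothetical):

# zfs create tank-esata/photos                 # a new file system, usable immediately
# zfs set quota=100G tank-esata/photos         # cap the pool space it may consume
# zfs set reservation=20G tank-esata/photos    # guarantee it a minimum amount of space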

ZFS, moreover, lets you snapshot (and clone) your file systems on the fly, almost instantly: being a copy-on-write file system, ZFS only writes the modified blocks to disk, with no extra overhead, and when blocks are no longer referenced they are automatically freed. In a sense, ZFS snapshots give Solaris a much-optimized version of Apple's Time Machine.
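
A quick sketch of the snapshot workflow, reusing the hypothetical file system from the previous example:

# zfs snapshot tank-esata/photos@before-import   # instant, nearly free snapshot
# zfs list -t snapshot                           # list the snapshots in the pool
# zfs rollback tank-esata/photos@before-import   # revert the file system to the snapshot
# zfs clone tank-esata/photos@before-import tank-esata/photos-test   # create a writable clone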

Creating a ZFS volume

Creating a volume, provided you already have a ZFS pool, is as easy as:

# zfs create -V 250G tank-esata/macbook0-tm

The previous command creates a 250 GB volume called macbook0-tm on the tank-esata pool. As expected, you will find the raw device corresponding to this new volume:

# ls /dev/zvol/rdsk/tank-esata/
[...snip...]  macbook0-tm  [...snip...]
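
You can also inspect the volume with zfs itself; for instance, the following properties are worth a look:

# zfs list -t volume tank-esata/macbook0-tm
# zfs get volsize,refreservation tank-esata/macbook0-tm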

Creating a logical unit

To create a logical unit for our ZFS volume, we can use the following command:

# sbdadm create-lu /dev/zvol/rdsk/tank-esata/macbook0-tm
Created the following LU:

              GUID                    DATA SIZE           SOURCE
--------------------------------  -------------------  ----------------
600144f00800271b51c04b7a6dc70001  268435456000         /dev/zvol/rdsk/tank-esata/macbook0-tm

Logical units are identified by a unique ID, the GUID shown in the sbdadm output. To verify this, and to get a list of the available logical units, we can use the following command:

# sbdadm list-lu
Found 1 LU(s)

              GUID                    DATA SIZE           SOURCE
--------------------------------  -------------------  ----------------
600144f00800271b51c04b7a6dc70001  268435456000         /dev/zvol/rdsk/tank-esata/macbook0-tm

Indeed, it finds the only logical unit we created so far.

Mapping the logical unit

The logical unit we created in the previous section is not available to any initiator yet. To make it available, you must choose how to map it. Basically, you've got two choices:
  • Mapping it for all initiators on every port.
  • Mapping it selectively.

In this test, taking into account that it's a home setup on a private LAN, I'll go for simple mapping. Please choose your mapping strategy carefully, according to your needs. If you need more information on selective mapping, check the official COMSTAR documentation.
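
Just to give an idea of what selective mapping involves, here's a sketch based on host groups (the group name and the initiator IQN are hypothetical; the GUID is the one created above):

# stmfadm create-hg macbooks                                            # create a host group
# stmfadm add-hg-member -g macbooks iqn.2010-02.com.example:macbook0    # register the initiator's IQN in it
# stmfadm add-view -h macbooks -n 0 600144f00800271b51c04b7a6dc70001    # expose the LU as LUN 0 to that group only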

To get the GUID of the logical unit you can use the sbdadm or the stmfadm commands:

# stmfadm list-lu -v
LU Name: 600144F00800271B51C04B7A6DC70001
    Operational Status: Offline
    Provider Name     : sbd
    Alias             : /dev/zvol/rdsk/tank-esata/macbook0-tm
    View Entry Count  : 0
    Data File         : /dev/zvol/rdsk/tank-esata/macbook0-tm
    Meta File         : not set
    Size              : 268435456000
    Block Size        : 512
    Management URL    : not set
    Vendor ID         : SUN
    Product ID        : COMSTAR
    Serial Num        : not set
    Write Protect     : Disabled
    Writeback Cache   : Enabled
    Access State      : Active

To create the simple mapping for this logical unit, we run the following command:

# stmfadm add-view 600144f00800271b51c04b7a6dc70001
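
You can double-check the mapping by listing the view entries of the logical unit:

# stmfadm list-view -l 600144f00800271b51c04b7a6dc70001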

Configuring iSCSI target ports

As outlined in the introduction, COMSTAR introduces a new iSCSI transport implementation that replaces the old one. Since the two implementations are incompatible and only one can run at a time, please check which one you're using. Nevertheless, consider switching to the new implementation as soon as you can.

The old implementation is registered as the SMF service svc:/system/iscsitgt:default and the new implementation is registered as svc:/network/iscsi/target.

enrico@solaris:~$ svcs \*scsi\*
STATE          STIME    FMRI
disabled       Feb_03   svc:/system/iscsitgt:default
online         Feb_03   svc:/network/iscsi/initiator:default
online         Feb_16   svc:/network/iscsi/target:default
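
If the legacy service were still online, switching would be a matter of swapping the two services (a sketch: make sure no initiator still depends on the old target before disabling it):

# svcadm disable svc:/system/iscsitgt:default          # stop the legacy iSCSI target
# svcadm enable -r svc:/network/iscsi/target:default   # start the COMSTAR iSCSI target and its dependencies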

If you're running the new COMSTAR iSCSI transport implementation, you can now create a target with the following command:

# itadm create-target
Target iqn.1986-03.com.sun:02:7674e54f-6738-4c55-d57d-87a165eda163 successfully created

If you want to check and list the targets you can use the following command:

# itadm list-target
TARGET NAME                                                  STATE    SESSIONS
iqn.1986-03.com.sun:02:7674e54f-6738-4c55-d57d-87a165eda163  online   0

Configuring the iSCSI target for discovery

The last thing left to do is configuring your iSCSI target for discovery. Discovery is the process an initiator uses to get a list of available targets. You can opt for one of three iSCSI discovery methods:
  • Static discovery: a static target address is configured.
  • Dynamic discovery: targets are discovered by initiators through an intermediary iSNS server.
  • SendTargets discovery: the SendTargets option is configured on the initiator.

I will opt for static discovery because I've got a very small number of targets and I want to control which initiators connect to my target. To configure static discovery just run the following command:

# devfsadm -i iscsi

Next steps

Configuring a target is a matter of a few commands. It took me much more time to write this blog post than to get my COMSTAR target up and running.

The next step will be having an initiator connect to your target. I detailed how to configure a Mac OS X instance as an iSCSI initiator in another post.

Using ZFS with Apple's Time Machine

Those of us who have grown accustomed to the wonders of ZFS won't willingly trade it for another file system, ever. But even though many ZFS users run Solaris on their machines, including laptops, as I do, there are cases in which running another OS is desirable: that's when I look for the best option to integrate the other systems I'm running with Solaris and ZFS.

In the simplest case, using a file-sharing protocol such as NFS or CIFS is sufficient (and desirable): that's how I share the ZFS file systems where I archive my photos, my videos, my music and so on. Sharing such file systems with another UNIX, Windows or Mac OS X box (just to cite some) is just a few commands away.

On other occasions, access to a shared file system is not sufficient: that's the case with Apple's Time Machine, which expects a whole, locally connected disk for its own use.

Fortunately, integrating ZFS and Time Machine is pretty easy if you're running a COMSTAR-enabled Solaris. Although setting up COMSTAR is very well documented in the Solaris and OpenSolaris documentation, I'll walk you through the necessary steps to get the job done and have your Time Machine make its backups on a ZFS volume. You'll end up with the benefits of both worlds: a multidimensional time machine which takes advantage of ZFS snapshotting and cloning capabilities.

The steps I'll detail in the following posts are:

  • Setting up Solaris COMSTAR and an iSCSI target backed by a ZFS volume.
  • Configuring Mac OS X as an iSCSI initiator and using the resulting disk with Time Machine.

With such a solution, you will need no USB/FireWire/anything-else drive hanging around. You won't need to rely on consumer drives implementing some kind of file-sharing protocol which, as explained earlier, doesn't fit the Time Machine use case.

Just a network connection and a box to install Solaris, ZFS and COMSTAR, and you'll provide a scalable, enterprise-level, easy to maintain solution for your storage needs.

Wednesday, February 10, 2010

Setting up Apache SSL on Solaris 10

Solaris 10 is almost ready to run an SSL-secured Apache instance out of the box. What you really need is just the server certificate. The certificate, basically, contains the public key your clients will use to encrypt their communication with your SSL-secured server. If you're setting up a production site, chances are you already have a certificate from a trusted Certificate Authority. If you don't, go and get one. If, instead, you're running a non-critical, internal or testing site, you can build a self-signed certificate and use it for your site.

Stop apache

Stop apache! ;)

# svcadm disable svc:/network/http:apache2

Enabling SSL

Solaris 10 uses SMF to manage its services and the bundled Apache is no exception. To enable SSL for the bundled Apache instance, you've got to modify the service configuration:

# svccfg -s apache2 setprop httpd/ssl = boolean: 'true'
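
After setting the property, you may also want to refresh the service so the new value ends up in its running snapshot, and verify it (a hedged note: depending on your setup, simply enabling the service later may be enough):

# svcadm refresh apache2
# svcprop -p httpd/ssl apache2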

Creating a certificate


Safe harbor statement: This step, as explained in the introduction, will not generate a certificate suitable for production use.

Solaris 10 provides a bundled OpenSSL package which is just what you need to produce a self-signed certificate. The openssl binary is installed by default at /usr/sfw/bin/openssl.

To create the certificate, issue the following command:

$ openssl req -new -x509 -out server.crt -keyout server.key

When filling in the questions asked by openssl, please note that the Common Name field must contain the name of the server you're creating the certificate for.

The server.key file produced in the previous step is just a plain-text file. If you want to protect your key with a passphrase (I do), then launch openssl once more:

$ openssl rsa -des3 -in server.key -out server.key.crypt

You can now safely delete server.key and store server.key.crypt in a secure place. However, Apache won't start unless you type the passphrase, which can be a pain. I usually store the key with a very restrictive permission mask (400) and install it unencrypted. Another option, if you don't like leaving the key unencrypted, is to use the SSLPassPhraseDialog directive in ssl.conf and build a script that outputs the passphrase. Please note, however, that this method is not inherently more secure than leaving the key unencrypted.
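
In that case, the unencrypted-key route looks roughly like this (the destination path is just an example):

# openssl rsa -in server.key.crypt -out /etc/apache2/server.key   # write an unencrypted copy of the key
# chmod 400 /etc/apache2/server.key                               # make it readable by the owner only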

Tell Apache where the certificate and the key are

To tell Apache where the certificate and the key are you have to use the

SSLCertificateFile
SSLCertificateKeyFile

directives. Solaris 10 ships with a functional /etc/apache2/ssl.conf file: edit the file and make sure the SSLCertificate* directives point to your certificate and its key.
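
For example, assuming the certificate and key live under /etc/apache2 (hypothetical paths), the relevant lines would look like:

SSLCertificateFile /etc/apache2/server.crt
SSLCertificateKeyFile /etc/apache2/server.key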

Reviewing your configuration

You'll probably want to spend a few minutes reviewing your ssl.conf file and learning about the mod_ssl directives you'll find there, in case you need further customization.

Start Apache

Start the Apache service by issuing:

# svcadm enable svc:/network/http:apache2

and test your site with openssl:

$ openssl s_client -connect localhost:443 -state -debug

A note about virtual hosts

If you're using Apache name-based virtual hosts, you might think that the same mechanism applies to SSL-secured name-based virtual hosts. I'm sorry, but the answer is no. Basically, SSL encapsulates HTTP, and Apache can't decide which host a request is directed to because there is no Host header available before the communication is decrypted, which can only happen at the destination server. For the same reason, Apache can't choose a certificate with which to decrypt the communication: indeed, Apache will ignore multiple SSLCertificate* directives in a <VirtualHost/> block and default to the first one encountered. If you're looking for more information on the subject, you can start here: Name-based VirtualHosts and SSL. Unless you can accept the restrictions outlined in that article, the only viable option for deploying SSL-secured virtual hosts is using IP-based (or port-based) virtual hosts.

Tuesday, February 9, 2010

Setting JIRA maxClauseCount

Have you ever seen the following exception in your JIRA's logs?

org.apache.lucene.search.BooleanQuery$TooManyClauses: maxClauseCount is set to 65000

You probably haven't. But if you have, chances are your JIRA instance is so heavily populated that Lucene throws this exception when performing a text-based search, and you really need to get rid of it. I recently saw a very (very!) big JIRA instance showing this symptom: when this exception started to appear in the logs, many things began to break: some plugins, some gadget configuration screens, some AJAX-based searches, they all started to malfunction.

This particular JIRA instance was suffering because of a poor deployment strategy and an even poorer project creation strategy. Nevertheless, the problem had to be fixed. JIRA by default sets Lucene's maxClauseCount parameter to 65000, which is good for most (if not all...) workloads. If you want to override this parameter, you'll have to edit the jira-application.properties file, which is located at:

$JIRA_HOME/atlassian-jira/WEB-INF/classes/jira-application.properties

and edit the jira.search.maxclauses property. Following the official Lucene Javadoc advice, that is, setting the org.apache.lucene.maxClauseCount system property (for example, in the $JIRA_HOME/bin/setenv.sh script), won't work for JIRA: JIRA probably sets the value programmatically during startup.
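
A sketch of the change (the value below is arbitrary, just large enough to make the exception go away; JIRA needs a restart to pick up the new value):

jira.search.maxclauses = 100000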