Monday, August 31, 2009

Setting up Subversion Client Access Via SSH Using TortoiseSVN

If you've read this far, you're probably running your own Subversion repositories on your Solaris box. Fine! Now, let's face the next problem: some of your users need access from a Windows client. They installed their favorite Subversion client, TortoiseSVN, and tried to check out your repositories. But no, it does not work.

Setting up users

The first thing to do is to set up a bunch of user accounts properly for your clients. If you haven't done it yet, it's time to do it now. Read here.

Preparing some keys for your users

As I told you in my previous post, the best option you have is setting up public keys for your users: configuration on the server side will be easier and your users won't need to enter a password every time they connect to your repositories. If you don't know how to do it, read this other post.

Configure TortoiseSVN

Many people get stuck here. Windows lacks the basic set of commands you need to interact with a remote system over an encrypted SSH connection. It may sound strange to you, faithful UNIX user, but unfortunately that's the truth. Programs such as TortoiseSVN bring their own implementation of the SSH client, although TortoiseSVN, specifically, lets you choose an alternative external client. The reason people get stuck is that the TortoiseSVN configuration GUI makes no mention whatsoever of SSH authentication. Nothing. That's why, once more, you should turn to The Manual, just to discover that TortoiseSVN ships with a PuTTY-based client, Plink, which is the command-line interface to the PuTTY back ends.

The problem now reduces to configuring PuTTY to use a public key to authenticate you, saving the session configuration and... remembering its name! Since Plink reads the same configuration registry as your standalone PuTTY, you'll be done.

Checkout your repository

Now that PuTTY is configured, you can check out your first repository over an SSH connection. Just remember not to use the server's host name in the URL: use the PuTTY session name instead.
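For example, assuming you saved a PuTTY session named my-putty-session pointing at your Subversion host, and that the repository lives in /export/svn/repository (both names are placeholders), the URL to type into TortoiseSVN's checkout dialog would be:

svn+ssh://my-putty-session/export/svn/repository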


Now you should be able to:
  • interact with your Solaris-hosted repositories from your Windows clients...
  • ... with TortoiseSVN...
  • ... and without typing any password.

Moreover, if you set up your Solaris user accounts as I explained in another post, the keys you distributed to your users won't even let them log in to your system. You and your sysadmin will be happy!

Configuring SSH key authentication with PuTTY

If you're a UNIX user, you're probably already using SSH public key authentication. Personally, I use it to avoid typing so many passwords every time I connect to a remote machine. If you're running a Windows client, you've probably installed an SSH client to connect to your remote machines. I usually use Cygwin, which gives me an environment very similar to what I'm used to. If you didn't feel like installing Cygwin just to establish an SSH connection, you probably chose PuTTY.

PuTTY is a bit different: it's got no .ssh directory to read from, since it brings its own SSH client implementation with it. If you want to configure PuTTY to use SSH key authentication, just follow these steps.

Setting up your keys for PuTTY

Whether or not you already own a key pair, you need another program, PuTTYgen, to produce a file for PuTTY to read. When you run it, PuTTYgen will let you import your private key and save it in a PuTTY-friendly format or, if you haven't got one, generate your brand new key. If you prefer not to be asked for a password by TortoiseSVN again and again, you can leave the key unprotected by a passphrase, as long as you store the key in a safe place. I'll repeat it: store the key in a safe place.


Once you're done with the process, you will have a .ppk file which, once more, you'd better store in a safe place!

Configuring PuTTY

To tell PuTTY to use your key, just open it, go to Connection/SSH/Auth and browse for your key file in the Private key file for authentication field. Now you can save your session and you're done.
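Before involving TortoiseSVN, you can smoke-test the saved session from a command prompt; a minimal check, assuming your session is named my-putty-session and plink.exe is in your PATH:

C:\> plink -load my-putty-session

If everything is in place, you're logged in to the remote machine without being asked for a password.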


Have fun!

Sunday, August 30, 2009

Menta or hierbabuena?

One of the charms of foreign languages is that they aren't just a bunch of new words, properly ordered by the laws of a grammar. A language reflects how its native speakers are, how they think, how they live. A language is the way we express ourselves and how we communicate. Obviously, there's much more to it than rules. I love learning new languages and, when I'm living in another country, learning the local language is the most important thing to do.

Languages also have their idiosyncrasies; I accepted that and learnt to live with it. When I don't understand what gave birth to an expression or a motto, I immediately go and check: that's the most valuable thing I did while learning the languages I can speak. Sometimes, though, these idiosyncrasies lead to a clash.

Just like what happened with mint. I think everybody knows mint, a family of aromatic herbs. As I told you, I love tea. And in summertime, I love drinking Tuareg tea, a Moroccan specialty made with Gunpowder tea and a variety of mint called spearmint. For best results, fresh spearmint leaves are required, so I went out looking for them.

One thing I knew for sure: Spaniards call this specialty Té con hierbabuena. Spanish being such a widely spoken language, it turns out that, despite the order the Real Academia tries to impose, hierbabuena identifies different plants depending on the country you're in. This fact explained why I was getting such doubtful answers from whoever I asked about mint and hierbabuena. It seemed like everybody had his own theory! Some Spaniards even thought that hierbabuena was not a kind of mint at all. Well, but I was pretty sure! Despite the Arabic name of the drink, I had sown and raised spearmint myself.

Evidence that seemed to fail, here in Spain. Fortunately, it turned out that hierbabuena is spearmint. In Spain, too, where everything (else) is different. This time I didn't clash with an idiosyncrasy of the language: I clashed with widespread ignorance.

Nostalgic geeks out there: Slackware 13.0 has been released

Patrick Volkerding, Slackware's Benevolent Dictator for Life, has announced the release of Slackware 13.0. Slackware is one of the oldest GNU/Linux distributions and the oldest still actively maintained. Slackware has a reputation for being the most UNIX-like GNU/Linux distribution, for being rock-solid and, lately, for being considered a geek distribution. Well, probably as much a "geek distro" as Debian is, in this Ubuntu world.

Welcome back!

Slackware is the first UNIX-like operating system I ever used, before I could afford running Solaris on a Sun workstation, and that happened around 1994 or 1995. I remember FTPing the Slackware repositories to download all of the floppies Slackware was made up of. I don't remember how many, but they were a good number. There was no broadband at home in those times, so I had to download them at the university, write the floppies and bring them home, hoping they were still OK. Eventually I started buying the CDs, but that was after I got a CD-ROM drive.

I was faithful to Slackware for a long, long time. In the end, I dropped GNU/Linux in favor of Solaris, both at home and at work, and did it with a bit of sadness. I had got so used to Slackware that dropping it was really painful. Still, nowadays I really feel like testing the new releases, and that's what I'm doing with Slackware 13.0.

Booting Slackware, for me, always tastes like a "welcome back". The Slackware experience is peculiar. First of all, it's still a you-can-do-it-yourself kind of operating system. The feeling of simplicity and cleanliness that Slackware gives the admin is remarkable. You know where things are and where you've got to start tweaking. And if you don't know, it's pretty easy to discover. The Slackware guys, indeed, always try to ship unmodified packages from upstream: hence the very low level of distribution-specific pollution. If you read the manual of the software you're configuring, running Slackware you won't get any surprises.

The Slackware crew have been doing a great job and there are always great improvements from one release to another. But, as is the case with distinctive brands, changes and improvements never break the Slackware way. That's one of the things I love most.

Why should you install Slackware? Well, I usually suggest giving Slackware a try and then deciding. Slackware is rock-solid and, although the update process is not automatic (sorry, no Synaptic here...), you can update your system with packages from the development branch if you want to run newer versions of a package. The install process might be a newbie stopper: if you don't feel like running an Expert install and choosing packages one by one (it can be a pretty long process), you can simply perform a full installation and then free some space by removing internationalization (*-i18n-*) and localization (*-l10n-*) packages, such as KDE's.
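For instance, here's a sketch of that post-install cleanup, assuming a full installation and the standard pkgtools (run as root, and check what the globs match on your system before hitting Enter):

# removepkg /var/log/packages/*-i18n-*
# removepkg /var/log/packages/*-l10n-*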

Major changes

One of the most important changes is probably the support for the amd64 architecture: before, the only choice was running Slamd64, an unofficial port. The Linux kernel shipped with Slackware is version 2.6.29.6. You're free to compile your own, as usual, but beware that kernel 2.4 support has been dropped and a version 2.6 kernel is now required. Slackware comes with its huge installation kernel and a choice of smaller ones for post-installation setup. If you don't feel like testing which kernel best fits your needs, you can install the huge kernel used during installation. The Slackware release notes also suggest running the SMP kernel even on machines with one CPU.

As far as the desktop environment is concerned, you slackers know that Slackware neither ships nor supports GNOME. Slackware 13.0 brings the new KDE 4 Desktop Environment (v. 4.2.4), which finally seems to have reached a point where it's mature enough for regular users. And if instead of KDE's eye candy you'd rather run a light, GTK2-based desktop, there you have Xfce.

HAL and udev integration has been there for a couple of releases, and nowadays Slackware is a perfect choice for the casual user's desktop. No more su and mount just to use a pen drive.

Slackware doesn't come with libdvdcss, which you're going to need if you want to play encrypted DVDs. But don't worry: just download the source package, ./configure it and it'll build perfectly.
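A sketch of the build, where the version number is just illustrative; grab whatever tarball is current:

$ tar xjf libdvdcss-1.2.10.tar.bz2
$ cd libdvdcss-1.2.10
$ ./configure
$ make
$ su -c "make install"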

Xorg has been updated too, and chances are you'll be running your X Window System without even setting up an xorg.conf file. Slackware kernels include DRI support and, if you've got a suitable card, you'll enjoy hardware acceleration out of the box.

If you want to read the release notes with all the details, here they are.

Conclusions

Many years have passed and Slackware is always there, faithful to its design principles, rock-solid and as clean as ever. Should I run Linux, I'd choose Slackware. If you need packages you can start from Linux Packages or SlackBuilds. Furthermore, as far as I can tell, Slackware is one of the best distributions for compiling software: the full install comes with everything you need to build most of the packages you'll ever need, which are just a (./configure ; make ; make install) step away (well, you'd better build a package before installing).

Great work as usual, Slackware crew.




Saturday, August 29, 2009

JSR 303 - A standard bean validation API for the Java Platform

JSR 303 (Bean Validation) is making its way towards approval by the Java Community Process. The goal of this series of posts is to give you a quick introduction to the specification.

What is the problem with validations?

This specification, citing the official JSR 303 request, tries to solve a common problem. Validations are common steps executed during the business logic of an application, although the validation algorithms are often a characteristic of the data structure being validated rather than of the business logic flow they're executed within. That's why validations aren't tier-dependent operations, and validation algorithms might span almost all of the tiers of your system, from the front end down to the back end. Reimplementing or redistributing the same validation classes to all of the application's layers and components is error prone and adds complexity you must deal with. Bundling validation into the data structure being validated, on the other hand, generates coupling and clutters the affected types with algorithms and structures which are not part of the class.

How does this specification solve this problem?

This specification tries to solve the problem by recognizing the fundamental flaws we often encounter in validation implementations inside enterprise applications:
  • The process of validating data is part of the business logic of an application.
  • The logic executed during the validation process might not be part of the business logic. Rather, it's a characteristic of the data structure itself.
  • Hence, validations should be considered metadata of a class and...
  • ... validations should be orthogonal to both business logic and tiers in the application.
  • Validation logic may change over time.
  • Being an API specification, validation frameworks should be pluggable and interchangeable.

The solution boils down to recognizing that validation is a process that must be driven by class metadata. By providing a sufficiently flexible metadata model and a validation API, the specification defines how to associate validation metadata with a class, either by using Java annotations or XML descriptors.

The proposed solution only relies on the JavaBeans object model: therefore, it can be used in whichever application tier without being tied to a specific programming model. Furthermore, this API is suitable for use as a component of other APIs, thus enabling JSR 303 bean validation in, for example, the persistence layer (with JPA), the business layer (with EJBs), and so forth.
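To give you a feel for the programming model, here's a minimal sketch based on the public drafts of the API (the javax.validation types and the @NotNull and @Size constraints are the ones the spec defines; the Customer bean is invented, and the provider you'd plug in, Hibernate Validator for instance, is up to you):

import java.util.Set;
import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

public class Customer {
    // Constraints are declared as metadata on the class, not in business logic.
    @NotNull
    @Size(min = 1, max = 64)
    private String name;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public static void main(String[] args) {
        // Bootstrap whatever JSR 303 provider is available on the classpath.
        Validator validator =
            Validation.buildDefaultValidatorFactory().getValidator();
        // name is null here, so we expect exactly one constraint violation.
        Set<ConstraintViolation<Customer>> violations =
            validator.validate(new Customer());
        for (ConstraintViolation<Customer> v : violations) {
            System.out.println(v.getPropertyPath() + ": " + v.getMessage());
        }
    }
}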

Why do you need this spec?

I would restate this with a more general question: do you need specs? This blog post isn't about why standards (and specs) are good. While designing and implementing, there aren't just standards and specs: there are algorithms (your business logic is not a standard, it's just as unique as your fingerprint), patterns, best practices and unique application requirements. There's common sense, too (sometimes). The value of standards and specs is the value of having a standard solution to a well-stated class of problems. If it fits your requirements, that's generally good: someone else, probably more skilled than you in that particular field, has produced an implementation you can just take and use.

The fact that JSR 303 is (also) a service provider standard sort of guarantees that you may choose the implementation that works best for you and integrates best with the environment you have. There won't be only Hibernate Validator after this specification is approved: it's very likely you'll be able to choose among a bunch of implementations which, because of the spec's characteristics, will fit into the other frameworks and APIs you use, such as EJB, JPA, JSF, Beans Binding, put-your-favorite-here. Indeed, JSR 303 will be required by JSR 316: the upcoming Java Enterprise Edition version 6.

The lack of a standard validation API, or even of well-established validation patterns, has had the effect that many applications reinvent a validation framework: the process usually starts by providing a solution to the first identified requirements, which seem too few or too simple to warrant another validation framework, and then, as development proceeds, grows according to the new requirements. You end up with your own validation framework which, I bet in most cases, will not be a state-of-the-art, encapsulated and reusable solution. If the validation framework isn't carefully designed, you usually end up polluting your application layers and even polluting your class definitions, to avoid the deployment problems cited above.

One thing more about specifications

One thing more about specifications. Although we should, I think many of us Java architects and programmers don't usually dive into a specification. At most, we know that it exists and then rely on the capabilities of the framework-of-the-day. Either under pressure, or because of a lack in the design of our application, or because we're struggling to get the job done and Google seems the saviour, or simply because human beings are lazy, not only may we end up reinventing the wheel: we may end up copy-and-pasting the wheel.

That's the fundamental flaw.

Specifications are there because groups of experts recognized a problem (usually before we do), studied it and provided a general solution to it, either in the form of APIs and programming models or sometimes with even more complex technological solutions (such as EJB containers). It's up to you, Java architects, analysts and programmers, to recognize whether your problem belongs to such a class, to evaluate the viability of using a standard and eventually to choose an implementation. Also, don't be blind in the name of a specification. Solutions require a problem to exist, and it might also be the case that your problem doesn't perfectly fit in and that a simpler solution will satisfy your requirements. It's up to you to decide whether ignoring a standard solution is an advantage for the design and the overall cost of your application.

These are just examples and speculations, but that's the kind of choice an architect must make. Nevertheless, as far as this specification is concerned, I'm pretty confident that its flexibility will make JSR 303 a common guest in our Java enterprise applications.


Nessie on Google Maps: does somebody see a boat?

Today, while reading my usual newspapers, I stumbled upon this article on the Corriere della Sera website. Even if it's summer, I think Il Corriere doesn't need such fillers... Anyway, according to the article, this Google Maps photo has caught the attention of believers in the existence of Nessie, the Loch Ness monster. In their opinion, that's a shot of the mythological monster that hides in the Scottish Highlands' loch.

Well, everybody's free to believe what he likes. Personally, I just see a boat here. What do you see?



Friday, August 28, 2009

Configuring PuTTY to use UTF-8 character encoding

When it comes to Windows (and Windows programs...) it's never too late to "learn" something. Or to forget something?

Straight to the point: as I needed a terminal to connect to my Solaris box down there, I decided to go for PuTTY. On this laptop I'm running Windows Vista (I'm not joking...) and I didn't want to install Cygwin for such a basic task.

PuTTY doesn't even need installing: you just download it and run it. Fine! As soon as I connected, the first surprise.
I'm not a masochist and I didn't choose that directory name! My Solaris user is running with a UTF-8 locale:


$ echo $LANG
en_US.UTF-8

and PuTTY is simply misunderstanding it. Well, to tell the truth, it isn't even trying to understand it. When you launch PuTTY you're presented with a dialog. In that dialog, if you choose the Window/Translation tab, you'll be able to choose the character encoding PuTTY will use when translating the received data. Just set it to match the encoding of the other endpoint, like this:

Et voilà! PuTTY is printing characters correctly.
Just a remark for the PuTTY guys: Window/Translation doesn't seem such a good option name, to me, for a character encoding.

Wednesday, August 26, 2009

An update about GPush: it finally seems to work

If you're part of the club that wanted Google mail pushed onto your iPhone, the release of the GPush application sounded like good news. Unfortunately the application hasn't worked that well since its release and people started to complain. I was one of them: on this blog and directly to Tiverias Apps.

It was probably a scalability problem: they had never tested the application with such a great number of users, and GPush wasn't exactly the kind of application that would pass unobserved. We were waiting for it! On its website's support page, Tiverias Apps has been constantly giving users feedback about the problems we were experiencing. Finally, I'm glad to state the following: GPush has been working flawlessly for me for a couple of days.

There are still some glitches, but I'm confident they will be resolved in a GPush application update. Specifically, I still can't change my account settings without uninstalling and reinstalling the application: it just ignores the change.

It was worth what I paid for it.

Update: You don't need GPush anymore if you want to have your google mail pushed to your iPhone.

Disney-Pixar's Up: they keep on surprising me


Back in 1995, I went to the cinema and bought a ticket to see the first Pixar movie: Toy Story. Since then, Pixar movies have been praised by both critics and audiences. Up is no exception and, although critics were skeptical about how a movie featuring a 78-year-old man (or so) as its main character would be able to entertain children, it's been a success since its release.

A bit of introduction: I think Pixar has really been innovative from many points of view, both technological and artistic. Toy Story was a milestone in this genre: since its release, the way animated movies are made has changed forever. I think creativity and intuition are the reason why some Pixar movies raised so many concerns before their release. Think of Ratatouille (winner of the Best Animated Feature Oscar): it's a story about a rat in a kitchen "who" dreams of becoming a cook. Or think of WALL-E: no dialogue for almost half the movie. Obviously there were concerns about such a formula entertaining an audience composed mainly of kids. Another box office success and another Oscar. And I must admit that I liked WALL-E. Big time. Should I have to find another movie with an equal ratio of entertainment to silence, I could only think of Kubrick's 2001: A Space Odyssey.

With Up, it's the turn of an octogenarian, a former balloon seller and widower, who's trying to fulfil his last wish, a dream he has shared with his wife since he was a kid. Another one of the bets won by Pixar. The movie begins with a sort of overture, closed by a funny sequence in tempo with the most famous aria (a habanera) of Bizet's Carmen: L'amour est un oiseau rebelle. This part introduces the viewer to the life of the main character, Carl, from when he first met his wife-to-be up to his present life as a widower. This flashback is incredibly emotional: from the sweetness of Carl's life as a child, through the warmth of his married life, to the tenderness of remembrance and solitude. This first part, moreover, is almost dialogue-less and recalls the first part of WALL-E, although it's much shorter.

The rest of the movie is the story of how Carl struggles to fulfil their last and only dream. Obviously Carl is the antipode of the superhero: he's an old man with his wisdom and his physical limitations. This second part is much more conventional than the first. Indeed, I think there's an abyss between the two: although the movie is wonderfully rendered, and there are some really impressive sequences and a bunch of funny characters, the second part never hits any peak as emotional as those the first part achieves.

I think the story might seem a bit conventional, even weak at times. Don't forget that it's Disney targeting a young audience. Moreover, after seeing so many hypnotizing (in the ancient Greek sense: zzz...) superhero movies, I liked this one, where so many roles are shifted. It's not a child but an old man who pursues his dream. There are no fairies, but there is a child with family problems. Carl hasn't got any superpower. Carl's a hero his own way. A hero as a hero could only be in children's fantasies. And his strength finds its roots in the past, when he was a child. In the world where everything's possible. Until you've grown up.

P.S.: I haven't seen the 3D version. Yes, yes, I know. It's such a nonsense, isn't it? But hey, the ticket cost 7.5 EUR and the "3D supplement" cost an additional 3.5. Each! A robbery.

Monday, August 24, 2009

Setting up SSH access to Subversion repositories on Solaris 10 (with zones)

If you followed my previous installments, you're probably running some Subversion repositories on your Solaris box. Chances are you're running the Subversion daemon, svnserve, and using the svn protocol to access the repositories: in a previous post I explained how you can set it up as an inetd service, and in another post I gave you a pointer to a repository of SMF manifests where you can find one to configure your Subversion daemon with the Solaris SMF framework.

There's another interesting way to access your Subversion repositories: tunneling the communication with svnserve over an encrypted SSH connection.


Why SSH?

There are many reasons that may lead you to such a choice; the most important might be:

  • You're accessing your repositories from outside your network and want to use an encrypted connection.
  • You don't want to maintain (yet another) user registry into the repository configuration files and you'd rather leverage your existing authentication strategy.

With SSH you can have existing users authenticate to your Solaris instances without additional effort. If you're already using a directory service, such as Sun Java System Directory Server, you already know the benefits of centralizing your user registry. If you're not, you should consider using a directory service before starting to duplicate sensitive information such as user accounts, groups and privileges. If you're planning to give access to your repositories to users outside your organization, you might think a directory is not a good choice. Well, in this post I'll show you a possible workaround.

Tasks


To configure such a solution, you have to take into account the characteristics and consequences of using SSH to authenticate your users against the Solaris Operating System. Whether you're using a directory service or local files (passwd, group, etc.), users who are going to connect must be managed at the operating system level. You'll be able to give users access to your repositories without actually allowing them to perform any other operation on your system: paranoid administrators shouldn't worry about users logging in to their machines, if they don't want them to.

The Subversion client configuration also allows you to fine-tune the tunnel settings: you can change the port or even the entire command, if you wish.

You will also need to pay attention to the repositories' permissions: connecting via SSH is, in some respects, just like using the local file protocol. Users connecting to your repositories, then, must have appropriate permissions on the repositories' directories.

You will also be able to leverage the Solaris Zones technology to isolate your Subversion repositories and users into a non global zone.

Configuring Subversion


This is pretty easy: if you're not fine-tuning the tunnel definition, there's really nothing to configure. Just invoke the Subversion client using the svn+ssh scheme and the job is done.
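For example (the host name and the repository path are placeholders):

$ svn checkout svn+ssh://svn.example.com/export/home/svn/repository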

If you wish to fine-tune your tunnel settings, you can edit the Subversion client's config file. This file contains a section named [tunnels] and is located in the .subversion subdirectory of your $HOME. If you want to change the default behavior associated with the svn+ssh scheme, just edit (or create, if it's missing) a line such as:

ssh = command

To change the default port, you could use:

ssh = ssh -p portnumber

If you wanted, you could also define your own schemes:

yourschema = yourcommand

would be used when accessing the repository with the svn+yourschema scheme.

Another nice feature of the configuration file is the possibility of overriding the tunnel definition with an environment variable. Defining a scheme with the following syntax

yourschema = $YOURVAR yourcommand

has the following effect:

  • If the variable $YOURVAR exists, it's used as the tunnel definition.
  • If the variable doesn't exist, the tunnel definition provided in the configuration file is used instead.

The default value for the SSH tunnel definition is indeed the following:

ssh = $SVN_SSH ssh

When overriding the SSH tunnel definition, you may choose to set the $SVN_SSH variable for your users instead of modifying the Subversion configuration file. We'll use this technique later.
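For instance, here's a one-off override which doesn't touch the config file (the port and the URL are just examples):

$ SVN_SSH="ssh -p 2222" svn list svn+ssh://svn.example.com/export/home/svn/repository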

Setting up the repository


One thing to take into account when using the SSH tunnel is that the svnserve command will run with your user identity. This means that the user you're logging in with must have proper permissions to access the repository files. The easiest way to go is probably creating a group for your users, let's say svn-group, and giving it write access to the repository directory, repository-dir:

$ chgrp -R svn-group repository-dir

$ chmod -R g+w repository-dir

If you're setting up multiple repositories, you can create a group for each one of them. Please take into account that the Solaris operating system allows a user to belong to a maximum of NGROUPS_MAX groups. If you also need to change the current group membership of a user, because the required group is a Solaris secondary group, you can wrap the svnserve command in a script which changes the current user's group with the newgrp command.

Another good practice is setting a sane umask before accessing the repository files. You could wrap the svnserve command, or even the svn command if you're using the file scheme too, in a shell script which sets the umask for the user; a minimal sketch:

#!/bin/sh
# Set a group-friendly umask, then hand control to the real svnserve,
# forwarding the arguments the SSH daemon passed to the wrapper
# (path as installed by the Blastwave package; adjust it to your setup).
umask 002
exec /opt/csw/bin/svnserve "$@"

Setting up public keys to use with SSH (and restrict user to only use Subversion)


When opening an SSH session, you're usually asked for a password to authenticate to the remote machine. As explained in an earlier post, you can generate key pairs and use them for authentication. Key pairs also have another advantage: you can provide some users with a key pair and configure the SSH daemon to restrict their ability to interact with the system. Specifically, you can set up the remote machine so that some users may only launch a specific command, svnserve in this case, when authenticating. This is especially useful when you share a repository with users outside your organization. You can create user accounts and key pairs for them: with a proper configuration, such users, although listed in your user database (whether local files or a directory such as LDAP), will only be able to log in and launch the svnserve command in tunnel mode. This approach, together with the Solaris Zones technology, gives you the possibility:

  • To quickly set up zones on your system to host Subversion repositories.
  • To optionally centralize your user accounts in the directory of your choice.
  • To limit some user accounts to only using the Subversion server, effectively prohibiting them from opening an interactive login session on your system.
  • To centralize the setup of the users' home directories by using the Solaris automounter: Subversion-only users will have their homes automounted from an ad-hoc server.

To configure the machine, or the Solaris zone, which hosts the Subversion server, you only have to follow the instructions in this post to provide each user a key to connect to the server. Once that's done, if you want to limit a user's abilities by specifying a command to execute at login, just add this fragment before the public part of the key:

command="/opt/csw/bin/svnserve -t"

In this case I specified the svnserve path as installed by Blastwave's package: if your setup is different, just change the path. If you're using Solaris Express Community Edition or OpenSolaris, Subversion may be found at:

command="/usr/bin/svnserve -t"


Please be aware that this fragment must be inserted before the key fragment, on the same line.
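To give you an idea, a complete authorized_keys entry might look like the following, with the key material elided; the no-pty and no-port-forwarding options are an optional hardening measure, not something Subversion requires:

command="/opt/csw/bin/svnserve -t",no-pty,no-port-forwarding ssh-rsa AAAAB3NzaC1yc2E...= joe@client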

If the number of users is such that you don't want to manage this process manually, you can, for example:

  • Use a script to generate the keys and to concatenate the public parts into the authorized_keys2 file.
  • Manage a centralized authorized_keys2 file.
  • Share the authorized_keys2 file amongst users' home directories: they won't be able to read that file.
  • Optionally automount users' home directories to share this configuration across many systems or zones.

Configure your users' groups


As mentioned earlier, users should belong to a group with the necessary permissions (read and write) on the repository directory. If you manage your users with local files, just assign them the proper primary and secondary groups: if you need secondary groups for some users, you can use the newgrp command in a wrapper shell script to have a user log in to the desired group before invoking the Subversion commands.

If you use a directory service, configure the directory appropriately. If you're using the Sun Java System Directory Server with the default LDAP schema, assigning groups to users is pretty easy:

  • The primary group can be specified by setting the gidnumber attribute of the LDAP user entry.
  • The secondary groups can be specified by adding multiple memberuid attributes to the group entry.

To add the users joe and john to a group you just add:

memberuid: joe

memberuid: john

into the group definition.


An alternative configuration with just one Subversion user


If you do not want to leverage your existing user registry, or you don't even have one, don't worry. In that case, all you have to do is set up your Subversion repositories as usual and then manually set the user for each tunnel with the --tunnel-user=username option in the authorized_keys files. That's it.
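A sketch of such an entry, again with elided key material and an invented user name; every commit will run under the single UNIX account, but Subversion will record joe as the author:

command="/opt/csw/bin/svnserve -t --tunnel-user=joe",no-pty,no-port-forwarding ssh-rsa AAAAB3NzaC1yc2E...= joe@client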

Setting up zones


Setting up sparse zones in Solaris 10 is really straightforward, and Sun's official Solaris 10 Zones documentation covers the topic in great detail.

If you need to configure the zone to use LDAP, please refer to the Solaris 10 Naming and Directory Services administration guide.

If you want to follow my advice, you can set up a zone for installing Blastwave's software and share the installation between zones using a loopback mount.

Next steps

You can now give your users secure, SSH-tunneled access to your Subversion repositories. If some of your users run Windows, you can read the following post to learn how to configure TortoiseSVN to use your public key to connect to Subversion over an SSH-tunneled connection.

Sunday, August 23, 2009

Windows-Solaris interoperability: CIFS permissions and Quicktime idiosyncrasies

Since I had to set up a Windows Vista laptop, I've started to use a combination of Solaris technologies to enjoy some easy-to-set-up Solaris-Windows interoperability services. The ZFS-CIFS combination is an excellent way to integrate that Windows machine into my home and work Solaris-based networks. Running CIFS in workgroup mode is sufficient, right now, and almost everything works as expected. I say almost because I hit a strange QuickTime Player behavior. It's probably a QuickTime idiosyncrasy; nevertheless, I spent some time investigating it.

Files and directories on my ZFS file systems have got the following permissions:

  • 600 for files
  • 700 for directories
  • No ACLs.
That's a pretty simple and intuitive setup: I share some private directories and only my user has got privileges on them. I also mapped, with idmap, the staff UNIX group, to which my user belongs, to the Windows Administrators group. Given the permission sets I'm using it's probably unnecessary, but I didn't like that ephemeral SID showing up in the Windows security tab.

Now, with this setup, when I try to open a MOV file with the QuickTime Player, I get the following error:

Error -43: A file could not be found

Moreover, the QuickTime Player process, after you close the error window, remains there hanging around. That's not a big issue, but you have to kill it if you want to open the player again.

The first thing I checked was whether the file was readable. Well, obviously it was: I could copy it to a local folder and launch it from there. It worked. But that's not what I wanted to accomplish.

The second thing I noticed was that, if I used the File Open... feature of the QuickTime Player, the error was different: it simply said I hadn't got sufficient privileges to open the file. It turns out that, for strange reasons and only in some situations, the QuickTime Player requires more permissions than I thought were necessary. I only succeeded in opening the files after removing some of the special permissions associated with them: specifically, the execute denial (for my user), and the read data and write data denials for the Administrators and Everyone groups. Really, really strange, indeed.

Friday, August 21, 2009

Solaris Express Community Edition build 121 has been released

It was an unusually long time during which you couldn't just grab and try the latest Solaris Express Community Edition build because of some serious bugs: a couple of releases were canceled, and users were strongly warned about how much those bugs could affect their systems' stability. Today, Solaris Express Community Edition build 121 has been released and can be downloaded from the OpenSolaris website. The official announcement, as usual, was broadcast to the OpenSolaris Announce mailing list, and here are the changelog links if you want to check them out:

  • ON (Kernel, drivers, utilities): http://dlc.sun.com/osol/on/downloads/b121/on-changelog-b121.html
  • X Window Systems: http://opensolaris.org/os/community/x_win/changelogs/changelogs-nv_120/
I'm not going after any bugfix because the latest build I'm running, build 116, is pretty stable and gives me no problems at all. A system upgrade will be welcome nonetheless.

Thursday, August 20, 2009

Don't buy GPush (yet): it's not working

So happy was I, yesterday: I thought my emails were going to be pushed to my iPhone, thanks to GPush, something many users had been waiting for.

Yesterday I bought the application and had no problem configuring it. It's a pity that, since then, I have received just one (yes: one...) notification. After that, silence.

Tiverias Apps, GPush's producer, states that they're experiencing problems with their servers and that their developers should have isolated the code paths which are causing the problems we're experiencing. I just hope it's not a scalability issue: sending push notifications to a great number of GMail users seems no easy job to me.

If you feel like buying the app, please wait for these problems to be solved.

Update: GPush has started to work.

Tuesday, August 18, 2009

GPush: Gmail push notifications for the iPhone

It has finally come true. I wish it were an Apple-supported feature, as I think it should be; even so, I'm glad that GPush has finally made it into the App Store.

Right now GPush has very basic features:
  • It only lets you configure just one GMail or Google Apps account.
  • You cannot define filters.
  • The only way to stop incoming notifications is disabling GPush notifications in the iPhone control panel.
The first two issues are easily resolved by setting up an additional GMail account and configuring GPush to notify you about incoming mail in that account. Then, you can configure filters on your main account and forward to the GPush account only the mails you're interested in.

It's a first release, and every piece of software has its glitches. GPush is one thing I was really missing and I'm glad it's been deployed.

Solaris and Windows interoperability: using Solaris CIFS to share directories and to map identities

In an earlier post I described how easy it is to set up CIFS and share Solaris directories with Windows clients. ZFS's sharesmb property, moreover, makes the process even easier: just set it to on, or to name=sharename to change the share's default name, and the job is done.
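As a refresher, a minimal sketch, where the pool and file system names are invented:

$ zfs set sharesmb=name=software tank/software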

The next step in configuring your CIFS server is: how do you assign privileges to your users? Well, despite what you might have been accustomed to during your life with Samba, Solaris has been enhanced and now supports Windows SIDs and Windows ACLs. As usual, the Solaris documentation is the best place to look for information. In this post, I'll give you a brief introduction.

Let's suppose you completed the steps in the previous post and set up a share correctly. Right now, you're probably working on it from your Windows client. The first thing you might notice while inspecting the security permissions of the shared folder is the following: your Solaris user is listed, and an unknown SID might be listed as well. What's that SID? Let's open a shell.

The Identity Mapping Service is a service whose purpose is just that: mapping identities between Solaris and Windows. You can dump the currently mapped identities with the command idmap dump -nv. The first thing you might want to do is map your user's default group to a Windows group. In this example I'll map the Solaris staff group to the Windows Administrators group:

$ idmap add wingroup:Administrators@BUILTIN unixgroup:staff

This is a bidirectional mapping which tells both operating systems what corresponds to what. After logging out and back in, you can recheck your share's security permissions. On Solaris I have:

$ ls -adl software
drwxr-xr-x  14 enrico   staff         14 Aug 18 17:29 software

while on Windows I read:


Really easy, isn't it? Just as you can change permissions with Solaris ACLs, you can change them from Windows. The following directory has got the same permission set depicted in the previous figures:

$ ls -dV subversion/
drwxr-xr-x   2 enrico   staff          3 Feb 22 01:10 subversion/
                 owner@:--------------:-------:deny
                 owner@:rwxp---A-W-Co-:-------:allow
                 group@:-w-p----------:-------:deny
                 group@:r-x-----------:-------:allow
              everyone@:-w-p---A-W-Co-:-------:deny
              everyone@:r-x---a-R-c--s:-------:allow

Basically, it's a directory owned by enrico:staff, with a permission mask of 755 and no ACLs. Now let's grant the SOLARIS\enrico user a couple of permissions: Delete subfolders and files and Delete. As soon as you do that, you'll find:

$ ls -dV subversion/
dr-xr-xr-x+  2 enrico   staff          3 Feb 22 01:10 subversion/
            group:staff:-w-p----------:-------:deny
              everyone@:-w-p---A-W-Co-:-------:deny
            user:enrico:rwxpdD-A-W-Cos:-------:allow
            group:staff:r-x----------s:-------:allow
              everyone@:r-x---a-R-c--s:-------:allow


That is: modifications done in Windows are correctly reflected on the Solaris side.

This is really just the starting point. You can establish user and group mappings, and you can set your Solaris ACLs and have them propagated to the Windows clients. In domain mode you can also rely on a directory service for identity mapping.
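A per-user mapping, for instance, follows the same pattern as the group mapping above (the names are invented):

$ idmap add winuser:enrico@WORKSTATION unixuser:enrico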

Happy interoperability!

Sunday, August 16, 2009

Microsoft Word 2007 table of contents feature seems to be buggy

The first time it happened, I couldn't believe it. I thought it was I who had screwed up the document: I wondered what I could have done to the document styles to get those spurious entries into the table of contents. Then it happened again. And again. It simply couldn't be me.

I'm working at a client's which is committed to producing its documents with Microsoft Office 2007. No way to change that: I had to purchase a license and install it on my virtualized Windows. When I'm at work, I just use the client's computers. When I'm at home, I run Windows on a Solaris host with Sun xVM VirtualBox to get the job done.

A few days ago, just before sending the last revision of a document to print, I realized that the table of contents was screwed up! Instead of just listing headings up to level 3, it was showing spurious lines here and there. Some of them were image captions, too. The first thing I tried to correct the problem was selecting the guilty lines, checking the paragraph options (which incidentally seemed OK) and reapplying the original style. It worked, but every time I opened the document, the table of contents was screwed up again. I tried to understand what had happened, given that the spurious lines weren't that spurious (they were always the same), but found nothing. I googled for a couple of minutes just to confirm I wasn't alone.

I later discovered how to reproduce the problem: it always happened when I closed the guilty document with the document map feature on. Switching off the document map solved the problem, and the next time I opened the document it was just fine.

I recognize that's not a great solution but hey, it works.

Thursday, August 6, 2009

Sending emails from the iPhone at full resolution: yet another use for the "copy and paste"

As you may have noticed, whenever you send a photo by email with your iPhone's share functionality, the photo is down-sampled to a mere 600x800 px. That's not so bad for an email, when bandwidth and pricing could both be issues: a JPEG at such a resolution falls in the [100, 150] kB range, depending on the dynamics of the photo itself. That's a huge difference from the average 1 MB of the original JPEG at its full resolution.

The down-sampling process is definitely bad in these cases:
  • If you're unaware of it.
  • If you're aware of it but don't know how to circumvent it.
  • If email is the only vehicle you use to transfer photos from the phone to your PC.
As far as point one is concerned, ignorance is your friend, but the process is bad nevertheless. The iPhone not informing you about the quality loss seems bad to me, too. Both of the camera phones I owned before the iPhone used to inform the user when a photo would be down-sampled. Information is good.

Point two is what this post is about, and point three is where the problem gets worse for people like me who haven't got any iTunes-compliant PC to connect the iPhone to.

The iPhone 3G S shoots photos at the decent resolution of 2048x1536 px (I don't know whether that's optical or not), which gives you photos of approximately 17x13 cm at 300 dpi, or 34x26 cm at 150 dpi (the minimum output resolution you should use depends on the characteristics of the sampling technology used to shoot the photo). If you want to bring your photos to your PC for further editing or printing, you can rely on email or on a USB connection to a PC which correctly recognizes your phone. I tested Solaris Nevada up to build 116 and the iPhone is not recognized: that game's over for me, right now.

When you enter the camera roll, you can select multiple photos and send them via email, amongst other things, by using the Share button shown in the following screenshot.


 

That's a fast path, but that's when the down-sampling takes place! If you want to send your photos at full resolution, you should make your selection, copy it, and paste it into a new email before sending. Copy and paste does not modify the copied data, and your email will contain your original photos.

Wednesday, August 5, 2009

Do you want to jailbreak your iPhone? You might be a criminal, according to Apple.

In the statistical distribution of the events that may lead you, iPhone user, to want to jailbreak your phone, a significant one, at least according to Apple's lawyers, is your being a criminal. More specifically, a drug dealer. So likely, that this is the first example they provide to the Copyright Office of the United States of America.

You might have been reading my concerns about the iPhone and the limitations that Apple is willingly enforcing. Mine are sincere concerns from a user's standpoint.

I've just been reading Apple's response to the following question, submitted by no less than the Copyright Office of the United States of America:
Does “jailbreaking” violate a license agreement between Apple and the purchaser of an iPhone? If so, please explain what provision it violates and whether “jailbreaking” constitutes copyright infringement?
You can download and read the entire response, if you want. I don't want to spare you such a joyful read, but I really feel like citing this passage:
For example, each iPhone contains a unique Exclusive Chip Identification (ECID) number that identifies the phone to the cell tower. With access to the BBP via jailbreaking, hackers may be able to change the ECID, which in turn can enable phone calls to be made anonymously (this would be desirable to drug dealers, for example) or charges for the calls to be avoided.
That's the kind of issue raised when you're using something that may harm you and others. Now, Apple just forgets that nowadays you can install your own applications on a very wide range of mobile devices: the Java Virtual Machine for mobile devices is installed on millions of them, and so is Windows Mobile. Substitute whichever OS you like. It's just Apple that's protecting you, even from yourself. Or is it just protecting its monopoly and cash flow?

Whatever the answer, if you'd like to be able to use your phone and you're not a criminal, you're not evil, and you're not a drug dealer, you can, for example, join the Defective By Design initiative and protest against Apple.

Has the apple got rotten?

I have already expressed my complaints about the iPhone more than once. From a user's standpoint, I could reformulate them just by saying that I felt deceived and disappointed on discovering that the iPhone does not allow me to use services like instant messaging applications (because of the one-application-at-a-time issue), push mail and VoIP services. Oh yeah, some of them you can sort of use. At best, you pay for a redundant service. If you just want to do something you could do using first-class free services (such as Google's), or with well-established handheld devices (such as a BlackBerry), well, the story is different.

The banning of Google Voice from the App Store was just the last straw. The Federal Communications Commission has (finally) sent a letter to Apple and AT&T in order to cast light upon what's going on with the iPhone and the App Store policies.

If you're interested, or feel affected as I do, you can of course read the letter directly from the FCC's website. The letter is quite self-explanatory, asking questions such as:
Why did Apple reject the Google Voice application for iPhone and remove related third-party applications from its App Store?

Did Apple act alone, or in consultation with AT&T, in deciding to reject the Google Voice application and related applications?

Please explain any differences between the rejected apps and any other voice over Internet (VoIP) applications that remain in the store.

This question goes straight to the point I've been making for quite some time:
Please explain whether, on AT&T’s network, consumers’ access to and usage of Google Voice is disabled on the iPhone but permitted on other handsets, including Research in Motion’s BlackBerry devices.
I understand that companies such as AT&T and Vodafone subsidize the cost of such a terminal, widely broadening the audience of such a technology. I wouldn't personally pay, ever, more than 500 EUR just to wear a logo on my phone, which is one of the things the iPhone does well. On the other hand, you cannot limit my freedom of choice by giving me access exclusively to an "application store" where I can install all sorts of stupidware but cannot install Google Voice which, like many other Google technologies, is given away for free and is high quality, too.

I've been an Apple customer for a very short time. Nonetheless, I find Apple's practices among the most opaque on the market. I did look for information about some of the whys, but never found any clear because. Such as multitasking. Such as IMAP IDLE. Such as Google Voice...

I'm not making any further comment, given that the situation speaks for itself. And if you've got an iPhone and you're experiencing the same limitations I am, well, there's nothing more to say.

Let's wait and see what happens after the FCC's move. Meanwhile, a word to the wise. Do you want a phone? Don't buy the iJail.

Sun xVM VirtualBox v. 3.0.4 has been released

On August 4, 2009, Sun Microsystems released the latest minor update to its desktop virtualization solution, xVM VirtualBox. Here's the changelog, as usual.

This is really a maintenance release and there's nothing new under the Sun. I didn't hit any of the fixed bugs; if you did, it's time to upgrade.

Download Sun xVM VirtualBox.

Vintage games: Monkey Island Special Edition on the iPhone

The ScummVM project and its iPhone port were a good omen. But LucasArts' release of Monkey Island Special Edition for the iPhone on July 22, 2009, confirmed the rumors about LucasArts porting some of its vintage games to the iPhone. Really good news, indeed, especially for all of the Monkey fans out there, like this 30-year-old guy who grew up playing those games.

If you're launching the App Store right now, beware of the size of the download: 351 MB. If you're not connected to a wireless network, then wait: haste can cost you very much if you haven't got a flat rate. The size of the application, though, is well justified: a brand new orchestration, spoken dialogue, and graphics delivered in two versions (original and Special Edition).


When I first launched Monkey Island on my phone, I was really excited. During the application bootstrap I felt that nostalgic impression of remembering something that you thought was gone. It's a game, I know, but hey, you're a kid just once. And I was a kid with Monkey Island.

The first impression was pretty good. I didn't remember the game well, and the new graphics weren't a shocking surprise to me: they are polished, fit the application greatly and strongly recall the later Monkey Island PC releases. Reading the game instructions, I learnt that the gesture to switch to the classic view is a two-finger slide. As soon as it appeared, I clearly remembered: that cross-shaped pointer which was a watermark of LucasArts' adventures!

Here are some screenshots of the two looks of Monkey Island Special Edition. The first is the map of Mêlée Island:

 
  
The maps are similar, and the new look recalls The Curse of Monkey Island. The first thing you have to get used to is the new pointer. At the beginning, the good ol' cross pointer was easier to use: the asymmetric pointer with the rotating arrow isn't that usable to me, especially on this device. My brain focuses on a point and the finger goes there.

The second screenshot is a view of the village. In the classic view, there you have the verbs and the objects you've collected. In the new view, the objects are inside the chest in the right corner at the bottom of the screen, whilst the verbs are behind the face icon on the other side of the bar. One gesture is available only with the new graphics: double-tapping an object invokes its default verb. Opening a door has never been so easy!
  
 

I think the game is really worth what it costs: less than 6 euros. You will need plenty of time to amuse yourself and get to the end.

One last piece of advice: don't shake the phone. That's the gesture to get a hint, and hints in an adventure are not cool.

Tuesday, August 4, 2009

ZFS filesystems with compression enabled made my system unresponsive

At home, I'm running some services on a Solaris Express Community Edition box. The last time I live-upgraded it was to build 113; before running SXCE, it ran Solaris 10. This workstation also manages a RAIDZ-1 pool composed of four USB drives. I was perfectly aware of the performance penalty I was paying for running USB instead of eSATA (or even internal SATA), but there was no choice at the time. Anyway, that pool is just used to store some snapshot backups scheduled at night, so I wasn't usually hit by the slowness of this solution.

The most annoying bug affecting this setup was this:

6586537: async zio taskqs can block out userland commands

The effect was the unresponsiveness of the userland commands I was using, including Xorg itself, when doing some big I/O on a compressed file system of that pool. Originally the compressed file systems used lzjb; I moved them to gzip-based compression as soon as it was released, but before knowing I was affected by that bug. The bug really made the system unusable: sometimes even the keyboard echo was delayed by more than 5 seconds. I haven't got any other machine running compressed ZFS file systems, so I couldn't test the bug's status very often. Moreover, that's the typical machine you'd better not touch, unless you've got a lot of spare time to recover from a disaster (just in case). In time, I just learned to live with it.



The good news is that I just realized the bug was fixed and committed in SXCE build 115 and is scheduled for Solaris 10 Update 9. I live-upgraded my old SXCE to build 116, the same build I'm happily running on my laptop, and I must admit that the system is really, really much more responsive when compressed ZFS file systems are being stressed. Maybe there were improvements on the USB side too, who knows.


If you're running an older build affected by this bug, upgrade now.

Monday, August 3, 2009

Using paper as a data storage medium

I thought I would comment on this because stumbling upon it was a real flashback for me.

Once upon a time, way back in the '90s, a friend of mine and I were waiting to take an exam at the university. I don't remember now if it was Structure of Matter or Microelectronics. The fact is that we were talking about how easily a CD-ROM became unreadable. Both of us had had such an experience with properly stored CD-ROMs, and we were wondering about alternatives. The subject moved on to the more general problem of storing information, and finally we found ourselves talking about paper as a storage medium.

Many years later, here's Jeff Atwood blogging about the same subject and introducing a piece of software that gets the job done.

I never did it myself, and I don't think I'm going to: I've got ZFS and snapshot-powered backups to protect my data. Nonetheless, this flashback makes me wonder about alternative ways to store my GPG secret key so I can bring it with me...

Sunday, August 2, 2009

Dell Vostro 1710 plastics won't withstand an official repairing

As I told you some posts ago, my infamous Dell Vostro 1710 laptop had its motherboard changed because of its integrated NIC's failure. Long story short: after the first replacement, the motherboard had to be changed again. I had major issues even with battery recognition (never heard of that before), let alone the fact that the system booted only once every four or five attempts.


After 10 days (yes, ten days) of fighting with Dell Customer Service, which, besides, tried to convince me that the problem was a monitor failure and probably an issue with an older BIOS (A10, the same BIOS I was running before), they finally sent another technician to change the motherboard once again.


It was too much for the poor Dell Vostro plastics. Here's how the plastic protection of the monitor's hinge "survived" the surgery:
My suggestions thus are:
  • Before buying a Dell Vostro, think twice (or thrice).
  • If you plan to run Solaris (including Nevada or Indiana) and don't think that a 10-meter network cable crossing your living room is elegant, absolutely do not buy a Dell Vostro.