Pages

Wednesday, September 30, 2009

iPhone user experience: is it ready for a color-blind person?

I've owned an iPhone for a couple of months now and I must admit that, as far as my user experience as a color-blind person is concerned, it's the best mobile device I've worked with in a long time. It isn't perfect, though.

If you're wondering what color blindness is, you can start by reading this Wikipedia article. I seldom notice the effects of this impairment, and when I do, I'm usually interacting with a computer.

There are many forms of color blindness, and mine is called protanopia. Protanopia differs from other forms, such as deuteranopia, in that we experience an abnormal dimming of light at certain wavelengths, with the result that, for example, red is easily confused with black.

In the case of the iPhone, I must admit that I'm experiencing just one problem: missed calls. There's a fancy icon down there indicating the number of unchecked missed calls. When you open the corresponding screen, guess what? Missed calls are "highlighted" using the good ol' red text while received or dialed calls are black. The background? White, just to make it worse. I looked for some other color scheme but there's none. Just the good ol' "iPhone experience" that Apple is providing us.

This rare impairment affects almost 1% of the male population. Not much, indeed. But for us, there are no red characters to catch our attention: they just fade to black, hiding among the others. If you're working with a color-blind person, please use common sense: don't rely on colors alone and please, do not use red to catch their eye. Rely instead on the other tools your word processor offers, such as that yellow highlighter whose purpose you've probably wondered about sometimes.

Sunday, September 27, 2009

Googlle?


As I'm a creature of habit, every day I open my browser and follow the same Internet path. As I'm a faithful Google user, one of the pages I often open is iGoogle. Today I thought my eyes were failing me when I realized I was seeing the simplest and oddest Google doodle I've ever seen. Who could I ask about it? Google, of course. The first thing I did was save that doodle to disk, and its very name gave me the answer:
11th_birthday.gif.

Happy birthday Google!

Pushing gmail to your iPhone (without GPush)

As I told you some posts ago, I bought GPush and struggled to make it work. In the end I started being notified about incoming mail, although with some glitches from time to time. Now, very shortly after GPush was released, you don't need it anymore: Google Sync now pushes mail to your iPhone.

This is really good news because now you can sync your mail, your calendar and your contacts with your iPhone. As I was already using Google Sync for contacts and calendars, setting up Gmail push was really easy: just the flip of a switch!



If you haven't set up your Google Sync account on your iPhone, just follow the instructions on the Google Sync web site.

As far as I can tell, mail is pushed to the iPhone almost instantaneously. Nonetheless, there's one thing I'm not really happy about: I miss a notification popup. None is ever shown, and the counter on the mail icon is the only information you're given when a mail is pushed:


I would expect mail to be managed just like an SMS or even a phone call: checking periodically sort of defeats the purpose of a push notification...

Thursday, September 10, 2009

Ellison's statement to Sun customers

Yes, this is marketing. But it sheds light (and hope) upon us concerned Solaris users, who were wondering what will happen to our favorite OS. Here's what Oracle CEO Larry Ellison is saying about the merger.

Bad news for HP, Dell and IBM, indeed. "We're in it to win it." Let's give Larry a chance.

Saturday, September 5, 2009

JSR 303 - Overview, part 2

Introduction
In the previous post, I began introducing the concepts defined in the JSR 303 (Bean Validation) specification. The first concept we dealt with was the constraint: the validation rule you want to apply to the target of your validation process, be it a field, a method or an entire class. This post is an overview of the validation API and a brief description of the validation process you can trigger on a JavaBean of your own. Understanding the validation routine is necessary to avoid common pitfalls when defining and applying constraints to a class or a class' fields.

A word of warning: the JSR 303 API is not yet final and might change in a future revision of the specification. At the moment, JSR 303 is in the public draft review stage.

The validation process (in a nutshell)

The validation process is always started on a bean instance and may be instructed to recursively validate the objects in the object graph starting from that bean. The validator API also provides methods to validate only a given field or property of the target bean (for fine-grained validation, as might be the case for a partially populated bean); the validation routine described here is partially honored for those methods, too.

The validation routine performs the following tasks for each group being validated:
  • perform all field-level constraints' validations
  • perform all method-level constraints' validations
  • perform class-level constraints' validations
  • perform cascade validations

The validation routine is deterministic and also guarantees the following:
  • it avoids infinite loops by keeping track of the object instances already validated in the current validation process
  • it won't execute the same validation more than once for the same constraint, keeping track of whether it already matched on a previous group match.

The constraint validations matching the groups currently being validated are run in no deterministic order.
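To make the bookkeeping above concrete, here is a hand-rolled sketch of the idea, not the JSR 303 API itself (all the names are mine): a routine that checks a field-level rule and cascades through the object graph while tracking already-visited instances in an identity set, so that cyclic references can't cause infinite loops.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Set;

public class MiniValidator {
    // Track already-visited instances, mirroring what the routine
    // described above does to avoid infinite loops on cyclic graphs.
    private final Set<Object> seen =
            Collections.newSetFromMap(new IdentityHashMap<>());
    private final List<String> violations = new ArrayList<>();

    public List<String> validate(Node root) {
        visit(root);
        return violations;
    }

    private void visit(Node node) {
        if (node == null || !seen.add(node)) {
            return; // already validated in this pass: skip it
        }
        // 1. field-level constraint: name must not be empty
        if (node.name == null || node.name.isEmpty()) {
            violations.add("name must not be empty");
        }
        // 2. a class-level constraint (whole-object invariant) would go here
        // 3. cascade: recursively validate the rest of the object graph
        visit(node.next);
    }

    public static class Node {
        String name;
        Node next;
        Node(String name) { this.name = name; }
    }
}
```

Even with a cycle in the graph (a → b → a), a single validation pass terminates and reports each violation once.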

Constraints semantics and constraint validator visibility

The algorithm implemented by the validation routine is pretty intuitive: it starts from the inside of the bean and proceeds outwards. First fields are validated, then properties, then the entire bean. The first thing to grasp is what a constraint really represents: something you must be aware of when designing your constraints and your validation strategy.

The constraint may be thought of as a validation rule applied to an object. The context in which the constraint is applied (the root object's type) is part of the validation rules' semantics:
  • If you apply a constraint to an object's field or property, you're applying a validation rule to that field (as part of a type), but the validator's visibility is limited to that object alone. You cannot access the values of the object's other fields during the validation of a field's constraints.
  • If you apply a constraint to a type, the validator can enforce class invariants (if you've got any) and any other validation rule whose parameters may involve whichever parts (be they fields or properties) make up your type.

Basically, you'll use field-level (resp.: property-level) constraints when you want to apply the validation rule to that field (resp.: property) as part of a type. For example, a String id field may be length-bounded when part of a type such as a Passport (a String isn't length-bounded by definition).


You'll use class-level constraints when you want to apply a validation rule to the state of a type as a whole. For example, if you define a Passport type you would apply a class-level constraint to check that the id field is coherent with the country field.


That's why the validation routine starts from the inside of the bean and proceeds outwards: you would not apply validation rules to an entire class instance's state if some of the class' fields were invalid.
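As a plain-Java illustration of the two levels (the actual rules below are hypothetical, invented just for this sketch), a Passport could have a field-level rule involving the id alone and a class-level invariant relating id and country:

```java
public class Passport {
    private final String id;
    private final String country;

    public Passport(String id, String country) {
        this.id = id;
        this.country = country;
    }

    // Field-level rule: the id, as part of a Passport, is length-bounded
    // (a String by itself isn't).
    public boolean idSizeValid() {
        return id != null && id.length() >= 6 && id.length() <= 9;
    }

    // Class-level rule: an invariant spanning more than one field.
    // Hypothetical example: ids of country "IT" must start with two letters.
    public boolean idCoherentWithCountry() {
        if (!"IT".equals(country)) {
            return true; // no rule for other countries in this sketch
        }
        return id != null && id.length() >= 2
                && Character.isLetter(id.charAt(0))
                && Character.isLetter(id.charAt(1));
    }
}
```

Note that an id like "123456" can pass the field-level length check while still violating the class-level invariant: only the class-level rule can see both fields at once.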


Constraint design principles and a constraint domain

The validation routine definition and the constraint semantics outlined in the previous section have consequences that are visible in the way these concepts have been defined. As we saw, whether to apply a constraint to a class or to a field depends on the semantics of the constraint you defined. But a class' fields are class instances themselves. Let's suppose we're validating a bean of class X which defines a field of type Y.

public class X {
  private Y y;
  [...]
}

Moreover, let's suppose you designed class Y as a helper class for X and that Y is seldom visible, if visible at all, in your API. When validating such a type you might be tempted to apply validation rules to Y instances as field-level validations in class X. The constraint validator would indeed be passed the Y instance and you could access all of Y's fields and properties, if you needed them. Furthermore, you could argue that this way you could validate both X and Y instances in just one validation pass. Yes, but that could be a huge design flaw if the constraints you're applying in the "X context" were in reality validation rules of class Y whose scope is not restricted to the "Y as a field of X" use case.

The question you've got to answer, during the definition of every constraint you design, is the following:

What is this constraint's domain?

If you establish an analogy between constraint validators and a set of mathematical operators, you should be asking yourself:

What is the domain of this operator?

The answer to this question will suggest the correct way to define and apply your constraint. If the answer is "The domain of the constraint is the entire type Y" (and you own type Y), you would define the constraints accordingly and apply them to type Y (and its fields, if necessary). If the answer is "The domain of the constraint is field y of type X", you would define the constraints accordingly and apply them to X.y.

Obviously, reality might be more complicated than this and you might, for example:
  • discover that you need both a set of constraints for the Y domain and another set for the X.y domain.
  • need constraints for the Y type when it isn't yours.
In the first case you would act accordingly to achieve the correct isolation of the two distinct domains, while in the second case you could extend, if possible, type Y or wrap it in a type of your own.

This leads us directly to why the validation routine must execute cascading validations.

Cascading validations

Cascading validations are the means you need to cleanly implement constraints such as:

Type X is subject to the following validation rules [...] and its field y is valid in the Y domain.

To model such situation you would apply the @Valid annotation to the X.y field:

public class X {
  @Valid
  private Y y;
  [...]
}

When encountering such an annotation during the validation of an X instance, the validation routine will also apply the constraint validators applied to Y, in the Y domain. In other words, the validation routine goes down the object graph, recursively reapplying itself to each such field (resp.: property).

What's next

In the next post I'll introduce the last (but not least) few missing concepts and then we'll start building some examples.

Thursday, September 3, 2009

JSR 303 - Overview, part 1

What was before the JSR 303?

Well, as you probably know, before the JCP promoted this specification, very little could be done and you probably ended up using non-standard validation frameworks and/or tier-specific ones (such as Spring's Validator, to name an example). Things could go even worse: you might have ended up writing your own home-grown solution.

The only standard approach I can think of is the venerable (and neglected) JavaBeans specification. I won't dig into its details, but JavaBeans are not just classes with some getters and setters: that's just the beginning. The JavaBeans specification, although nowadays that solution might seem clumsy, would support validations in the form of event listeners processing property change events. The mechanism is powerful, but it doesn't really fit the simplest use cases, which are probably the great majority of what we deal with every day. The JavaBeans specification was tailored to provide an object component framework, much like Microsoft's COM. That's why events and listeners find their place there. But in the case of your everyday class, I recognize there's too much boilerplate code in it. Moreover, JavaBeans' events and listeners aren't declaratively configurable and, once more, the pitfall would be building your own configuration mechanism: ending up with your own non-standard solution, once again.

The basic concept: the constraint

The JSR 303 specification introduces the concept of a constraint. The beauty of a constraint is that it's metadata which can be applied declaratively to types, methods and fields. A constraint is a simple and intuitive concept which defines:
  • How you would apply an instance of the constraint to a target. In other words, its interface.
  • Who's going to implement the validation algorithm associated to that constraint.
Let's take the most basic example: validating a String's length. The constraint would use two configuration parameters representing the minimum and the maximum length of the string. If one of the parameters is left unspecified, it takes its default value, such as 0 for the lower bound and infinity for the upper bound. The metadata you would need to apply to a field subject to this validation rule would thus be:
  • the constraint itself, let's call it Size
  • the constraint configuration parameters: min and max.
How would your class end up? Something like:

public class YourClass {
  @Size(min=1,max=9)
  private String yourField;
...
}

Of course the specification gives us the means to inject this kind of metadata without applying annotations. Somewhere in the definition of the Size constraint there would be the link to the class which actually implements the (really basic) validation algorithm. This is one of the indirection levels formalized in the specification: it cleanly separates the general validation process from the constraint-specific algorithms, introducing hooks, in the very definition of a constraint, where the constraint-specific algorithm is invoked. The general validation process, for example, takes into account the object graph navigation and the API to discover and describe the constraints applied to a target (the constraint metadata request API).
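To see how the pieces fit together, here is a toy, reflection-based sketch of the idea (again, not the real JSR 303 machinery: the annotation and the validation pass below are hand-rolled for illustration). The Size annotation carries the declarative metadata, and a generic routine discovers it and runs the length check:

```java
import java.lang.annotation.*;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

public class SizeDemo {
    // The constraint: declarative metadata with its configuration parameters.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    public @interface Size {
        int min() default 0;
        int max() default Integer.MAX_VALUE;
    }

    // A toy validation pass: discover the metadata via reflection and
    // run the constraint-specific algorithm (the length check).
    public static List<String> validate(Object bean) {
        List<String> violations = new ArrayList<>();
        for (Field field : bean.getClass().getDeclaredFields()) {
            Size size = field.getAnnotation(Size.class);
            if (size == null) {
                continue; // no constraint applied to this field
            }
            field.setAccessible(true);
            try {
                String value = (String) field.get(bean);
                int length = (value == null) ? 0 : value.length();
                if (length < size.min() || length > size.max()) {
                    violations.add(field.getName() + " length must be between "
                            + size.min() + " and " + size.max());
                }
            } catch (IllegalAccessException e) {
                throw new IllegalStateException(e);
            }
        }
        return violations;
    }

    public static class YourClass {
        @Size(min = 1, max = 9)
        private String yourField = ""; // too short: violates min = 1
    }
}
```

The point of the indirection is exactly this: YourClass only declares the metadata, while the validation algorithm lives elsewhere and is hooked in by the general routine.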

The responsibility of the analyst programmer would then be:
  • Define his own constraints.
  • Implement his own constraints.
  • Apply constraints to validation targets.
  • Trigger the validation process when needed.


The whole validation API has been designed with simplicity and reusability in mind. As far as I've seen, this API is JavaBeans-based and does not establish a dependency on any other API. Furthermore, this API is not obtrusive: you can define, declare and use constraints in every tier of your Java EE application, from the EJBs up to the JSF components and the managed beans. The validation metadata and the validation process are (finally) orthogonal to the logic of your application: the design and implementation of a Java EE system-wide validation policy for your objects is finally feasible and standardized.

What's next

In the next post we'll dig into the technical details of what a constraint is and how you can define and use your own.

Sun xVM: Cloning your domU

If you're using Sun xVM for server virtualization, the ability to clone a domU is a real time-saver: reduced downtime and reuse of a corporate-standard OS installation and configuration. Really cool. If you add the power of ZFS snapshots and clones to all this, the picture is impressive: you can configure your domU to use ZFS volumes that you can snapshot and clone at will.

Now that you have the big picture, and assuming you already know how to administer your ZFS pools, how can you clone your domU? The first thing you have to do is shut down your domU:

# virsh shutdown your-domain

Now you can copy your domain disk files or snapshot the corresponding ZFS volumes with a command such as:

# zfs snapshot your-fs@snap-name
# zfs clone your-fs@snap-name your/clone/name

If you're just using files, then:

# cp your-domain-disk-file your-new-domain-disk-file

The next thing you've got to do is dump and edit your source domain's configuration:

# virsh dumpxml your-domain > your-domain.xml
# cp your-domain.xml your-new-domain.xml

Now, before importing this file, you've got to apply some modifications. Sun xVM identifies domains by means of a name and a UUID, so you've got to edit the domain definition file to change the name and remove the already-used UUID. A new UUID will be generated for you as soon as Sun xVM imports the domain definition. So, open the file:

# vi your-new-domain.xml

First, change the name you'll find in the <name/> element and then remove the entire <uuid/> element. The last modification you must apply is having the new domain point to the new file or ZFS volume you copied or cloned earlier. An example of a disk definition is the following excerpt from a domain configuration:

<disk type='file' device='disk'>
  <driver name='file'/>
  <source file='/export/home/xvm/db-server/winsrv2003.img'/>
  <target dev='hda'/>
</disk>

Just change the file attribute of the <source/> element and the job is done.

The last thing you've got to do is import the new domain definition:

# virsh define your-new-domain.xml

Done! Now you can boot your new domain.

A last word of warning: chances are your just-cloned system shouldn't run alongside the old one without proper reconfiguration. Double-check your virtualized OS configuration for parameters such as:
  • network configuration (hostname, DNS, static IP addresses, etc.)
  • network services which may clash

Enjoy your virtualized server environment.