Wednesday, December 21, 2011

Google Authenticator: Using It With Your Own Java Authentication Server

The Google Authenticator application for mobile devices is a very handy application that implements the TOTP algorithm (specified in RFC 6238). Using Google Authenticator you can generate time-based passwords that can then be used to authenticate users against a server that shares each user's secret key.

Google Authenticator is mainly used to access Google services with two-factor authentication. However, you can take advantage of Google Authenticator to generate time-based passwords to be verified by a server of your own. Implementing such a server is pretty simple in Java, and you can draw some inspiration from the source code of the Google Authenticator PAM module. In this blog post, we will walk through a simple implementation of the TOTP algorithm in a Java class.

Generating the Secret Key

To generate the secret key we will use a random number generator to fill up a byte array of the required size. In this case, we want:
  • A 16-character Base32-encoded secret key: since the Base32 encoding of x bytes generates 8x/5 characters, we will use 10 bytes for the secret key.
  • Some scratch codes (using Google's jargon).

// Allocating the buffer
byte[] buffer =
  new byte[secretSize + numOfScratchCodes * scratchCodeSize];

// Filling the buffer with random numbers.
// Notice: SecureRandom is preferable to Random when
// generating key material, and you should reuse the same
// instance when generating many random sequences.
new SecureRandom().nextBytes(buffer);

Now we want to extract the bytes corresponding to the secret key and encode them using the Base32 encoding. I'm using the Apache Commons Codec library to get a codec implementation:

// Getting the key and converting it to Base32
Base32 codec = new Base32();
byte[] secretKey = Arrays.copyOf(buffer, secretSize);
byte[] bEncodedKey = codec.encode(secretKey);
String encodedKey = new String(bEncodedKey);
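
To double-check the 8x/5 rule, here's a minimal, unpadded Base32 encoder written against the plain JDK. It's an illustrative stand-in for the Apache Commons Codec Base32 class, not a replacement for it: 10 bytes always encode to exactly 16 characters, and the encoder reproduces the RFC 4648 test vectors.

```java
public class Base32Demo {
    private static final String ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";

    // Minimal, unpadded Base32 encoding (RFC 4648 alphabet).
    static String base32(byte[] data) {
        StringBuilder sb = new StringBuilder();
        int buffer = 0;
        int bits = 0;
        for (byte b : data) {
            buffer = (buffer << 8) | (b & 0xFF);
            bits += 8;
            while (bits >= 5) {
                sb.append(ALPHABET.charAt((buffer >> (bits - 5)) & 31));
                bits -= 5;
            }
        }
        if (bits > 0) {
            // left-align the remaining bits in a final character
            sb.append(ALPHABET.charAt((buffer << (5 - bits)) & 31));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // 10 bytes encode to exactly 8 * 10 / 5 = 16 characters
        System.out.println(base32(new byte[10]).length());
    }
}
```

With a 10-byte key there is no leftover bit group, which is exactly why no '=' padding is needed for our 16-character secret.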

Loading the Key Into Google Authenticator

You can manually load the key into Google Authenticator, or generate a QR code that the application can scan. If you want to generate a QR code using Google services, you can build the corresponding URL with code such as this:

public static String getQRBarcodeURL(
  String user,
  String host,
  String secret) {
  String format = "https://www.google.com/chart?chs=200x200&chld=M%%7C0&cht=qr&chl=otpauth://totp/%s@%s%%3Fsecret%%3D%s";
  return String.format(format, user, host, secret);
}
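
If you want to try the method out, here's a self-contained version with a sample user, host and secret (the values below are made up for illustration). The chl parameter embeds the otpauth://totp URI that Google Authenticator understands:

```java
public class QrUrlDemo {
    public static String getQRBarcodeURL(
            String user, String host, String secret) {
        // %% escapes a literal percent sign in a format string
        String format = "https://www.google.com/chart?chs=200x200&chld=M%%7C0&cht=qr&chl=otpauth://totp/%s@%s%%3Fsecret%%3D%s";
        return String.format(format, user, host, secret);
    }

    public static void main(String[] args) {
        // hypothetical sample values
        System.out.println(getQRBarcodeURL("alice", "example.com", "JBSWY3DPEHPK3PXP"));
    }
}
```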

Verifying a Code

Now that we've generated the key and our users can load it into their Google Authenticator application, we need the code required to verify the generated verification codes. Here's a Java implementation of the algorithm specified in RFC 6238:


private static boolean check_code(
  String secret,
  long code,
  long t)
    throws NoSuchAlgorithmException,
      InvalidKeyException {
  Base32 codec = new Base32();
  byte[] decodedKey = codec.decode(secret);

  // The window is used to accept codes generated in the near
  // past (and near future, to tolerate clock skew). You can
  // use this value to tune how far you're willing to go.
  int window = 3;
  for (int i = -window; i <= window; ++i) {
    int hash = verify_code(decodedKey, t + i);

    if (hash == code) {
      return true;
    }
  }

  // The validation code is invalid.
  return false;
}

private static int verify_code(
  byte[] key,
  long t)
  throws NoSuchAlgorithmException,
    InvalidKeyException {
  byte[] data = new byte[8];
  long value = t;
  for (int i = 8; i-- > 0; value >>>= 8) {
    data[i] = (byte) value;
  }

  SecretKeySpec signKey = new SecretKeySpec(key, "HmacSHA1");
  Mac mac = Mac.getInstance("HmacSHA1");
  mac.init(signKey);
  byte[] hash = mac.doFinal(data);

  // The low nibble of the last byte of the 20-byte SHA-1 hash.
  int offset = hash[hash.length - 1] & 0xF;
  
  // We're using a long because Java hasn't got unsigned int.
  long truncatedHash = 0;
  for (int i = 0; i < 4; ++i) {
    truncatedHash <<= 8;
    // We are dealing with signed bytes:
    // we just keep the first byte.
    truncatedHash |= (hash[offset + i] & 0xFF);
  }

  truncatedHash &= 0x7FFFFFFF;
  truncatedHash %= 1000000;

  return (int) truncatedHash;
}

The t parameter of the check_code and verify_code methods "is an integer and represents the number of time steps between the initial counter time t0 and the current Unix time." (RFC 6238, p. 3) The default size of a time step is 30 seconds, and that's the value Google Authenticator uses too. Therefore, t can be calculated in Java as

t = new Date().getTime() / TimeUnit.SECONDS.toMillis(30);
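
Putting verify_code and the computation of t together, the whole algorithm can be exercised in a compact, self-contained sketch (class and method names below are mine). Conveniently, it can be validated against the well-known RFC 4226 test key, the ASCII bytes of "12345678901234567890", whose expected 6-digit codes for counters 0 and 1 are 755224 and 287082:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class TotpDemo {
    // Computes the 6-digit code for time-step counter t and a raw (decoded) key.
    static int totp(byte[] key, long t) throws Exception {
        // Pack the counter as an 8-byte big-endian value.
        byte[] data = new byte[8];
        long value = t;
        for (int i = 8; i-- > 0; value >>>= 8) {
            data[i] = (byte) value;
        }

        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(key, "HmacSHA1"));
        byte[] hash = mac.doFinal(data);

        // Dynamic truncation as specified by RFC 4226.
        int offset = hash[hash.length - 1] & 0xF;
        long truncated = 0;
        for (int i = 0; i < 4; ++i) {
            truncated = (truncated << 8) | (hash[offset + i] & 0xFF);
        }
        return (int) ((truncated & 0x7FFFFFFF) % 1000000);
    }

    public static void main(String[] args) throws Exception {
        // RFC 4226 test key: the ASCII bytes of "12345678901234567890".
        byte[] key = "12345678901234567890".getBytes("US-ASCII");
        long t = System.currentTimeMillis()
            / java.util.concurrent.TimeUnit.SECONDS.toMillis(30);
        System.out.println(totp(key, t));
    }
}
```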

Download the Library

A ready-to-use library can be downloaded from GitHub, where Mr. Warren Strange kindly started a repository with the code from this post and packaged it as a Maven project. The library contains a complete implementation of the server-side code, better documentation and some example code in the test cases.

Conclusion

You can now let your users generate time-based passwords with the Google Authenticator application and authenticate them against your own authentication server.

As you can see, the required code is pretty simple and all of the required cryptographic functions are provided by the runtime itself. The only nuisance is dealing with signed types in Java.

Enjoy!

Friday, December 16, 2011

Using a ThreadPoolExecutor to Parallelize Independent Single-Threaded Tasks

The task execution framework, introduced in Java SE 5.0, is a giant leap forward in simplifying the design and development of multithreaded applications. The framework provides facilities to manage the concept of a task, thread life cycles and their execution policies.

In this blog post we'll describe the power, the flexibility and the simplicity of this framework showing off a simple use case.

The Basics

The executor framework introduces an interface to manage task execution: Executor. Executor is the interface you use to submit tasks, represented as Runnable instances. This interface also decouples task submission from task execution: executors with different execution policies all publish the same submission interface, so should you change your execution policy, your submission logic wouldn't be affected by the change.

If you want to submit a Runnable instance for execution, it's as simple as:

Executor exec = …;
exec.execute(runnable);

Thread Pools

As outlined in the previous section, how the executor is going to execute your runnable isn't specified by the Executor contract: it depends on the specific type of executor you're using. The framework provides some different types of executors, each one with a specific execution policy tailored for different use cases.

The most common type of executor you'll be dealing with is the thread pool executor: an instance of the ThreadPoolExecutor class (or one of its subclasses). Thread pool executors manage a thread pool, that is, the pool of worker threads that is going to execute the tasks, and a work queue.

You have surely seen the concept of a pool in other technologies. The primary advantage of using a pool is reducing the overhead of resource creation by reusing structures (in this case, threads) that have been released after use. Another implicit advantage of using a pool is the ability to size your resource usage: you can tune the thread pool sizes to achieve the load you desire, without jeopardizing system resources.

The framework provides a factory class for thread pools called Executors. Using this factory you'll be able to create thread pools with different characteristics. The underlying implementation is often the same (ThreadPoolExecutor), but the factory class helps you quickly configure a thread pool without using its more complex constructors. The factory methods are:
  • newFixedThreadPool: this method returns a thread pool whose maximum size is fixed. It will create new threads as needed, up to the configured maximum; once the number of threads hits the maximum, the pool keeps its size constant.
  • newCachedThreadPool: this method returns an unbounded thread pool, that is, a thread pool without a maximum size. However, this kind of thread pool tears down unused threads when the load decreases.
  • newSingleThreadExecutor: this method returns an executor that guarantees that tasks will be executed on a single thread.
  • newScheduledThreadPool: this method returns a fixed-size thread pool that supports delayed and periodic task execution.
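
As a quick taste of the factory methods, here's a minimal sketch (the task body is a made-up placeholder) that submits a value-returning task to a fixed-size pool. Note that submit and Future belong to the ExecutorService interface, discussed below:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FactoryDemo {
    // Submit a value-returning task to a fixed-size pool and wait for its result.
    static int runOnPool() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<Integer> result = pool.submit(new Callable<Integer>() {
            public Integer call() {
                return 6 * 7; // stand-in for real work
            }
        });
        int value = result.get(); // blocks until the task completes
        pool.shutdown();
        return value;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runOnPool());
    }
}
```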


This is just the beginning. Executors also provide other facilities that are out of the scope of this tutorial and that I strongly encourage you to study:
  • Life cycle management methods, declared by the ExecutorService interface (such as shutdown() and awaitTermination()).
  • Completion services to poll for a task status and retrieve its return value, if applicable.

The ExecutorService interface is particularly important since it provides a way to shutdown a thread pool, which is something you almost surely want to be able to do cleanly. Fortunately, the ExecutorService interface is pretty simple and self-explanatory and I recommend you study its JavaDoc thoroughly.

Basically, you send a shutdown() message to an ExecutorService, after which it won't accept newly submitted tasks but will continue processing the already enqueued jobs. You can poll for an executor service's termination status with isTerminated(), or wait for termination using the awaitTermination(…) method. The awaitTermination method won't wait forever, though: you have to pass the maximum wait timeout as a parameter.

Warning: a common source of errors and confusion is understanding why a JVM process never exits. If you don't shut down your executor services, thus tearing down the underlying threads, the JVM will never exit: a JVM exits when its last non-daemon thread exits.

Configuring a ThreadPoolExecutor

If you decide to create a ThreadPoolExecutor manually instead of using the Executors factory class, you will need to create and configure one using one of its constructors. One of the most complete constructors of this class is:

public ThreadPoolExecutor(
    int corePoolSize,
    int maxPoolSize,
    long keepAlive,
    TimeUnit unit,
    BlockingQueue<Runnable> workQueue,
    RejectedExecutionHandler handler);

As you can see, you can configure:
  • The core pool size (the size the thread pool will try to stick with).
  • The maximum pool size.
  • The keep alive time, which is a time after which an idle thread is eligible for being torn down.
  • The work queue to hold tasks awaiting execution.
  • The policy to apply when a task submission is rejected.

Limiting the Number of Queued Tasks

Limiting the number of concurrently executing tasks by sizing your thread pool represents a huge benefit for your application and its execution environment in terms of predictability and stability: unbounded thread creation will eventually exhaust the runtime resources, and your application might experience, as a consequence, serious performance problems that may even lead to application instability.

That's a solution to just one part of the problem: you're capping the number of tasks being executed, but you aren't capping the number of jobs that can be submitted and enqueued for later execution. The application will experience the resource shortage later, but it will eventually experience it if the submission rate consistently outgrows the execution rate.

The solution to this problem is:
  • Providing a blocking queue to the executor to hold the awaiting tasks. If the queue fills up, any newly submitted task will be "rejected".
  • The RejectedExecutionHandler is invoked when a task submission is rejected, which is why the verb rejected was quoted in the previous item. You can implement your own rejection policy or use one of the built-in policies provided by the framework.

The default rejection policy has the executor throw a RejectedExecutionException. However, the other built-in policies let you:
  • Discard a job silently.
  • Discard the oldest job and retry submitting the new one.
  • Execute the rejected task on the caller's thread.
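
The default policy can be observed with a deliberately tiny pool. This sketch (the pool and queue sizes are chosen purely for the demonstration) saturates a single worker and a one-slot queue, so that a third submission has nowhere to go and is rejected:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    // Returns true if the third submission is rejected by the default policy.
    static boolean thirdSubmissionRejected() {
        // One worker, one queue slot: the third task has nowhere to go.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            1, 1, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<Runnable>(1),
            new ThreadPoolExecutor.AbortPolicy()); // the default handler
        Runnable slow = new Runnable() {
            public void run() {
                try { Thread.sleep(200); } catch (InterruptedException e) { }
            }
        };
        boolean rejected = false;
        try {
            pool.execute(slow); // runs on the worker thread
            pool.execute(slow); // sits in the queue
            pool.execute(slow); // rejected: pool and queue are full
        } catch (RejectedExecutionException e) {
            rejected = true;
        } finally {
            pool.shutdownNow();
        }
        return rejected;
    }

    public static void main(String[] args) {
        System.out.println(thirdSubmissionRejected());
    }
}
```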

When and why would one use such a thread pool configuration? Let's see an example.

An Example: Parallelizing Independent Single-Threaded Tasks

Recently, I was called in to solve a problem with an old job my client had been running for a long time. Basically, the job is made up of a component that waits for file system events on a set of directory hierarchies. Whenever an event is fired, a file must be processed. The file processing is performed by a proprietary single-threaded process. Truth be told, given its nature, I don't know whether I could parallelize the processing of a single file, even if I were allowed to. The arrival rate of events is very high throughout part of the day, and there's no need to process the files in real time: they just need to be processed before the next day.

The current implementation was a mix and match of technologies, including a UNIX shell script responsible for scanning huge directory hierarchies to detect where changes were applied. When that implementation was put in place, the execution environment had at most two cores, and the rate of events was much lower: nowadays events are in the order of millions, for a total of between 1 and 2 terabytes of raw data to be processed.

The servers the client runs these processes on nowadays are twelve-core machines: a huge opportunity to parallelize those old single-threaded tasks. We've got basically all of the ingredients for the recipe; we just need to decide how to build and tune it. Some thinking was necessary before writing any code to understand the nature of the load, and these are the constraints I detected:

  • A really huge number of files is to be scanned periodically: each directory contains between one and two million files.
  • The scanning algorithm is very quick and can be parallelized.
  • Processing a file will take at least 1 second, with spikes of even 2 or 3 seconds.
  • When processing a file, there is no other bottleneck than CPU.
  • CPU usage must be tunable, in order to use a different load profile depending on the time of the day.

I'll thus need a thread pool whose size is determined by the load profile active at the moment the process is invoked. I'm inclined, then, to create a fixed-size thread pool executor configured according to the load policy. Since a processing thread is CPU-bound (it uses 100% of a core and waits on no other resources), the load policy is very easy to calculate: just take the number of cores available in the processing environment and scale it down using the load factor that's active at that moment (checking that at least one core is used at peak load):

int cpus = Runtime.getRuntime().availableProcessors();
int maxThreads = cpus * scaleFactor;
maxThreads = (maxThreads > 0 ? maxThreads : 1);

Then, I need to create a ThreadPoolExecutor with a blocking queue to bound the number of submitted tasks. Why? Well, the directory scanning algorithms are very quick and will generate a huge number of files to process very quickly. How huge? It's hard to predict, and its variability is pretty high. I'm not going to let the internal queue of my executor fill up indiscriminately with the objects representing my tasks (which include a pretty big file descriptor). Instead, I'll let the executor reject files when the queue fills up.

Also, I'll use ThreadPoolExecutor.CallerRunsPolicy as the rejection policy. Why? Because when the queue is full and all the threads in the pool are busy processing files, the thread submitting the task will execute it itself. This way, scanning pauses to process a file and resumes as soon as the current task finishes.
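
The effect of CallerRunsPolicy can be seen with the same tiny-pool trick used to demonstrate rejection: when the single worker is busy and the one-slot queue is full, the third task runs on the submitting thread itself. A sketch (pool sizes and sleep durations are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CallerRunsDemo {
    // Returns true if a rejected task ran on the submitting (caller's) thread.
    static boolean rejectedTaskRanOnCaller() throws InterruptedException {
        final String caller = Thread.currentThread().getName();
        final CopyOnWriteArrayList<String> executedOn =
            new CopyOnWriteArrayList<String>();
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            1, 1, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<Runnable>(1),
            new ThreadPoolExecutor.CallerRunsPolicy());
        Runnable task = new Runnable() {
            public void run() {
                executedOn.add(Thread.currentThread().getName());
                try { Thread.sleep(100); } catch (InterruptedException e) { }
            }
        };
        pool.execute(task); // picked up by the single worker thread
        pool.execute(task); // parked in the queue
        pool.execute(task); // rejected: runs right here, on the caller's thread
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return executedOn.contains(caller);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(rejectedTaskRanOnCaller());
    }
}
```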

Here's the code that creates the executor:


ExecutorService executorService =
  new ThreadPoolExecutor(
    maxThreads, // core thread pool size
    maxThreads, // maximum thread pool size
    1, // keep-alive time for idle threads
    TimeUnit.MINUTES, 
    new ArrayBlockingQueue<Runnable>(maxThreads, true),
    new ThreadPoolExecutor.CallerRunsPolicy());


The skeleton of the code is the following (greatly simplified):


// scanning loop: fake scanning
while (!dirsToProcess.isEmpty()) {
  File currentDir = dirsToProcess.pop();

  // listing children
  File[] children = currentDir.listFiles();

  // processing children
  for (final File currentFile : children) {
    // if it's a directory, defer processing
    if (currentFile.isDirectory()) {
      dirsToProcess.add(currentFile);
      continue;
    }

    // if it's a file, submit it for processing
    executorService.submit(new Runnable() {
      @Override
      public void run() {
        try {
          new ConvertTask(currentFile).perform();
        } catch (Exception ex) {
          // error management logic
        }
      }
    });
  }
}


// ...
        
// initiate an orderly shutdown: previously submitted tasks
// are executed, but no new tasks will be accepted
executorService.shutdown();
        
try {
  if (!executorService.awaitTermination(60, TimeUnit.SECONDS)) {
    // pool didn't terminate after the first try
    executorService.shutdownNow();
  }


  if (!executorService.awaitTermination(60, TimeUnit.SECONDS)) {
    // pool didn't terminate after the second try
  }
} catch (InterruptedException ex) {
  executorService.shutdownNow();
  Thread.currentThread().interrupt();
}

Conclusion

As you can see, the Java concurrency API is very easy to use, very flexible and extremely powerful. Some years ago, writing such a simple program would have taken much more effort. This time, I could quickly solve a scalability problem caused by a legacy single-threaded component in a matter of hours.

Wednesday, December 14, 2011

Hyperfocal Distance: Advanced Depth of Field and Focusing Tips

The Basics: Depth of Field

Focusing is invariably one of the tasks you perform while taking a shot. You also know that focusing isn't only about having your subject in focus: you can use depth of field as a composition technique to give your photo a particular mood.

Depth of field is the distance between the farthest and the nearest objects that appear in focus in your photo (we will clarify later what in focus means). You may use a shallow depth of field when you want to isolate your subject from the surrounding objects, as you can see in the following image:

Shallow Depth of Field

The yellow flower is in focus, while the background is blurred. The quality of the blurred part of the image is called bokeh, and it mainly depends on the chosen aperture and on the physical characteristics of the lens you're using.

On the other hand, other times you may want every part of your image to be in focus, such as in a typical landscape shot.

Being able to understand how you can control the depth of field is fundamental if you want to use it proficiently and get the shots you want.

Depth of field is mainly affected by these parameters:
  • The focal length of your lens: the greater the focal length, the smaller the depth of field.
  • The aperture you're using: the smaller the aperture, the greater the depth of field.
  • The distance to the subject: the shorter the distance to the subject, the smaller the depth of field.

As we'll see later, and as you've probably experienced yourself, it's much more difficult to get a shallow depth of field than a deep one. How many times have you striven to get a portrait with a good bokeh, without success? You tried opening up your aperture (reducing the f-number) but nothing: the background wasn't sufficiently blurred. Why?

We will soon find out. These rules are fairly basic and pretty well known to the average amateur photographer. However, they are only approximations of a more complicated formula, and sometimes you may strive without success to get the results you want even if you're following all of the advice mentioned above.

Understanding the Nature of Depth of Field

Depth of field behind and in front of the object in focus isn't symmetric: in most conditions, the depth of field is deeper behind the subject and shallower in front of it. We won't explore the details of the depth of field equations, but it's important that you realize the following:
  • The ratio between the focus zone behind a subject and the focus zone in front of it tends to 1 as the distance between the camera and the subject gets shorter, approaching the same order of magnitude as the lens focal length. Unless you're shooting with a macro lens, this won't be the case.
  • The focus zone behind the subject gets deeper as the distance from the subject increases, and it reaches positive infinity at a finite distance, usually called the hyperfocal distance.

What does this mean? Well, amongst other things it means that:
  • It's way more difficult to blur the foreground rather than the background.
  • If the distance from the subject is greater than the hyperfocal distance you aren't going to get that beautiful bokeh you're looking for, no matter how much you strive for it.
  • On the other hand, if you're looking for a picture with a really deep depth of field, just be sure your subject is farther than the hyperfocal distance.

Hyperfocal Distance

We now understand that the hyperfocal distance is responsible for at least some of the problems we've had getting the focus conditions we were looking for in our shots. The hyperfocal distance H can be expressed as:

H = f² / (N c)

where f is the focal length, N the aperture (f-number) and c the diameter of the circle of confusion. The circle of confusion, as suggested at the beginning of this post, is the criterion used to establish when a region of a photo can be considered in focus: it's the largest diameter that the blur circle, generated by a cone of light rays coming from the lens when a point is not perfectly in focus, can have while still being perceived as in focus. Being the diameter of a physical light spot on your sensor (or on your film), this value depends on the size of the sensor: the bigger the sensor, the bigger c can be while maintaining comparable sharpness. You can use 0.03 mm as a typical value for c.

Some properties of the hyperfocal distance are:
  • The bigger the focal length, the bigger H is. Please note that the relationship is quadratic: a lens with double the focal length will give a hyperfocal distance four times as big, keeping the other parameters fixed.
  • The bigger the f-number N (that is, the smaller the physical aperture), the smaller the hyperfocal distance.
  • When focusing on an object at distance H, the depth of field extends from H/2 to infinity.
  • When focusing on an object at distance H or greater, the ratio between the focus zone behind the subject and the focus zone in front of it is infinite.

But how big is H? Here are some values of H(f, N) for some common focal lengths and apertures (assuming c = 0.03 mm):
  • H(18mm, f/4) = 2.7 m
  • H(18mm, f/16) = 0.67 m
  • H(55mm, f/4) = 25.21 m
  • H(55mm, f/16) = 6.30 m
  • H(100mm, f/4) = 83.33 m
  • H(100mm, f/16) = 20.83 m
  • H(200mm, f/4) = 333.33 m
  • H(200mm, f/16) = 83.33 m

It's now apparent why focal length is often really important if you need a good bokeh. If you're shooting with an 18mm lens at f/4 and your subject is more than 2.7 meters away, there's no way to get a decent bokeh; even if the subject got closer, the bokeh wouldn't be that good either. On the other hand, this is the reason why wide lenses are really good at getting a really wide landscape in reasonable focus: even shooting with a 55mm lens at f/4 focused at the hyperfocal distance, any object farther than 12.6 m (25.21 m / 2) would be in focus.


We've understood why, if you want to shoot a subject at a given distance and get a good bokeh, you must take the hyperfocal distance into account:
  • If your subject is nearer than the hyperfocal distance, you can shoot and tweak your depth of field using the other parameters.
  • If your subject is farther than the maximum hyperfocal distance you can get with your lens, your only option is changing the lens.
  • If your subject is very close to the hyperfocal distance of the lens configuration you're using, you should consider changing the lens anyway to get a good bokeh (the reason will be explained in the next section).

Evaluating the Depth of Field

Learning your lens parameters is important and knowing the approximate hyperfocal distance of your lenses (at least for some apertures) is important if you need to quickly evaluate if the conditions in which you're going to take a shot are correct.

There's another advantage of knowing the hyperfocal distance: using a curious mathematical property of H, you can quickly evaluate the characteristics of the depth of field at distances smaller than H, without learning the complex and not-as-easy-to-evaluate depth of field equations. Here's how.

The near-end and far-end equations of the depth of field can be expressed in terms of H and s (the distance from the subject) when s is much larger than the focal length, which is always true unless you're doing macro photography:

DN = H s / (H + s)
DF = H s / (H - s)

These equations are pretty simple, but not simple enough for a photographer to use quickly while shooting, without the help of a calculator! If we now consider distances s = H / n (where n is a natural number), these formulas simplify even further:

DN = H / (n + 1)
DF = H / (n - 1)

  • The depth of field at a distance H/n (where n is an integer number) is the range [H/(n+1), H/(n-1)].

Much easier to calculate by mind! Also, it's apparent that for relatively small H or relatively big n you're going to have a shallow depth of field. You often won't even need to calculate the result, just remember the principle.

Using this trick, you can evaluate the depth of field approximately. For example: if you're shooting with a 200mm lens at f/4, you know that H is approximately 333 m. What's the depth of field for a portrait of a subject at 10 m? 10 meters is approximately 333/33, so that, from the above formula, the depth of field will be the range [333/34, 333/32] ≈ [9.8, 10.4] m: roughly 60 centimeters. Pretty shallow, indeed.
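
If you prefer to cross-check the exact equations rather than the H/n mental shortcut, a few lines of code are enough. This sketch (method names are mine) computes H and the depth-of-field range for the same 200mm, f/4, c = 0.03 mm configuration with a subject at 10 m; the exact range comes out as roughly [9.71, 10.31] m:

```java
public class DofDemo {
    // Hyperfocal distance in mm: H = f^2 / (N * c)
    static double hyperfocal(double focalMm, double fNumber, double cocMm) {
        return (focalMm * focalMm) / (fNumber * cocMm);
    }

    // Near and far ends of the depth of field, for subject distance s (all in mm).
    static double nearEnd(double h, double s) { return h * s / (h + s); }
    static double farEnd(double h, double s)  { return h * s / (h - s); }

    public static void main(String[] args) {
        double h = hyperfocal(200, 4, 0.03); // about 333,333 mm = 333.33 m
        double s = 10000;                    // subject at 10 m
        System.out.printf("H = %.2f m%n", h / 1000);
        System.out.printf("DOF = [%.2f, %.2f] m%n",
            nearEnd(h, s) / 1000, farEnd(h, s) / 1000);
    }
}
```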

From this formula it's also clear why the ratio between the focus zone behind the subject and in front of it goes down from infinity to 1 when the distance from the subject goes down from H.

Conclusion

In this blog post we've introduced the concept of hyperfocal distance and explained why it is so important for understanding the basic characteristics of depth of field. Depth of field is an important tool for you as a photographer and it's omnipresent in every photography course. However, very often a photographer isn't able to evaluate the depth of field he's going to obtain from a specific camera configuration, and he's left with trial and error, without even being able to assess whether the shot he's looking for is achievable at all.

The hyperfocal distance equation is very simple, and it's much simpler than many depth-of-field models you can find. Even if you don't calculate it exactly, knowing H is sufficient in most everyday situations.

Have fun.

Adobe Photoshop Lightroom Tutorial - Part XVI - Saving And Migrating Your Adobe Lightroom Presets

Part I - Index and Introduction
Part XVII - Tone Curve

As we've seen in previous posts of this tutorial, many aspects of Lightroom can be customized by users and saved into presets. There are many kinds of presets; the most commonly used are:
  • Develop presets.
  • Export presets.
  • External editor presets.
  • Import presets.
  • Metadata preset.
  • Watermarks.

Instead of repetitively applying the same configurations over and over again, you can permanently store them in a preset and load them when necessary. You can, for example, save a commonly used develop configuration (such as a color temperature setting) in a preset and apply it with just a click to a bunch of photos at a time. Or you can save metadata configuration into presets and automatically apply them during an import operation.

There are many reasons why you should know how and where Adobe Lightroom stores your presets. Some of the common scenarios where you would want to save and migrate them are:
  • Safeguarding your data against loss.
  • Sharing them across different Lightroom instances, probably because you're using more than one computer.

Imagine, for example, you create some custom adjustment brushes and save them as presets. You then migrate your catalog to resume working on another computer only to discover that your brushes are gone.

Maybe you thought that a Lightroom catalog was self contained: it is, but only to a certain degree.

Where Are Presets Stored?

Lightroom version 3 can store and use presets in two places:
  • In the preset folder, a unique folder per user account.
  • In the catalog directory (only if explicitly enabled).

By default, Lightroom only uses the user presets folder, unless you select the Store presets with catalog checkbox in the Lightroom preferences window, as shown in the following picture:

Lightroom Preferences Dialog

The user-wide presets folder is called Lightroom and its location is platform dependent. Fortunately, there's a quick way to determine which folder it is: open the Preferences dialog, navigate to the Presets tab and push the Show Lightroom Presets Folder… button (shown in the previous picture). In the case of the OS X operating system, this folder is located in ~/Library/Application Support/Adobe/Lightroom:

Lightroom Presets Folder in Mac OS X

If you decided to store the presets in your catalogs, you will find a copy of this directory into your catalog root directory.

How Can I Backup And Migrate My Presets?

This is really easy: copy them to a folder that Lightroom recognizes as a presets folder and they will be ready to use. Just pay attention to your Lightroom configuration, as detailed in the previous section, to copy presets from and to the correct location:
  • If you're storing the presets with the catalog, just synchronize the catalog across computers and no further action is needed.
  • If you're storing the presets in the user-wide presets folder, you need to back it up and synchronize it across computers manually.

An effective way to synchronize files across computers is rsync. Using rsync you can efficiently keep a folder in sync across different machines with almost no effort, transmitting only the minimum amount of data needed to bring the target in sync with the source. This is an especially important factor to take into account, since catalogs can get into the gigabytes range very quickly.

Which Approach Should I Use?

If you're using only one or a few catalogs, it may be convenient for you to change the default Lightroom behaviour and store the presets with your catalogs. Every time you back up a catalog, your presets will be backed up as well. And if you synchronize a catalog to another computer, all of your presets will be available on the other machine without further effort.

However, if you're using many catalogs, you may want to store commonly used presets outside the catalogs: otherwise, you would have to create them again for every new catalog.

You can also use a third and completely manual approach: you can manually manage which presets you want to store and at which level. Just look for the corresponding files (the presets folder structure is pretty self-explanatory) and copy them to where you need it.



Tuesday, December 13, 2011

JIRA Development Cookbook - A Book By Jobin Kuruvilla

Some days ago I obtained a copy of JIRA Development Cookbook, published by Packt Publishing and available online in multiple formats, including ePUB, Kindle and PDF. I started reading it with some curiosity because it seems a serious attempt to produce a comprehensive book about JIRA development.

Here are my first impressions. I haven't finished reading it yet, and I may review it more deeply in the future.



Book Contents

So far, I've got a very good impression of this book. It begins with a couple of introductory chapters about the JIRA development process and the plugin frameworks: they give you an overview of the overall development process, of the basic APIs provided to the developer and of the tools you're going to use, and they are required reading if you're new to JIRA development.

The following chapters are pretty self contained and each one covers a different aspect of the JIRA customization process:
  • Custom fields.
  • Issues.
  • Workflows.
  • Gadgets and reporting tools.
  • User interface.
  • Remote access to JIRA.
  • Various database management, customization and migration tasks.
Every chapter is a comprehensive tutorial with step-by-step guides, complete code examples, troubleshooting information and more in-depth discussions to help you understand how JIRA works internally. Because of their self-contained structure, you can also freely jump from chapter to chapter, according to your needs.

First Impressions

I've made several JIRA plugins and a couple of years ago, at the very first attempts, I remember digging into Atlassian documentation to find out every single bit of information I could possibly find.

I'm not the tutorial kind of guy, though: the first thing I always look for are the specs. However, I recognize that it's very important to have well-structured documentation, and possibly some guided tours, to significantly reduce the learning time. That's exactly what I felt was missing: Atlassian documentation is very good and the JIRA Development Hub provides examples and small tutorials for practically every aspect of JIRA you might want to customize. Unfortunately, as often happens with wiki-style online documentation, it lacks structure to some degree.

That is exactly what this book provides: a comprehensive guide you can use to effectively kick start your JIRA customization project, be it a plugin, a JIRA service or a customized user experience.

Sunday, December 11, 2011

Give Your Photos a Dreamy Haze Simulating a Diffusion Filter

You've just finished tweaking a great photo of yours: you think it's great, but you feel it's missing something. The image conveys a feeling of peace and relaxation, and it would really benefit from that dreamy haze you've often seen in other people's portraits. But you don't know how to do it.

One of the tools commonly used to give that look and feel to an image is a diffusion filter. Dreamy wedding pictures or studio portraits (such as newborn babies') are often shot that way.
The problem is:
  • You need such a filter.
  • You need a lens which you can put that filter on.
  • You need to carefully plan such a shot in advance.

Fortunately, it's not difficult to simulate such an effect in post production: you only need a photo editing tool with basic filters and layer support (such as Adobe Photoshop, Adobe Photoshop Elements or The Gimp). In this blog post, we're using a technique that's particularly suitable for portrait photography. If you're trying to tweak a landscape image, however, I suggest you take a look at the blog post about the Orton effect and see which technique best suits your image.

Beware: as usual, this is no excuse not to try and get things right out of the camera.

The Basics

To simulate the diffusion pattern provided by a diffusion filter, we're going to blur the image with a blurring filter. The blurred image itself isn't sufficient, because you're going to lose too much detail. Instead, we'll blend it with your original image to achieve the diffusion effect.

The original image we'll use in this tutorial is the following:

Original Image

I don't think this image needs an additional dreamy look and feel: in fact, this is the final post processed shot and I'm keeping it as is. However, I chose it because it's a good candidate to show you some problems you may encounter on the way.

The first thing we'll do is duplicate the background layer:

Duplicate Layer

Next, we'll blur the new layer using the Gaussian Blur filter. The rule of thumb we discussed in the blog post about the Orton effect still holds: you need a blur radius wide enough to blur the image while preserving overall detail. In this case (the image size is 4208x3264 pixels), I'll use a 30 pixel radius to achieve the following blurred effect:

Blurred Layer - Radius: 30 px

As you can see, the image is blurred but overall detail is not lost: in the eyes, for example, you can clearly see the iris edges and most of the features of the kid's face.

Now we're ready to blend the two layers. Depending on the image, we're probably going to use one of the following blending modes:
  • Screen.
  • Overlay.
  • Multiply.

If you recall the discussions on the previous blog posts, these three blending modes act differently on the image:
  • Screen will produce a brighter image, since it brightens each pixel in proportion to the brightness of the corresponding pixel in the other layer.
  • Multiply will produce a darker image, since it darkens each pixel in proportion to the darkness of the corresponding pixel in the other layer.
  • Overlay will both Screen and Multiply the image, according to pixel brightness, producing a more contrasted image while preserving highlights and shadows.
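For the curious, these three behaviors reduce to simple per-pixel arithmetic. The sketch below uses the standard textbook formulas on channel values normalized to [0, 1]; Photoshop and The Gimp work per channel on 8-bit data and may differ in details, and the class and method names are mine:

```java
// Per-pixel formulas for the three blending modes, on channel values
// normalized to [0, 1]. Standard textbook definitions; names are mine.
public class BlendModes {

    // Multiply: the result is never brighter than either input.
    static double multiply(double base, double blend) {
        return base * blend;
    }

    // Screen: the "inverse" of multiply; never darker than either input.
    static double screen(double base, double blend) {
        return 1.0 - (1.0 - base) * (1.0 - blend);
    }

    // Overlay: multiply where the base pixel is dark, screen where it
    // is bright - hence the contrast boost.
    static double overlay(double base, double blend) {
        return base < 0.5
            ? 2.0 * multiply(base, blend)
            : 1.0 - 2.0 * (1.0 - base) * (1.0 - blend);
    }

    public static void main(String[] args) {
        // A dark pixel gets darker, a bright pixel brighter: contrast.
        System.out.println(overlay(0.25, 0.25)); // 0.125
        System.out.println(overlay(0.75, 0.75)); // 0.875
    }
}
```

Note how Overlay pushes a dark pixel darker and a bright one brighter: that's where the extra contrast (and, as we'll see below, the extra saturation) comes from.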

As a rule of thumb:
  • Assuming the exposure of the image is already correct, we're going to use the Overlay blending mode.
  • If your image is a bit underexposed, it may benefit from using Screen instead.
  • If your image is a bit overexposed, it may benefit from using Multiply instead.

As usual, you've got to try and decide yourself. In this case, since I'm happy with the image exposure, we'll use the Overlay blending mode. The result is:

Final Result - Layers Overlaid
You can clearly perceive the diffusion pattern and the dreamy haze it sheds over the image.

Fine Tuning

This is just the beginning. In this case, I find the image is now much too dark. The quickest way to fix it is to lower the overlaid layer's opacity. This is the result with its opacity set to 80%:

Overlaid Layer - Opacity: 80%

Beware that setting the opacity too low will also remove the dreamy haze: in this case, I usually won't go below 80%, and I would try to fix the image exposure by adjusting the original layer instead.

Using the same technique, you can control the quantity of "blur" that you're going to add to the image.

Obviously, you could also add an additional layer below the blurred one and fix exposure on it. You could use an adjustment layer, if available, or just use the Screen blending mode to raise the image exposure. The following picture is the result of 3 blended layers (from bottom to top):
  • The original.
  • A copy of the original, using the Screen blending mode and 50% opacity.
  • A copy of the original, Gaussian-blurred, using the Overlay blending mode.


Three Layers
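If you want to see the arithmetic behind such a stack, here is a sketch of how one pixel composites, bottom to top, with per-layer opacity. Values are normalized to [0, 1] and the names and sample values are mine; real editors do this per channel:

```java
// A sketch of a three-layer stack: original, screened copy at 50%
// opacity, overlaid blurred copy. Normalized [0, 1] values; names mine.
public class LayerStack {

    static double screen(double base, double blend) {
        return 1.0 - (1.0 - base) * (1.0 - blend);
    }

    static double overlay(double base, double blend) {
        return base < 0.5
            ? 2.0 * base * blend
            : 1.0 - 2.0 * (1.0 - base) * (1.0 - blend);
    }

    // Opacity linearly mixes the blended result back with the base.
    static double withOpacity(double base, double blended, double opacity) {
        return base * (1.0 - opacity) + blended * opacity;
    }

    public static void main(String[] args) {
        double original = 0.4;  // a pixel of the original layer
        double blurred  = 0.45; // the same pixel, Gaussian-blurred

        // 1) Screen a copy of the original at 50% opacity to lift exposure.
        double lifted = withOpacity(original, screen(original, original), 0.5);
        // 2) Overlay the blurred copy at full opacity for the dreamy haze.
        double result = withOpacity(lifted, overlay(lifted, blurred), 1.0);
        System.out.println(result);
    }
}
```

Lowering a layer's opacity, as done in the previous section, is exactly this linear mix between the untouched pixel and the blended one.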

But The Image Is Over-Saturated

The biggest problem with the images we're producing, however, is not exposure: it's color saturation. This is a common issue with some blending modes such as Overlay: raising the contrast of an image will, in general, boost color saturation as a side effect. This may be good in some kinds of photography, such as landscape, but bad in others, such as portrait photography. A more saturated flower may look good but... do you notice the orange hue of the kid's skin in the final images we got so far? That's something you should always try to avoid. Ultimately, this is the reason why I chose this image for this tutorial.

Fortunately, this is again very easy to fix. Assuming once more that we're happy with the initial color saturation, I usually reduce the saturation of the blurred layer until I'm satisfied. If the photo editing tool you're using supports adjustment layers, this is really easy: just add a saturation adjustment layer and play with it until you like the result. If your editing tool does not support them, you'll have to use the Saturation tool back and forth until you're happy with it.

However, especially in portrait photography, I usually convert the upper layer to black and white, thus removing all of the color saturation from the overlaid layer. Once more, depending on the tool you're using, you may also be able to achieve better results fine tuning the black and white conversion. Adobe Photoshop, for example, lets you adjust the RGB channels intensity during the conversion:

Adobe Photoshop Elements - Black and White Conversion Window

If you're not happy with the result, you can tweak the channel intensities. Depending on the image, I'm usually pushing up or down the red channel intensity until I like the end result.

This is the final image, using a black and white blurred layer:

Final Image - Black and White Blurred Layer Overlaid

It's far more natural, even if still very contrasted.

Conclusion

We've seen how you can use very simple layer manipulations to achieve a variety of effects. So far, we've only used the Screen and Overlay blending modes, and there's a world of possibilities for you to discover.

As a rule of thumb, you've also seen how the Overlay blending mode affects your image's contrast and color saturation. Although I recognize that "punchy" images may look appealing at first, I don't really like excessive color saturation in portraits and I always end up with cooler images. That's just a matter of taste, however.

Also, I find that the dreamy haze best suits less contrasted and brighter images.


Thursday, December 8, 2011

Smoothing the Skin Reducing Local Contrast Using Layers

In a previous post in my Lightroom series I took advantage of the Clarity tool to smooth the skin of the model. However, the Clarity tool is not available in either Photoshop or The Gimp, and many people ask how the same effect can be reproduced in these programs.

I answered this question in a previous post (Clarity Adjustment in Photoshop). The answer, however, was pretty theoretical, so in this post we will use the technique we learnt to smooth the skin in an image.

The original image, once more, is the following:

Original Image (Cropped)

The first thing we're going to do is create a new layer by duplicating the original one. Once we've done that, we use the High Pass filter to detect the transitions. The radius you need depends on the image (and corresponds to the amount of local contrast we're going to detect). For this 16-megapixel image I'll use a radius of 3 pixels:

High Pass Filter - Radius: 3 pixels

We end up with a neutral grey layer where transitions are marked by lighter and darker pixels whose intensity will depend on the corresponding (underlying) pixel:

High Pass Layer

We know from our previous post that the information contained therein can be used both to raise and to reduce the local contrast of our image. Since, in this case, we want to reduce it, we have to invert the layer, and we end up with:

High Pass Layer - Inverted

If we now set this layer's blending mode to Overlay, the result is the following:

Resulting Image: Lowered Local Contrast

What has happened? The neutral gray pixels leave the underlying pixels unaffected. Lighter ones will screen the underlying pixels and darker ones will multiply them. Since pixels differ from neutral gray only at the transition points, a plain high pass layer would increase the local contrast. However, since we've inverted it, the local contrast is reduced instead.
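A tiny numeric sketch may help (normalized [0, 1] values, names mine): take two pixels straddling an edge, build the inverted high pass values for them, and overlay those back onto the originals.

```java
// Numeric sketch of inverted high pass + Overlay on a single pair of
// pixels straddling an edge. Normalized [0, 1] values; names are mine.
public class InvertedHighPass {

    static double overlay(double base, double blend) {
        return base < 0.5
            ? 2.0 * base * blend
            : 1.0 - 2.0 * (1.0 - base) * (1.0 - blend);
    }

    // High Pass output: difference from the local (blurred) average,
    // re-centered on neutral gray (0.5).
    static double highPass(double pixel, double blurred) {
        return pixel - blurred + 0.5;
    }

    public static void main(String[] args) {
        // Two pixels across an edge, with their Gaussian-blurred values.
        double dark = 0.3, darkBlur = 0.4;
        double bright = 0.7, brightBlur = 0.6;

        // Invert the high pass layer, then overlay it onto the original.
        double newDark = overlay(dark, 1.0 - highPass(dark, darkBlur));
        double newBright = overlay(bright, 1.0 - highPass(bright, brightBlur));

        // The gap across the edge shrinks: local contrast is reduced.
        System.out.println(bright - dark);       // original edge contrast
        System.out.println(newBright - newDark); // smaller after the blend
    }
}
```

The dark pixel is lifted and the bright one is pulled down, so the step across the edge shrinks: exactly the local contrast reduction we're after.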

The skin now looks smoother and you can use the high pass radius to fine tune the smoothness. However, eyes aren't sharp any more and this is something we want to avoid. Now, we have to apply this local contrast reduction only locally. How? Using a layer mask.

A layer mask is literally that: a mask, and it behaves as such. Imagine you take a sheet of white paper and print your original layer on it. Then you take another sheet and print the high pass layer on it. If you could put this sheet over the other in overlay mode, you would end up with the result we just achieved. Now take scissors and cut a couple of holes in the upper sheet just where the eyes of the model are. What would the result be? The eyes from the lower layer would show through, unaffected by the upper layer.

Layer masks work just like that: the only difference is that you make "holes" using the black color. Layer masks, moreover, are more sophisticated: you can make "semi-transparent" holes using shades of grey. Funny, isn't it?
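The paper-and-scissors analogy translates to one line of per-pixel arithmetic. The sketch below assumes normalized [0, 1] values and hypothetical names:

```java
// The paper-and-scissors analogy in code: a mask value of 1 (white)
// keeps the effect, 0 (black) is a hole, and grays are partial holes.
// Normalized [0, 1] values; names are mine.
public class LayerMask {

    // out = effect where the mask is white, base where it is black.
    static double applyMask(double base, double effect, double mask) {
        return base * (1.0 - mask) + effect * mask;
    }

    public static void main(String[] args) {
        double skin = 0.55, smoothedSkin = 0.60;
        System.out.println(applyMask(skin, smoothedSkin, 1.0)); // white mask
        System.out.println(applyMask(skin, smoothedSkin, 0.0)); // black "hole"
        System.out.println(applyMask(skin, smoothedSkin, 0.5)); // halfway
    }
}
```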

Let's add a layer mask to our inverted high pass layer:

Layer Mask

As you can see, the layer mask is white: this means that the entire layer now contributes to the overlay effect with the lower layer. Let's now take a black brush and paint over the right eye and its eyebrow. The result is:

Right Eye and Eyebrow Unmasked

As you can see, the right eye and its eyebrow have recovered all of their local contrast: pixels that we painted with black aren't affected by the upper layer any more. Let's finish painting and we end up with this result:

Final Result

You can also notice how the layer mask icon in the layers palette reflects our mask:

Resulting Mask

Conclusion

Although it's not available as a standalone tool, you've learnt how to reduce the local contrast (negative clarity, in Lightroom jargon) of a selected part of your image.

Layer masks are powerful, and you can use the same technique to selectively apply any kind of effect. In this case, you could have used a blurred layer to smooth the skin, instead of going through the steps described above. Or you could have used a bigger radius to soften the skin even more, as shown in the following picture. The only limit is your imagination.

Result with High Pass Filter, Inverted, Radius: 5 px 

With the same technique, you can enhance the eyes of the model (as described in the Lightroom tutorial), applying the same adjustments in a new layer and masking out the parts of the image you want to leave unaffected.


Exposure Compensation in Post Production Using Layer Blending Modes

Very often I hear people complain about what should apparently be a very easy thing to do: modifying the exposure of an image in post production. The truth is that if the adjustment you need is relatively small, the results are going to be satisfactory. Unfortunately, as soon as you need to push the exposure adjustment a bit farther, the results will often disappoint you.

Why?

Usually because of two factors:

  • Your image does not have sufficient dynamic range to achieve the result you want (that's why you should shoot RAW if you can).
  • People don't clearly understand the physics of the human eye.


As far as the first factor is concerned, I cannot help but repeat it over and over again: you should shoot RAW if you can. Double check it, even if you think you cannot. Some Canon point and shoots, for example, can be tweaked to shoot RAW.

As far as the second factor is concerned, here is yet another quick wrap-up.

A word of warning: I'm a mathematician, but this is not a rigorous post. This is a quick tutorial to help you better understand some tools that are often misunderstood.

The Human Eye as a Sensor

As we've stated many times, one of the things photographers often overlook is that the human eye is a logarithmic sensor. What does that mean? It basically means that every time the quantity of light doubles, whatever its frequency in the visible spectrum, you'll notice a comparable "brightness" increase.

This also means that if you need to modify the exposure in post production, you cannot overlook this effect. You cannot just bump the channels up and down, because luminance ratios must be preserved. If you don't preserve them, the dynamic range of your image will suffer and you'll end up with a flat, hazy image.

The Problem with the Brightness Tool

The problem with the Brightness tool is that, despite its name, it is not suitable for applying such a modification. It depends on the tool you're using, but as far as the most commonly used software is concerned (Photoshop and The Gimp), a brightness adjustment does not do what you think it does.

Roughly speaking, the brightness adjustment just adds a correction to the pixel values:

n = o + d

where o is the old value and d is the selected brightness adjustment. It might look right, doesn't it? It does shift the histogram left or right (depending on the sign of d). But it does not preserve ratios and, as such, it is not the tool you need to tweak an image's exposure while preserving its dynamic range.

Here's what happens trying to fix an artificially underexposed TIFF image using just the Brightness tool (in this post I'll use The Gimp):

Original Underexposed Image

Brightness raised to 100


This test image is badly underexposed, but the detrimental effect the Brightness adjustment has on an image's dynamic range can never be avoided.

If you like mathematics, let d go to infinity and think about what happens to the ratio between two pixel values: it just goes down to 1. That's why your images will soon start to lose contrast.
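You can verify this with a few lines of code. The sketch below applies the n = o + d adjustment (with the clamping to the 8-bit range that image editors perform; the names are mine) to a 4:1 shadow/highlight pair:

```java
// What the formula n = o + d does to the ratio between two pixels.
// Hypothetical 8-bit values; clamping to [0, 255] mirrors what image
// editors do.
public class BrightnessRatio {

    static int brighten(int value, int d) {
        return Math.min(255, Math.max(0, value + d));
    }

    public static void main(String[] args) {
        int shadow = 20, highlight = 80; // ratio 4:1

        for (int d : new int[] {0, 50, 100, 150}) {
            double ratio = (double) brighten(highlight, d) / brighten(shadow, d);
            System.out.println("d=" + d + "  ratio=" + ratio);
        }
        // The ratio falls from 4.0 toward 1.0 as d grows:
        // the image flattens and loses contrast.
    }
}
```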

Many people will then try to fix the contrast but that won't work, either.

What Can Be Done?

Preserving the ratios! To preserve the ratios, you need to correct pixel values using a multiplier. That's what the Multiply and Screen blending modes are made for:

  • If you need to lower the exposure of your image, you can blend a layer with a copy of itself using the Multiply blending mode.
  • If you need to raise the exposure of your image, you can blend a layer with a copy of itself using the Screen blending mode.


What do these blending modes do?

Basically, their effect is:

  • Screen lightens a pixel in the lower layer proportionally to the "brightness" of the corresponding pixel in the upper layer.
  • Multiply darkens a pixel in the lower layer proportionally to the "darkness" of the corresponding pixel in the upper layer.

What happens if you blend a layer with a copy of itself using these blending modes? It applies the very effect we were looking for!
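In per-pixel terms, screening a value v with itself gives 1 - (1 - v)^2, and multiplying it with itself gives v^2. A minimal sketch (normalized [0, 1] values, names mine):

```java
// Blending a layer with a copy of itself, per pixel, on normalized
// [0, 1] values. Names are mine.
public class SelfBlend {

    static double screenWithItself(double v) {
        return 1.0 - (1.0 - v) * (1.0 - v);
    }

    static double multiplyWithItself(double v) {
        return v * v;
    }

    public static void main(String[] args) {
        // A midtone is lifted, and highlights can never clip past 1.0.
        System.out.println(screenWithItself(0.5)); // 0.75
        System.out.println(screenWithItself(0.9));
        // Multiplying squares the values, so a 2:1 ratio becomes 4:1.
        System.out.println(multiplyWithItself(0.2) / multiplyWithItself(0.1));
    }
}
```

Note that a 2:1 ratio becomes 4:1 after multiplying: ratios are transformed consistently (in log space, they double) instead of collapsing toward 1:1 as with the additive Brightness adjustment. That's what keeps the image from going flat.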

Again, if you like mathematics, just think about the definition of the exponential function. What happens when you're modeling a phenomenon whose variation (derivative) depends solely on its intensity? Refine this concept: is there a function whose derivative at a point x depends only on the value of the function itself at x? Yes: the exponential function.

We're applying an exponential transformation to a matrix of points that will be fed into a logarithmic sensor (the human eye). Just what we were looking for.

Here's what happens when we screen the test image we used before with itself (twice):

Screen Blending Mode

This is a much better result indeed.

If you're new to layers and blending modes, well, it's time you get serious about them. Here's a screenshot of The Gimp layer palette of the previous image:

The Gimp - Layer Palette

Conclusion

The Brightness and Contrast tools are often misused and it's important you understand what's going on under the hood. If you need to apply an exposure compensation, they're not the right tools. Instead, take some time to learn about layer blending modes. The Multiply and the Screen blending modes are just the beginning. Here are some additional tips:

  • You can mix and match them to achieve more sophisticated results and you can fine tune the effect tweaking the layer transparency.
  • If screening or multiplying two layers is too much compensation for your image, just make the layer a bit more transparent.
  • If you're looking for a way to raise the overall contrast of an image, try to screen and multiply it.
  • You may also want to take a look at the Overlay blending mode, which is a mix of Screen and Multiply (it multiplies dark colors and screens bright ones).



Adobe Photoshop Lightroom Tutorial - Part XIV - Using Presence Controls to Smooth the Skin

Part I - Index and Introduction
Part XV - Speeding Up Your Workflow Using Presets and the Painter Tool

Local Contrast to Make Softer or Rougher Surfaces

In a couple of previous posts (see here and here) we learned about local contrast and how this effect can easily be achieved in Adobe Photoshop Lightroom. Local contrast, or Clarity in Lightroom's jargon, refers to the amount of contrast that's present locally, in areas of color and tone transitions in your image. Local contrast allows you to tweak your image without modifying its overall contrast and tonal scale:
  • Increasing the local contrast "sharpens" transitions and gives rougher surfaces. 
  • Decreasing the local contrast gives smoother surfaces.
Although clarity can be adjusted at the image level (I often raise it a bit to get punchier images), by its own nature local contrast is an adjustment that you often want to brush into your image.

Smoothing the Skin

One of the uses of negative clarity is skin smoothing. Skin isn't such a smooth surface: unless your model's skin is perfect and your lighting conditions are optimal, it won't appear as smooth as we'd like. Here, we're not talking about skin imperfections (you're going to remove those manually with other tools) but about skin texture.

Depending on the image you'd like to get, you may need to correct the skin texture somehow. If you want to give your portrait a "dreamy" and "diffused" look and feel, this is a way to achieve it. As usual, there are plenty of ways of doing it in Photoshop, but this post will focus on the clarity (local contrast) adjustment. One of the good things about this adjustment is that it's really easy to use and usually gives very good results with little effort.

To get an idea of what you're going to achieve, lower the overall clarity of a portrait and see what happens. The reduced local contrast is going to take away sharpness from your model's skin and smooth its surface. But unless you're happy with this global result, you'd better take a brush and apply clarity locally.

Adobe Photoshop Lightroom ships with a Soften Skin brush since version 3. This brush is defined as follows:

Adobe Photoshop Lightroom - Soften Skin Brush

As you can see, this brush applies a negative Clarity adjustment of -100 (the minimum) and raises the Sharpness a bit to balance the extreme smoothing effect. There's no silver bullet here: you've got to try and tweak the brush parameters yourself until you get the result you expect.

Personally, I don't like this brush: it's too "extreme". I'd rather use a Clarity adjustment in the [-50,-60] range and adjust sharpness and saturation according to my taste.

A personal suggestion: in portraits where you're looking for a really smooth skin, try to desaturate the skin a bit. I find the results are more natural.

A Test Image

This is a crop from an image in which I want to brush in some negative clarity to smooth the skin of the model. The image was taken with bounce flash in a small room: the flash was bounced at an angle close to 80 degrees, and the light hasn't evenly lit the model's face. Remember: photography is all about light, and you should try to get the results you want right out of the camera. Unfortunately, sometimes we cannot prepare the setup we need to get the right shot, and that's when it's right to fix things in post production.

Original Image (Cropped)

Look at the original image. This is a cropped section of a shot I took by surprise: the model wasn't even wearing any makeup. Besides having to remove some skin imperfections, I really don't like the overall texture of the skin. Also, the bounce flash hasn't properly lit the eyes and the skin underneath them. That's what I'm trying to fix with a negative clarity adjustment: I'll try to reduce the local contrast without affecting the overall texture of the skin too much, and without completely removing those shadows and ending up with an unnaturally flat image.


Final Image (Cropped)

This is the result after brushing in some negative clarity (-70) and some sharpness. The result is much smoother, but not yet unnatural. Since it's not a studio image and I want to preserve the overall look of the shot, I don't want to go any further.

In this image I also tweaked the eyes as explained in the previous post.

