
Thursday, January 10, 2013

Backups Using Amazon S3 and Glacier: A Clarification

Some days ago I published a post about the recent integration of Glacier as a new storage class in Amazon S3 and how it paves the way to new and interesting use cases, even for home users, despite being a service more geared towards enterprise users. The post was then kindly cited by Ted Forbes in the latest instalment (at the time of writing) of his excellent photography podcast, The Art Of Photography: Episode 118, Photo Storage with Amazon Glacier and S3. The podcast has surely driven a great deal of traffic to my blog post, and I've received lots of emails with questions related to it.

Many of them asked whether I had ever considered, or why the post did not consider, other more user-friendly cloud backup solutions. In fact, most of these comments focussed on a completely different kind of service, with a particular emphasis on services which enable easy, automatic backups of entire computers, drives, or folders.

Now, it was never my intention to go into the details of that kind of offering, and I won't do it now. But I do think that a followup to the original post is necessary to clarify a couple of things.

First of all, I want to stress the relevance of a fundamental assumption that I took for granted when choosing S3 and Glacier as a cold storage service for some files of mine: I want to offload files from my disks, assuming I'm done working with them and almost certainly won't need to access them in the medium term (if not in the foreseeable future). Ted did a great job in his podcast episode of explaining how Amazon S3 and Glacier can be used and of suggesting some interesting use cases. He certainly did a better job than I did in the original blog post of showing that Glacier is an interesting option for offloading big files we don't use often to a reliable and affordable cloud storage service.

In fact, in my current workflow there's no room (nor will) for strategies other than offloading from my workstations, and I suspect many users out there have similar workflows and issues (I guess photographers do). Some kinds of content are very "bulky": photographs and video footage can easily reach the tens of gigabytes per work session, if not more, and even an amateur photographer like me can easily outgrow his hard disk, no matter how big it is. Of course, I've always kept on expanding my disk pools at home to satisfy the ever-increasing need for space, but I'm certainly not willing to keep unnecessary files on the internal hard disks of my machines beyond the amount of time strictly necessary to work on them. Once I'm done with them, I either back them up in my home storage appliance (if I foresee the need to have them quickly available) or I offload them.

That's the use case Glacier is great for! I'm not asking for anything more, nor anything less, than an affordable and reliable place to store them until I need them, should that ever happen.

To make a long story short, I agree there are lots of alternatives out there, each with its own features, strengths and shortcomings, and a different level of complexity. Google Drive, for example, is just great for keeping a relatively small amount of content organised and synchronised across a wide range of devices. CrashPlan's offerings for home users are a great way to start backing up entire computers and drives easily. Zoolz has similar offerings, with distinct online and cold storage tiers.

Nevertheless, what I really don't like about some of these services is the fact that they sometimes charge depending on the number of users and/or computers you're backing up. I'm using many different devices and, because of my workflow, they're all still pretty easy to set up and contain pretty much the same data: I just keep locally the applications I need and the data I'm working on. Everything else is kept off the internal hard drives. This approach is very convenient because I never worry about the loss of a machine: I just need to reinstall the OS and the applications which, of course, I always keep available. As an OS X user I don't even use Time Machine, because it's quicker (much quicker) to just reinstall the OS and the apps I need, let alone synchronising tens of gigabytes over the internet. For me it's just nonsense: I need to work fast and to recover fast. But I recognise it's certainly appealing to lots of other users with different needs.

For that reason, in my workflow I really don't need nor want any client synchronising anything over the wire. I just load the data I'm working on onto my workstations (a photo session, for example), back it up locally elsewhere (as you should always do with assets you need and cannot lose) and, when I'm finished with it, I offload it somewhere else and delete it from my drives.

That somewhere is currently Amazon S3 and Amazon Glacier: it's affordable, it's easy to use and no matter how many devices I'm working on, I can always grab my data if I need it.

Sunday, December 30, 2012

Amazon S3 and Glacier: A Cheap Solution for Long Term Storage Needs

In the last few years, lots of cloud-based storage services began providing relatively cheap solutions to many classes of storage needs. Many of them, especially consumer-oriented ones such as DropBox, Google Drive and Microsoft SkyDrive, try to appeal to their users with free tiers and collaborative and social features. Google Drive is a clear case of this trend, having "absorbed" many of the features of the well-known Google Docs applications and seamlessly integrated them into easy-to-use applications for many platforms, both mobile and desktop.

I've been using these services for a long time now and, despite being really happy with them, I've been looking for alternative solutions for other kinds of storage needs. As an amateur photographer, for example, I generate a lot of files on a monthly basis, and my long-term backup storage needs currently grow by tens of gigabytes per month. If I used Google Drive to satisfy those needs, supposing I'm already in the terabyte range, I'd pay almost $50 per month! Competitors don't offer seriously cheaper solutions either. At that price, one could argue that a decent home-based storage setup would be a better answer to one's problems.

The Backup Problem

The problem is that many consumer cloud storage services are not really meant for backup, and you're paying for a service which keeps your files always online. Typical backup strategies, on the other hand, involve storing files on media which are kept offline, which typically reduces the total cost of the solution. At home, you could store your files on DVDs and keep hard disk space available for other tasks. Instead of DVDs, you could use hard drives as well. We're not considering management issues here (DVDs and hard drives can fail over time, even if kept off and properly stored), but the important thing to grasp is that different storage needs can be satisfied by different kinds of storage classes, minimizing the long-term storage cost of assets whose size is most probably only going to grow over time.

This kind of issue has been addressed by Amazon, which recently rolled out a new service for low-cost, long-term storage needs: Amazon Glacier.

What Glacier Is Not

As soon as Glacier was announced, there was a lot of talk about it. At a cost of $0.01 per gigabyte per month, it clearly seemed an affordable solution for this kind of problem. The cost of one terabyte would be $10 per month: 5 times cheaper than Google Drive and 10 times cheaper than DropBox (at the time of writing).
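
To put those numbers in perspective, here is a minimal sketch comparing monthly costs for a given archive size; the per-gigabyte prices are simply the ones quoted in this post (current at the time of writing) and are, of course, subject to change.

# Rough monthly cost comparison using the per-gigabyte prices quoted in
# this post (2012 list prices; adjust them to whatever is current).
PRICE_PER_GB = {
    "Amazon Glacier": 0.010,
    "Amazon S3 (standard storage)": 0.095,
    "Google Drive": 0.050,
}

def monthly_cost(gigabytes, price_per_gb):
    """Return the monthly storage cost in dollars."""
    return gigabytes * price_per_gb

archive_gb = 1000  # one terabyte, as in the figures above
for service, price in PRICE_PER_GB.items():
    print(f"{service}: ${monthly_cost(archive_gb, price):.2f} per month")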

But Glacier is a different kind of beast. For starters, Glacier requires you to keep track of a Glacier-generated archive identifier for every file you upload. Basically, it acts like a gigantic database where you store your files and retrieve them by key. No fancy user interface, no typical file system hierarchies such as folders to organize your content.

Glacier's design philosophy is great for system integrators and enterprise applications using the Glacier API to meet their storage needs, but it certainly keeps the average user away from it.

Glacier Can Be Used as a New Storage Class in S3

Even if Glacier was meant and rolled out with enterprise users in mind, at the time of release the Glacier documentation already stated that Glacier would be seamlessly integrated with S3 in the near future.

S3 is the cloud storage web service which pioneered cloud storage offerings, and it's as easy to use as any other consumer-oriented cloud storage service. In fact, if you're not willing to use the good S3 web interface, lots of S3 clients exist for almost every platform. Many of them even let you mount an S3 bucket as if it were a hard disk.

In the past, the downside of S3 for backup scenarios has always been its price, which was much higher than that of its competitors: 1 terabyte costs approximately $95 per month (for standard redundancy storage).

The great news is that now that Glacier has been integrated with S3, you can have the best of both worlds:
  • You can use S3 as your primary user interface to manage your storage. This means that you can keep on using your favourite S3 clients to manage the service.
  • You can configure S3 to transparently move content to Glacier using lifecycle policies.
  • You will pay Glacier's fees for content that's been moved to Glacier.
  • The integration is completely transparent and seamless: you won't need to perform any other kind of operation; your content will be transitioned to Glacier according to your rules and it will always be visible in your S3 bucket.

The only important thing to keep in mind is that files hosted on Glacier are kept offline and can be downloaded only if you request a "restore" job. A restore job can take up to 5 hours to be executed, but that's certainly acceptable in a non-critical backup/restore scenario.

How To Configure S3 and Use the Glacier Storage Class

The Glacier storage class cannot be used directly when uploading files to S3. Instead, transitions to Glacier are managed by a bucket's lifecycle rules. If you select one of your S3 buckets, you can use the Lifecycle properties to configure seamless file transitions to Glacier:

S3 Bucket Lifecycle Properties

In the previous image you can see a lifecycle rule of one of my buckets, which moves content to Glacier according to the rules I defined. You can create as many rules as you need, and rules can contain both transitions and expirations. In this use case, we're interested in transitions:

S3 Lifecycle Rule - Transition to Glacier

As you can see in the previous image, the aforementioned S3 lifecycle rule instructs S3 to migrate all content from the images/ folder to Glacier after just 1 day (the minimum amount of time you can select). All files uploaded into the images directory will automatically be transitioned to Glacier by S3.
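
If you prefer scripting to clicking through the console, the same kind of rule can also be created programmatically. Here is a minimal sketch using Amazon's Python SDK (boto3, which is more recent than this post); the bucket name and prefix are placeholders, and it assumes your AWS credentials are already configured:

# Minimal sketch: create a lifecycle rule that transitions objects under
# images/ to the Glacier storage class one day after creation.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-backup-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-images-to-glacier",
                "Filter": {"Prefix": "images/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 1, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)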

As previously stated, the integration is transparent and you'll keep on seeing your content in your S3 bucket even after it's been transitioned to Glacier:

S3 Bucket Showing Glacier Content

Requesting a Restore Job

The seamless integration between the two services doesn't end here. Glacier files are kept offline, and if you try to download them you'll get an error instructing you to initiate a restore job.

You can initiate a restore job from within the S3 user interface using a new Action menu item:

S3 Actions Menu - Initiate Restore

When you initiate a restore job for part of your content (of course you can select only the files you need), you can specify the amount of time the content will be kept online, before being automatically migrated to Glacier again:

S3: Initiating a Restore Job on Glacier Content

This is great since you won't need to remember to transition content to Glacier again: you simply ask S3 to bring your content online for the specified amount of time.
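
Again, the same request can be made programmatically. A minimal sketch with boto3 (bucket name, object key and the number of days are just placeholders):

# Minimal sketch: ask S3 to restore an object that has been transitioned
# to Glacier, keeping the restored copy online for 7 days.
import boto3

s3 = boto3.client("s3")

s3.restore_object(
    Bucket="my-backup-bucket",      # placeholder bucket name
    Key="images/IMG_0001.NEF",      # placeholder object key
    RestoreRequest={"Days": 7},     # how long the restored copy stays online
)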

Conclusions

This post quickly outlines the benefits of storing a backup copy of your important content on Amazon Glacier, taking advantage of the ease of use and the affordable price of this service. Glacier integration in S3 enables any kind of user to take advantage of it without even changing an existing S3 workflow. And if you're new to S3, it's just as easy to use as any other cloud storage service out there. Maybe Amazon's applications are not as fancy as Google's, but their offer is unmatched today, and there are lots of easy-to-use S3 clients, either free or commercial (such as Cyberduck and Transmit if you're a Mac user), and even browser-based S3 clients such as plugins for Firefox and Google Chrome.

Everybody has files to back up, and many people are unfortunately unaware of the intrinsic fragility of typical home-based backup strategies, let alone users who never perform any kind of backup. Hard disks fail, that's just a fact; you just don't know when it's going to happen. And besides hard disk failures, other problems may appear over time, such as undetected data corruption, which can only be addressed using dedicated storage technologies (such as the ZFS file system), all of which are usually out of reach for many users, either because of their cost or because of the skills required to set them up and manage them.

In the last 6 years, I've been running a dedicated Solaris server for my storage needs, and I've bought at least 10 hard drives. When I projected the total cost of ownership of this solution, I realised how much money Glacier would allow me to save. And it did.

Of course I'm still keeping a local copy of everything because I sometimes require quick access to it, but I've reduced the redundancy of my disk pools to the bare minimum, and I still get a good night's sleep because I know that, whatever happens, my data is safe on Amazon's premises. If a disk breaks (it happened a few days ago), I'm not worried about array reconstruction any longer, and I just use two-way mirrors instead of more costly solutions. I could even give up using a mirror altogether, but I'm not willing to reconstruct the content from Glacier every time a disk fails (and that's going to happen at least once every 2-3 years, according to my personal statistics).

So far, I've never needed to restore anything from Glacier, but I'm sure that day will eventually come. And I want to be prepared. You should want to be as well.

P.S.: Ted Forbes has cited this blog post in Episode 118 (Photo Storage with Amazon Glacier and S3) of The Art of Photography, his excellent podcast about photography. If you don't know it yet, you should check it out. Ted is an amazing guy and his podcast is awesome, with content that ranges from tips and techniques to interesting digressions on the art of photography. I've learnt a lot from him and I bet you will, too.

Friday, August 17, 2012

Night Photography: A Tip to Photograph Stars (and Other Point Light Sources)

Mastering night photography is not that difficult; nonetheless, it has its own peculiarities you should be aware of. In this blog post we will see how one of the basic rules we learn about exposure is no longer valid when shooting stars.

One of the first things you certainly learnt when you started studying photography was how exposure is determined by three parameters: aperture, shutter speed and ISO sensitivity. Each one has multiple effects on the final result (most notably depth of field, motion blur and noise), but each one can be used to determine how much light enters your camera and reaches the sensor. Aperture, though, has a peculiarity: it's not an absolute measure, but a relative one. In fact, strictly speaking the f-number is not a measurement: it's a pure number. The f-number N is the ratio between the focal length f of the lens and the diameter r of the entrance pupil:

N = f / r

Basically, the luminance (the "brightness" of the resulting image) depends only on the relative aperture and not on the absolute value of either lens parameters alone. In fact, when evaluating exposure, you just use the f-number: no matter the focal length or, more generally, no matter which lens you're using, if the f-number is the same, exposure is going to be the same. If you stop your aperture up or down, exposure will stop up or down accordingly.

This is true most of the time and is a consequence of the physical model of an optical system such as a single-aperture camera (or the human eye). We've seen many times the equation

EV = log₂(N² / t)

which summarizes this basic rule: the exposure value (in stops, hence the logarithm) is proportional to the square of the aperture N and inversely proportional to the time t the shutter remains open.
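
As a quick worked example of the reciprocity this rule expresses, here is a minimal sketch (the aperture and shutter values are arbitrary):

import math

def exposure_value(f_number, shutter_time_s):
    """EV = log2(N^2 / t): two settings with the same EV admit the same light."""
    return math.log2(f_number ** 2 / shutter_time_s)

# Stopping the aperture down one stop while doubling the exposure time leaves
# the exposure value practically unchanged (the tiny difference comes from
# f/2.8 being a rounded value for 2*sqrt(2)).
print(exposure_value(2.8, 1 / 30))   # ~7.88
print(exposure_value(4.0, 1 / 15))   # ~7.91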

What Happens When Shooting Point Light Sources?

When shooting stars or, more generally, point light sources, however, the model changes and this result is no longer valid. A point light source, in this context, will be defined as a source of light whose size in the resulting image is smaller than or equal to one pixel. Perfectly in focus, and depending on your sensor's resolution, some stars and planets may in fact appear bigger than one pixel, but not by much. Hence, this approximation can be considered good enough.

The reason why this happens is not complicated, but it requires some knowledge of mathematics and physics; since a photographer is usually only concerned with results and the rules to apply, I'll just provide a very summarized and intuitive explanation.

Let's start with a couple of admittedly rough analogies. It's absolutely intuitive that the farther you are from a sound source, the fainter the sound you perceive. It's also intuitive that when shooting with a flash, the farther you are from the subject, the fainter the light that reaches it and, hence, the fainter the light reflected back to your camera sensor. Now: why doesn't a similar effect exist when shooting any ordinary subject? A picture is produced by the light reflected by the subject: why isn't exposure affected by the distance from it?

It turns out it's a consequence of two competing phenomena which, under certain circumstances, "balance" each other and cancel out the contribution of distance. It also turns out that the result is the general, well-known law we were talking about at the beginning of this article: hence the importance of the relative aperture, the f-number, in the field of photography.

On the other hand, when shooting point light sources (as the majority of stars in the night sky can be considered), the two competing phenomena no longer balance each other. In fact, one of the two practically disappears and the focal length f of the lens doesn't affect exposure any more. In this case, the result is similar to what we described in the analogies above: luminance decreases with the distance from the subject but, much more importantly, it is proportional to the area of the entrance pupil. With f gone from the equation, the result depends solely on r² (a quantity proportional to the area of the entrance pupil) and not on the relative aperture. This fact is somewhat intuitive, if you think about it: the larger the area of the entrance pupil, the more light it can gather. Seen from this point of view, in fact, the usual rule is probably less intuitive: lens configurations with the same aperture N may have entrance pupils of different sizes. Why, then, do they give the same exposure? That's because of the two competing phenomena we were talking about, but we won't go into the mathematical details.

Since

r = f / N

it turns out that focal length does affect the final exposure, given N.

How? This model predicts that increasing the focal length of the lens while keeping the aperture N fixed increases exposure, since it increases the area of the entrance pupil. Although you won't usually be shooting skies with long lenses, you can take advantage of this fact to shorten exposure times, especially taking into account that the detected luminance varies with r² and, hence, with f², the square of the focal length.

Some estimates are quickly made: if you increase the focal length from, say, 18 mm to 35 mm (at the same aperture), you'll increase the quantity of light reaching the sensor by a factor of

(35 / 18)² ≈ 3.8

that is, roughly 2 stops.
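
The same estimate, as a minimal sketch you can reuse for any pair of focal lengths at a fixed f-number:

import math

def point_source_gain(f1_mm, f2_mm):
    """Light gathered from a point source scales with the entrance pupil
    area, i.e. with the square of the focal length at a fixed f-number."""
    factor = (f2_mm / f1_mm) ** 2
    stops = math.log2(factor)
    return factor, stops

factor, stops = point_source_gain(18, 35)
print(f"{factor:.2f}x more light, about {stops:.1f} stops")
# -> 3.78x more light, about 1.9 stops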

It's important to realize that this effect applies only to point light sources, that is, small stars whose size in the picture is comparable to, or smaller than, the size of a pixel. It doesn't apply to the moon, to bigger stars and planets and not even to the sky itself. Nevertheless, it's a good trick to know if you want to maximize the number of visible stars in your picture.

Sometimes you may be tempted to stop down the aperture to get better focus at infinity, especially when the lens you're using hasn't got a hard stop at infinity (many cheaper lenses, such as most Nikkor DX lenses, do not). In this case, instead of indiscriminately or heuristically stopping down the aperture, use the hyperfocal distance (which we talked about in a previous post) to get a good focus lock at infinity and determine exactly the depth of field you need. If you can, open your lens as wide as possible.
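
As a reminder, here is a minimal sketch of the usual hyperfocal distance approximation, H ≈ f²/(N·c) + f, where c is the circle of confusion (roughly 0.02 mm for an APS-C sensor); the example values are arbitrary:

def hyperfocal_distance_mm(focal_length_mm, f_number, coc_mm=0.02):
    """Hyperfocal distance using the standard approximation
    H = f^2 / (N * c) + f, with the circle of confusion c in mm."""
    return focal_length_mm ** 2 / (f_number * coc_mm) + focal_length_mm

# Example: an 18 mm lens at f/2.8 on an APS-C body
h = hyperfocal_distance_mm(18, 2.8)
print(f"Hyperfocal distance: {h / 1000:.2f} m")  # ~5.80 m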

Sunday, August 12, 2012

Adobe Photoshop Lightroom Tutorial - Part XXIV - Organising Your Photo Catalog Using Metadata and Keywords

Part I - Index and Introduction

Metadata, in one of its simplest forms, is defined as "data about data". In the case of photography, for example, you may think of the EXIF data attached to your images: it provides technical information (camera settings, geolocation information, etc.) about the picture. Depending on the tool you use, you can go beyond what's provided by standards (such as EXIF or IPTC) and provide your own metadata.

What's the point of using metadata? The basic idea is organizing your images so that you can search them based on some criteria. For example, you may want to search for images shot with a specific camera or with a specific lens; or you may be looking for pictures taken at a certain shutter speed, aperture or geospatial coordinates. Or you may want to search for pictures using non-technical criteria, such as a portrait shot at a wedding and processed in black and white. Can you imagine what your Internet experience would be if search engines didn't exist? You couldn't find your way to the information you're looking for, and the very concept of the "Internet" as you know it would lose its meaning. The same thing happens with your photo catalogs. How could you possibly find something if you couldn't search using the criteria you need? Amateur photographers with small catalogs may be able to find the pictures they're looking for by manually scanning the catalog, or by trying to remember which folder or collection a picture is in. But as your catalogs grow larger and larger, things get worse and the problem becomes insurmountable. That's why products exist which provide the tools you need to overcome this problem. In fact, there's a dedicated category of such products, image management databases, and Adobe Photoshop Lightroom is one of them.

If you're using Lightroom, you already know your images are stored into a catalog which acts as a "proxy" between you and the images managed by Lightroom. The catalog is basically a database which stores additional information (metadata) alongside your images. Such metadata makes the database searchable, so that you can look for images using search criteria. Lightroom, in this respect, is extremely helpful and powerful in that:
  • It comes with out-of-the-box support for an extensive set of well known or frequently used metadata (such as ratings, EXIF and IPTC).
  • It lets you extend the metadata model using your own keywords.
  • It lets you easily build search criteria mixing and matching any type of searchable field.
  • It lets you define smart collections, that is, collections of pictures whose content is defined by a search filter and automatically kept up to date.

These are just the most important features provided by Lightroom, and we'll discover more of them in the following sections.

Flags, Ratings and Labels

The simplest forms of metadata you can catalog your images with are flags, ratings and labels:
  • Flags are used to pick or reject an image.
  • Ratings are used to rate images on a scale from 0 to 5 stars.
  • Labels are one of 5 color codes (Red, Yellow, Green, Blue and Purple) that can be assigned to an image.

While the meaning of flags and ratings is pretty well defined, the meaning of labels can be customized by the user. By default, labels are just "colours", but their names, and thus their meanings, can be customized to be meaningful to the user. Lightroom, for example, provides an additional naming scheme for labels, inherited from Adobe Bridge, that uses the following convention:
  • Red: Select
  • Yellow: Second
  • Green: Approved
  • Blue: Review
  • Purple: To Do

You're free, however, to assign your own meanings to colour labels. In my workflow, for example, I just use the three common traffic light colours (red, yellow and green) to move images through the undeveloped, partially developed and done states.

In the following picture, you can see a screenshot of some pictures in my catalog. Three of them are flagged because I picked them, rated (with 4, 3 and 4 stars respectively) and labelled green (because I finished processing them). Another image is unflagged (it's neither rejected nor picked), unrated (0 stars) and partially developed (yellow label).

Flags, ratings and labels

The basic rating and labeling metadata are flexible, easy to use and can be adapted to almost any development workflow. In my workflow, for example, flags are used before starting to develop images in order to pick only the pictures eligible for development. Images to be deleted are marked as rejected and deleted pretty soon. Images I'm not sure about are left unflagged even if, eventually, they'll either be picked or rejected (and thus deleted). Ratings are usually applied at the end of the development process and are usually immutable, while colour labels are just a visual aid to quickly identify pictures I should be working on. Eventually, when I finish developing a folder (or collection), all pictures will be labeled green.

Metadata

Images can be assigned metadata of many kinds. Lightroom supports many kinds of metadata, including:
  • EXIF
  • IPTC
  • DNG
  • Location
  • Metadata defined in a custom plugin
Lightroom can also read proprietary metadata (such as proprietary EXIF extensions) found on an image, but in this case it usually gives no way to modify it. In fact, Lightroom won't even read all proprietary metadata: if you're interested in reading a field not visible in Lightroom, you should look at the excellent ExifTool by Phil Harvey (a command line tool I will probably write a post about in the future).
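
As an aside (this is outside Lightroom), here is a minimal sketch that shells out to ExifTool from Python to dump every tag it can read from a file, maker notes included; it assumes the exiftool command is installed and on your PATH, and the file name is just a placeholder:

# Minimal sketch: dump all metadata ExifTool can read from a RAW file,
# including proprietary maker notes that Lightroom doesn't display.
import subprocess

def dump_metadata(path):
    """Return ExifTool's output for the given file, one 'Group: Tag: value'
    line per field (-G prints group names, -a keeps duplicate tags)."""
    result = subprocess.run(
        ["exiftool", "-G", "-a", path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

print(dump_metadata("IMG_0001.NEF"))  # placeholder file name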

Metadata can be inspected and modified using the Metadata panel in the Library module:

Metadata Panel

As you can see in the previous image, the Metadata panel shows information about the chosen type of metadata (in this case EXIF and IPTC) and lets you modify the writeable fields. At the topmost part of the panel, Lightroom provides some commonly used fields (Rating, Label, Title, Caption, etc.) as a convenience to speed up metadata editing.

To change the currently displayed metadata category, you just need to select the desired one in the list box in the upper left corner of the panel. In my Lightroom setup, these are the available choices:

Metadata Categories

If you're a developer, you could also extend the available metadata by writing a custom Lightroom plugin using the Lightroom SDK. Most users (even professional ones), however, will be satisfied with what Lightroom offers out of the box.

Location Metadata

Location metadata is the perfect way to record where a shot was taken. Nowadays, many cameras populate these fields using data gathered by a GPS device, as modern smartphones and GPS-equipped cameras do. Many DSLRs, however, still lack this functionality, and their pictures require the user to enter location metadata manually.

Up to Lightroom 3, location metadata was just made up of text fields, but with the latest Lightroom release (v. 4 at the time of writing) you can use the Map module to populate these fields by dragging and dropping images onto a map:

Map Module

Once an image is dropped over the map, Lightroom will automatically update its location metadata, as you can see in the following picture:

Location Metadata of a Photo

The Map module also provides the possibility of saving a location, a functionality that can greatly speed up your workflow. To save or load a location, just use the controls found in the Saved Locations panel of the Map module.

Since location metadata may contain sensitive information you want to protect, you may want to ensure that information about some locations is never exported. That might be the case for your home location, for example. To have Lightroom protect a location, you can add it to the list of saved locations and mark it as private:

New Private Location

In the New Location dialog box, you can specify a radius, which determines the area of the (circular) location you're saving, and a checkbox that can be used to mark it as private. If an image falls within a private location, the corresponding metadata will never be exported, no matter which export mechanism or publish service is used.

Applying Metadata Changes to Multiple Images

Very often you'll find yourself applying the same metadata to multiple images; this is often the case for metadata in the location, copyright, contact and workflow categories. Lightroom offers two ways to perform "bulk" metadata changes:
  • Metadata synchronization.
  • Metadata presets.

Metadata synchronization is very similar to develop settings synchronization: you apply the modifications you need to a picture and then sync other pictures with it. To sync metadata with a reference image, just select all the images to be synced, making sure that the reference image is the first image in the selection. Once the images are selected, press the Sync Metadata button in the bottom left corner of the right module panel:

Sync Buttons

Lightroom will present a form in which the fields to be synced can be chosen and copied to the metadata of the other images.

Metadata presets are a very similar concept, with the difference that metadata values are saved in a preset (instead of being copied from a reference image) and then applied to a set of images. To create a preset, select the Edit Metadata Preset item in the Metadata menu or in the Preset listbox in the topmost section of the Metadata panel. A form will be presented in which the fields to be saved in the preset can be chosen. A saved preset can be applied to one or more images by simply selecting it in the Preset listbox.

Metadata presets are handy when a set of metadata values is frequently applied to many photos. To speed up my workflow, for example, I created a preset for each of the fixed sets of metadata I commonly use, such as contact information, copyright information and the locations where I usually shoot. Metadata synchronization, on the other hand, is more suitable when many images share a common set of characteristics (same job, same event, same model, etc.) whose values aren't worth a preset that would most likely be scarcely reusable.

Keywords

Lightroom lets you apply keywords to an image. Keywords, or tags (an alternate name used in other contexts such as Flickr or Google+), are just text labels that can be searched for: hence, they provide a means for a user to freely organize the catalog using user-defined "keys". In this sense, keywords are the building blocks you use to organize your catalog "your own way". The metadata described so far, in fact, related to technical aspects of a picture or to some standard attribute set (camera settings, location, etc.). Keywords, on the other hand, are "the words you use to describe a picture" and, hence, a way to keep your catalog organized using words meaningful to you.

You may want to define, for example, keywords for each style of photography you produce (portraits, landscape, etc.), for treatments you apply (duotone, sepia, black & white, cropped, etc.), names of persons appearing in a photo, etc.

Adding and Removing Keywords

To assign keywords to a picture, just use the Keywording or Keyword List panels of the Library module. The Keywording panel, shown in the following picture, is made up of several distinct controls:
  • The Keyword Tags list box, used to change what's shown in the keywords box.
  • The keyword box, where you can see a list of keywords applied to your image, whose content depends on the current Keyword Tags selection.
  • A text box that lets you add keywords.
  • Two grids, Keywords Suggestions and Keyword Set, which provide a visual shortcut to two sets of keywords.

Keywording Panel

By default, the keyword box shows the keywords currently assigned to the selected picture(s). In case of a multiple selection, a keyword that's only assigned to a subset of the images is suffixed with an asterisk (*). To add a keyword, you just need to type it in the keyword text box and press Enter: the keyword will be added to the currently selected image(s) and, if it's the first time you use it, created with the default options. If you want to tweak the behaviour of a keyword, as described in the following section, you may want to create it manually before using it, or change its options manually later. I usually prefer creating keywords manually so as not to forget to change their options afterwards.

To speed up your workflow, Lightroom lets you quickly select keywords from two 3x3 grids: Keywords Suggestions and Keyword Set. The former contains suggestions based on recently used keywords and on the keywords currently applied to an image; you'll see that adding or removing keywords on the current image triggers a change of suggestions as well. The latter, on the other hand, is made up of a static list of keywords you can create, save and use. Using the listbox on the right side of the keyword set grid (showing Outdoor Photography in the previous image) you can select, create, edit and remove your own keyword sets. Lightroom ships with some example sets, but you should create your own, reflecting your "keywording habits" for each kind of photography you're interested in.

To remove a keyword, just deselect it from one of the suggestion grids or manually delete it from the keyword list.

Keyword List

The Keyword List panel shows a graphical representation of the current keyword tree. In fact, keywords aren't just a flat set of tags: Lightroom lets you organize keywords into a hierarchy, as shown in the following picture:

Keyword List

The quickest way to build a hierarchy out of existing keywords is to rearrange them with your mouse. If you drag a keyword over another, the former becomes a child of the latter.

But what's the point of building and maintaining a hierarchy? It's not only a matter of constraining the size of a keyword list that could otherwise grow considerably. The keyword hierarchy allows you to organize concepts in a tree, establishing an "is a" relationship with the containing keywords. In the picture above, for example, the keyword cat is the leaf of the subtree animal/mammal/cat. That is, in the hierarchy I'm using, a cat is a mammal and is an animal. This way, you don't have to tag an image three times (cat, mammal and animal) but just once: cat.
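
Here is a tiny sketch of that inheritance, just as a toy model (this is not how Lightroom is implemented, of course):

# Toy model of hierarchical keywords: tagging an image with a keyword
# implicitly applies all of its ancestors, so "cat" also means "mammal"
# and "animal".
PARENT = {"cat": "mammal", "mammal": "animal", "animal": None}

def effective_keywords(tag):
    """Return the tag plus every containing keyword up the hierarchy."""
    keywords = []
    while tag is not None:
        keywords.append(tag)
        tag = PARENT.get(tag)
    return keywords

print(effective_keywords("cat"))  # -> ['cat', 'mammal', 'animal']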

You can tweak how keywords behave in the hierarchy. In the example above, we wanted cat to imply both mammal and animal. There may be cases where you just use the hierarchy for organizational purposes and don't want a picture to automatically acquire all the containing keywords as well. Similarly, you may want to organize your catalog using certain keywords (such as names) but prevent those keywords from appearing elsewhere, such as in exported or published images. In these cases, you can just edit a keyword and specify the behaviour you need (right-clicking on it and selecting Edit Keyword Tag):

Edit Keyword Tag

In the Edit Keyword Tag window you can see how the behaviour of a keyword can be tweaked:
  • A keyword can be included in an export (or publish) operation if the Include on Export checkbox is selected.
  • A picture will inherit containing keywords if Export Containing Keywords is selected.
  • A picture will inherit a keyword's synonyms (more on this in the following sections) if Export Synonyms is selected.
If you want to prevent a keyword from being exported or published, just deselect Include on Export: no matter how you export or publish images containing this keyword, Lightroom will remove it.

Effective Keywords

We've just seen how the set of keywords applied to an image depends not only on the ones you explicitly added, but also on each keyword's behaviour and on the operation being performed. If you want to inspect the list of keywords effectively applied to an image, you can use the Keywording panel and choose the list you're interested in.

In the Keywords section we've seen that the Keywording panel features a Keyword Tags list box. Depending on your choice, the behaviour of the panel will change:
  • Enter Keywords: the default choice, whose functionality has been described in the Keywords section.
  • Keywords & Containing Keywords: if you choose this option, the panel will turn read-only and will show the list of all keywords inherited by the image (as described in the Keyword List section).
  • Will Export: if you choose this option, the panel will turn read-only and will show the list of all keywords that will be exported with the selected image.

Synonyms

In the Edit Keyword Tag window you may have noticed a Synonyms text box whose functionality we haven't described yet. Lightroom lets you associate a set of synonyms with a keyword: synonyms can be thought of as an additional list of keywords associated with a keyword, whose primary purpose is search. A synonym, in fact, won't even appear in the keyword list and can only be consulted by checking the configuration of each specific keyword.

Many users wonder when a synonym rather than a keyword should be used. That really depends on how you build your own keyword hierarchy, but here are some guidelines. You should try to keep your keyword hierarchy simple, clear and intuitive so that your workflow stays smooth. Also, keywords represent concepts and we know that, literally speaking, a pure synonym doesn't offer anything new, just an alternate spelling. That is a clear case in which a synonym should be created instead of yet another keyword.

Other times, some words just don't fit well into your keyword hierarchy. Let's take my animal hierarchy. I defined a cat as a mammal and an animal. What about pet? Or clawed? Or furry? You cannot feasibly build a hierarchy containing every nuance that can possibly come to your mind. Also, a cat can certainly be considered a "pet", but a bird can as well. Yet in my hierarchy a cat is a mammal while a bird is not. What should I do? Create a pet keyword for each of them? You can easily see how the hierarchy would become more and more cluttered if we tried to introduce these concepts. In this case, you'd better use a synonym: "pet" or "clawed" can be a synonym of cat, just as "pet" can be a synonym of bird.

If you then want a synonym to be exported along with its keyword, you can select the Export Synonyms option of the affected keyword so that it will be explicitly listed in the keyword list of an exported or published photo.

Searching and Filtering

Any kind of metadata can be used to search and filter your catalog images. In Part V of this tutorial we already described the basic search and filtering facilities of Lightroom, so we will only summarize them here.

A filter by flags, ratings or labels can easily be built using the Attribute filter bar. As you can see in the next image, you can just select the values you're interested in on the graphical user interface and Lightroom will filter the contents of the currently selected folder (or collection) according to your choices.

Attribute filter

If you want to filter using keywords, you can use multiple techniques. The first is using a Metadata filter. You can configure a Keyword column for such a filter and select the keywords you want to filter with. In the following image you can see how Lightroom intelligently makes things easy by including only the keywords used in the currently selected folder (or collection).

Metadata Filter (stacked with a Text Filter)

Another way to search using a keyword is a Text search. You can either search freely on every searchable field or narrow the choice by specifying the field you want to search (as seen in the following images).

Text Filter

Text Filter - Searchable Field

The last method you can use to filter for a specific keyword is using the Keyword List panel to build a quick filter. When you hover over a keyword with your mouse, a small arrow appears on the right of the keyword record, as highlighted in red in the following image. Pressing the arrow control makes Lightroom create a Metadata filter using the selected keyword. If you need to filter with just one keyword, this method is probably the quickest one.

Keyword List

Filters can also be stacked together to build even more complex search queries, mixing and matching criteria built on any metadata field that Lightroom manages:

Multiple Filters Stacked Together

Conclusions

As we've seen, Lightroom provides excellent management capabilities and you can keep your catalog perfectly organized with very little effort. The point of an image management database is just this: making your ever growing catalog manageable. There would be no point in storing thousands of images if you couldn't effectively use the tool to quickly retrieve what you're looking for.

Some photographers start using this kind of tool without realising its real potential, nor the issues they'll start experiencing when their catalogs outgrow a few hundred images. Lightroom, furthermore, is such an excellent tool for developing your RAW files that it's easy to forget it is a catalog manager as well.

That's why it's very important to learn about these features soon and start applying them to your regular workflow. The sooner, the better.

Protect Your Privacy

I want to stress once more how Lightroom can help you protect sensitive data (keywords and location information) so that you don't accidentally publish them.

Protecting location information is probably easier because, even if geolocation data is often added automatically and you may forget about it, Lightroom gives you a simple solution to this problem: just create a private location, specifying its centre and radius, and forget about it.

In the case of keywords, marking them as not exportable is up to you, and you may easily forget to do so, especially if you get used to creating them on the fly from the Keywording panel. Lightroom, furthermore, does not manage subject names separately as some other tools do, and this induces users to define a keyword hierarchy for them. If you forget to configure each and every keyword according to your needs, private data may unexpectedly leak.

If you want to help me keep on writing this blog, buy your Adobe Photoshop licenses at the best price on Amazon using the links below.

Sunday, July 29, 2012

Notes on Dynamic Range, Gamma Correction and the Importance of Shooting RAW

Some months ago I wrote a quick blog post titled "Tones and Dynamic Range. Why You Should Shoot RAW". In that post I quickly analysed the shortcomings of files with low bit depths (such as 8 bits) and tried to give a valid reason why a photographer should always shoot RAW. Unfortunately, the limitations of the blogging platform and the time constraints I have make writing this kind of content awkward, to say the least.

Here's a document featuring the same content, although better organized.

Notes on Dynamic Range, Gamma Correction and the Importance of Shooting RAW v. 1.2 (PDF)

Monday, July 23, 2012

Adobe Photoshop Lightroom Tutorial - Part XXIII - Understanding Channel Mixing to Achieve Effective Black and White Photos

Part I - Index and Introduction

Converting a photo to black and white may seem one of the easiest things you can do with your photo editing software of choice. Unfortunately, nothing could be further from the truth, and if you don't do it correctly you may end up with dull images, very different from what you thought you'd get.

The Basic Fallacy: Zeroing the Color Saturation (Depending on the Tool You Use)

An approach I often see used to convert an image to black and white is zeroing the color saturation. What's worse, I sometimes hear theories about ill-defined advantages of this technique. You get a black and white image, of course. But that image doesn't represent the luminance your eyes are seeing, let alone the intuitive result you thought you'd get. Depending on the colors present in the shot and their saturation, the differences can be either subtle or very pronounced.

What's interesting to note, as we'll see in a minute, is that most differences will be noticeable on deep blues and saturated reds. Think about it: skies and some skin tones. Hardly a minor detail.

Without going into the technical details of concepts like relative luminance or luma in colorimetric spaces, a photographer should understand that the luminance values of pure RGB colors aren't all equal to the human eye. In fact, many standards try to model the behaviour of the human eye, and most photo editing programs provide us with the tools we need to achieve predictable results, according to how the human eye works. Just for the sake of example, here's the luminance row of the transform matrix from RGB to the CIE 1931 XYZ color space, where Y is the luminance:

Y = 0.17697·R + 0.81240·G + 0.01063·B

As you can see, red's contribution to luminance is more than 4 times smaller than green's (0.17697 vs. 0.81240), yet more than 16 times greater than blue's (0.17697 vs. 0.01063).

Another example can be seen in the transform matrix from RGB to the Y'UV color space:

Y' = 0.299·R + 0.587·G + 0.114·B

We can see the contribution of each RGB channel to the value of Y' (luma, a gamma-compressed luminance). Once again, the contribution of red is smaller than green's (0.299 vs. 0.587) and bigger than blue's (0.299 vs. 0.114).

What can we infer from this? That green brings the greatest contribution to the luminance value, followed at a distance by red, and finally by blue. In other words, given three pure RGB colors with the same component value:
  • Green will be much brighter than the others,
  • Red will be much darker than green but brighter than blue,
  • Blue will be the darkest of all.


What happens, then, when you zero the RGB color saturation?

What happens is simple: the lower the saturation, the more every color tends toward the same grey value. We're not interested in knowing which grey tone exactly; the important thing is that colors which originally had very different luminances end up looking the same.

Here's a visual example, better than a thousand words: in the following images you can see the same color chart with different values of color saturation: 100%, 50%, 25% and 0%.

Color Chart - Saturation: 100%

Color Chart - Saturation: 50%

Color Chart - Saturation: 25%

Color Chart - Saturation: 0%

What's clear from this example is that zeroing the color saturation is not what you want when converting to black and white. In fact, to achieve balanced and realistic results, or results that at least faithfully represent what your eyes are seeing, you'd expect the relative luminance of different RGB colors to be respected. That is, red should be a darker shade of grey than green, and blue an even darker one.

What can you do, then? Simple: use the tools provided by your photo editing program and forget about the saturation adjustment.

To tell the truth, this also depends on the tool you use. The saturation control of some photo editors, such as Lightroom, behaves differently and, in fact, applies a default mix when desaturating colors. Even basic photo editing programs have a "Convert to black and white" feature which will hopefully do a better job than you can do with the saturation slider. More sophisticated programs may offer better tools, the "channel mixer" being the most interesting and flexible from a photographer's point of view.

Channel Mixing

As we've seen, it's necessary to "mix" the RGB values with certain coefficients to simulate the behaviour of the human eye, as far as the relative luminance of RGB colors is concerned. That's exactly what the channel mixer can do for you, and that's why it's an omnipresent feature of good photo editing programs, although little known to many amateurs.

The channel mixer simply lets you change the coefficient with which each color contributes to the luminance. Raising a color's coefficient means that its contribution will be bigger, and it will thus appear brighter; lowering it means that its contribution will be smaller, and it will thus appear darker.
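
As an illustration outside Lightroom, here is a minimal sketch of a channel mix applied to an RGB image with NumPy; the default weights are the Y'UV luma coefficients quoted above, and tweaking them has the same effect as moving a channel mixer slider:

# Minimal sketch: convert an RGB image to grayscale with a channel mix.
# Raising a weight brightens that color in the result, lowering it darkens it.
import numpy as np

def mix_to_grayscale(rgb, weights=(0.299, 0.587, 0.114)):
    """rgb: float array of shape (height, width, 3) with values in [0, 1]."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()          # keep overall brightness roughly constant
    return np.clip(rgb @ w, 0.0, 1.0)

# Pure red, green and blue pixels: with this mix, green comes out brightest,
# then red, then blue -- unlike a plain desaturation, where all three are equal.
patches = np.array([[[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]])
print(mix_to_grayscale(patches))   # -> [[0.299 0.587 0.114]]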

Adobe Lightroom 4 applies a default "mix" when converting to black and white which I find pretty acceptable. However, since the tool is there to be used, I almost always tweak the mix a little to achieve the results I want. For example, I often raise the value of the red and orange channels in portraits in order to reduce speckles and imperfections and to slightly brighten the skin.

Here's the result you achieve in Lightroom 4 when desaturating the reference color chart:

Lightroom 4 - Saturation: -100

As you can see, it has done a much better job than Photoshop in this case (beware: I say "much better" from a photographer's point of view, not from a theoretical one). Now, reds and blues are darker than greens, as expected.

On the other hand, if you convert it to black and white, here's the result you get:

Lightroom 4 - Black & White - Default Mix

and here's the default mix applied by Lightroom:

Lightroom 4 - Default Black & White Mix

The result is somewhat less contrasted, and you can see how blues are slightly brighter (+9) and greens are slightly darker (-27).

The point is that there's no right or wrong here: the Black & White mix is a tool you can use to fine-tune your shot, and it's much more flexible than just zeroing the saturation. If you want a darker sky, just decrease the blue contribution; if you want brighter reds, just increase theirs.

Conclusion

The bottom line is: the channel mixer is pretty flexible and you can use it to your advantage, for example to simulate some B&W filters in post-production. I often increase the Red and Orange channels in portraits, especially those taken during the summer, to reduce the contrast in the subject's skin, making it brighter and removing many imperfections. In the following shot, for example, you can see how simulating a red filter has given the photo more contrast in the red highlights, resulting in smoother and brighter skin:

Red and Orange channels were increased to simulate a red filter

The same effect was used in this shot to achieve a similar result:

Red and Orange channels were increased to simulate a red filter

In the following shot, the subject was very tanned and the shot, when converted to black and white, had a look I really didn't like. Once again, using an appropriate mix, I could deliver a more natural skin tone and get rid of all the subject's speckles too.



If you want to help me keep on writing this blog, buy your Adobe Photoshop licenses at the best price on Amazon using the links below.

Saturday, July 14, 2012

Nikon Creative Lighting System Tutorial: The Basics

Nikon Creative Lighting System Tutorial: The Basics v. 1.1 (PDF)

Nikon's Creative Lighting System, in Nikon's words,
offers photographers new and unprecedented levels of accuracy, automation and control.
Looking past the marketing jargon, Nikon CLS is a set of technologies (and automations) that enable photographers to get the most out of their flash systems with minimum effort. The technologies making up CLS include:
  • i-TTL balanced fill flash.
  • Auto FP High Speed Sync.
  • Flash Value Lock (FV Lock).
  • Wide-Area AF-Assist illuminator.
  • Flash Color Communication.
  • Distance-Priority Manual Flash.
  • Modeling Flash.
  • Advanced Wireless Lighting.

As happens with most automatic mechanisms, an incomplete understanding of their behaviour might negatively (or at least counterintuitively) affect the result you obtain. In my opinion, the behaviour of several Nikon CLS technologies isn't properly documented in the camera and flash manuals and, "unfortunately", you will be using some of them every time you use a flash, including the pop-up flash of your Nikon camera.

The purpose of this document is to describe the fundamental behaviour of the basic technologies that make up CLS, such as i-TTL balanced fill flash (TTL-BL), regular i-TTL flash (TTL in this guide) and Flash Value Lock (FV lock).

Flash photography implies multiple exposures

As we've seen in Flash exposure tutorial: the basics, any time you're making a photograph you have to deal with multiple light sources which, in the context of this article, we will divide into two categories: ambient and flash. In this section we'll quickly recap the concepts presented in that tutorial.

Even if extremely faint, you will always be dealing with ambient light, which you may consider as the amount of light coming from continuous light sources outside of your control (the Sun, environmental lighting, etc.). That's the light you're used to meter when taking a shot without controlling any flash.

As soon as you turn on a flash, you're introducing new variables into the equations. The first thing you've got to realize, apart from the distinctive traits of flash light we've already covered in the previous tutorial, is that you've got control over that light source and this is where the CLS technology comes into play.

Every time you shoot a picture using flashes the resulting image will be the combination of two exposures: an exposure coming from the ambient light and an exposure coming from the flashes. Depending on the results you want to achieve, you will have to meter both light sources and configure both your camera and your flashes to achieve the desired ratio a/f between ambient light a and flash light f that's going to be received by your sensor.

In the previous tutorial we've seen how common settings (such as ISO sensitivity, shutter speed and aperture) affect a and f differently, and we've also quickly described how TTL metering automatically changes the flash power output and, in turn, how that affects a/f. The physics behind it is easy, but the resulting mechanisms are not intuitive, and that's why it's important for a photographer to know them, at least in broad outline.

The Nikon CLS system is a set of technologies that further assists the photographer in getting the results he wants more quickly and more easily. Once again, though, it's important for you to know how that technology works and the assumptions it makes: that way, you will be able to get the most out of it and you will avoid being "trapped" in situations where you're getting results you can't explain.



Which is your main light source?
Failing to correctly recognize your main light source is often the beginning of a novice photographer's problems. No matter the lighting conditions, if the main subject is poorly lit, you turn on the flash and hope the camera metering system will solve the problem for you. Whilst this is not so bad an assumption (after all, that's why the TTL and CLS technologies are there), you must realize that your gear is going to make an educated guess based on its assumptions and the lighting conditions it detects. That's a starting point, but it is seldom the correct guess.

In fact, look at the name of one of the most misunderstood CLS technologies: i-TTL balanced fill flash. Fill flash implies that the flash is not the main light source or, in other words, that ambient light is stronger than flash light. Unfortunately, although understandably, i-TTL balanced fill flash is the flash mode Nikon cameras and flashes use by default.

One of the important things we've learnt is that flash is a nearly instantaneous light source whilst ambient light is continuous. As a consequence, you can adjust the ratio a/f of their contributions, at least to a certain degree. This fact allows you, for example, to get properly exposed subjects and properly exposed backgrounds, where properly means "according to your will". TTL metering makes this easy because it automatically changes the flash power output to compensate for a change in other parameters (such as aperture or ISO sensitivity) and get a properly exposed subject.
To summarize:
Ambient light exposure is controlled by the camera metering system, while flash exposure is controlled by the flash metering system.
In other words, they're decoupled. The only "problem" with TTL metering is that novice photographers are often unaware of it, and wonder what's going on when results aren't as expected or when they're ready to take a step forward and get more creative.

To understand the different nature of the two situations and the kind of issues you may run into, let's make a quick summary of what we've seen in the previous article, trying to distinguish between them.

Flash is the primary light source
When flash is the primary light source, things are pretty easy. If you want your subject to stand out against an underexposed background, just reduce the contribution of ambient light (for example, by using a faster shutter speed) and the ratio a/f will decrease as well. On the contrary, you can increase the contribution of ambient light and the ratio a/f (for example, by using a slower shutter speed or raising the ISO sensitivity) if you want a brighter background.

Ambient light is the primary light source
When ambient light is the primary light source, flash can be used to lift the shadows of a poorly lit subject; this technique is usually called fill flash. The camera will be set to correctly expose the (brighter) ambient-lit part of the scene, and the flash metering system will fire the flash at the power required to correctly expose the subject. You could argue that the sum of the two light sources could eventually overexpose the subject: it's true, and that's one of the aspects the Nikon CLS system takes care of.

TTL vs. TTL-BL
In Nikon's jargon, TTL (regular TTL) and TTL-BL indicate how the metering systems of your camera and your flash will "react" to the lighting conditions you're shooting in. Unfortunately, the difference between the two systems is not well understood by many users and I recognize that Nikon is not doing its best to clarify the differences between the two modes in its manuals.

As we've seen, common exposure parameters may have different effects on different kinds of light sources and on the lighting conditions you're shooting in. If you shoot in regular TTL, you can manage the two exposures (ambient and flash) separately and get the results you want.

In TTL-BL mode, the metering systems will assume you want to balance the two exposures. Basically, you're telling your camera to assume that the subject is darker than the background. Nikon CLS has been improving over the years and I do recognize that you can get great results even when blindly shooting in TTL-BL all the time. However, you may sometimes get weird results when shooting TTL-BL and not meeting its assumptions, and it's important to understand why.

In the next sections we will quickly recap how the TTL and TTL-BL modes work, the assumptions the metering systems make and the decisions they take. Since I haven't yet found conclusive official documentation about the Nikon CLS internals, please take all of this with a grain of salt.

TTL flash

When using the flash in TTL mode, you're basically telling the camera and flash metering systems to manage the two exposures independently: ambient light and flash light will be metered separately and no (or little) compensation logic will be applied.

The behaviour of the TTL metering system isn't always intuitive and, once again, it is not properly documented. When using this mode, the two metering systems will meter ambient light and flash light independently.

This fact, as detailed in the previous tutorial, may lead to overexposure: both metering systems calculate a "correct exposure", so if the lighting conditions and the scene are such that neither light source is negligible in a certain area (such as on the subject itself), the two "correct exposures" will add up and may produce up to 1 stop of overexposure in that area.
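A quick back-of-the-envelope calculation (mine, for illustration) shows where that figure comes from: if ambient light alone and flash alone each produce a "correct" exposure of the subject, their sum doubles the light falling on it, and doubling the light is exactly one stop.

import math

ambient = 1.0   # ambient alone gives a "correct" exposure of the subject (normalised to 1)
flash = 1.0     # flash alone also gives a "correct" exposure of the same area
print(math.log2(ambient + flash))   # 1.0 -> a full stop of overexposure

# If the flash only contributes half as much light as the ambient, the error shrinks:
print(math.log2(1.0 + 0.5))         # ~0.58 stops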

But besides these generic problems, the behaviour of Nikon CLS may introduce new issues. Recent cameras, for example, may try to avoid the risk of overexposure by automatically dialing in a negative exposure compensation in automatic and semi-automatic modes. This reduction seems to be somehow proportional to the intensity of the ambient light. That's why you may sometimes get a background darker than expected, especially when ambient light is very bright. This also explains why TTL may not be the best mode to use for fill flash, especially when the scene is bright.

Another very important aspect of how the flash metering system works is: which part of the scene does it meter? It turns out that it meters the centre of the frame. If your subject is not in the centre when shooting, the flash metering system will be deceived and you may end up with an incorrectly exposed subject. This is the reason why Nikon CLS provides a flash value lock feature (FV lock), which we will see in the following sections.

Ultimately, in all the cases where you don't need TTL-BL (see below), you should switch to TTL. The quickest way to do that on Nikon cameras is selecting spot metering. You will then have total control over your photo and, following the advice of the previous tutorial, you will be able to get very good results, especially by tuning the relative intensity of ambient and flash light in your shots: using the camera's common exposure settings (ISO sensitivity, aperture and shutter speed) you tune how much ambient light reaches the sensor, and using flash exposure compensation you tune how much flash light lights your subject.

TTL-BL flash

Fill flash is a technique in which you use a flash to "fill" the shadows on the subject when the ambient light is brighter than the subject itself. For example, if you shoot a backlit subject, such as a person in front of a bright sky or a window, you may need to fill the shadows on the subject's face using a flash. This is the use case Nikon invented TTL-BL for: TTL-BL is meant to balance ambient light with flash and get a properly exposed and balanced background and foreground.

TTL-BL is the mode used by default (unless spot metering is selected) with both the pop-up flash and hot-shoe mounted flash units compatible with Nikon CLS. As stated in the introduction, modern TTL-BL implementations work really well even when the flash is the primary light source. However, you may get unexpected results at times; that's why you should learn about both flash modes and learn to choose the more suitable one depending on the shot you're taking.

When using TTL-BL, the two metering systems coordinate in order to achieve the desired balance of ambient and flash light. Roughly speaking, the two systems meter the light and exchange the information required for the flash to be fired at the power that will achieve the balance. Once again, though, the camera will set its parameters as if the flash weren't there; instead, the flash metering system will lower the flash output to the required level, taking into account the intensity of the ambient light. If the subject is not darker than the background, then, you will get an overexposed subject.

Flash exposure compensation can be used to fine tune the flash power output even when you shoot TTL-BL. Very often, in fact, you'd rather reduce the flash power output in order for your subject not to stand out too much in the shot or to achieve more creative moods.

Hopefully, it's now clear that the rationale behind TTL-BL flash is balancing a darker subject against a brighter background. No matter how smart your metering system might be, if you're not shooting under this assumption, you should switch to TTL instead.


Aperture priority mode in bright ambient light
As already seen in the previous tutorial, extra care must be taken when using aperture priority mode with a flash, especially in bright ambient light. In fact, aperture has a direct effect on both the ambient light and the flash light reaching the sensor and, above all, on the flash power output required to correctly light the subject. If you recall the definition of guide number, your flash will only be able to properly light a subject up to a certain distance (see the PDF version for more details).

Why does this matter so much? Because if the light gets brighter, or if the ISO sensitivity is increased, the camera will select a faster shutter speed to compensate. But there's a maximum shutter speed that can be used with a flash (unless your camera can use high speed flash sync), which can be as slow as 1/200 s. If the correct exposure requires a shutter speed faster than this value, the camera won't be able to select it and you'll get an overexposed shot.

In bright light, such as when shooting in sunlight, that's a boundary you'll hit very soon: the sunny 16 rule gives:
i = 100
s = 1/125
a = f/16

A shutter speed of 1/125 s is already close to the limit. If you open the aperture by one stop, you'll get
i = 100
s = 1/250
a = f/11
and you've just hit the maximum shutter speed for flash sync.

I bet most people won't usually be shooting in aperture priority mode at a = f/11 (or smaller apertures), and they'll surely get an overexposed shot.
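If you like to see the numbers, here is a small Python sketch of the same reasoning. The sunny 16 baseline (ISO 100, f/16, 1/125 s) is the one used above; the 1/250 s sync limit is an assumption of mine and your camera's actual limit may differ.

MAX_SYNC_TIME = 1 / 250   # assumed flash sync limit; many cameras sit at 1/200 or 1/250 s

def required_shutter_time(f_number, iso=100):
    # Equivalent exposure derived from the sunny 16 baseline above (ISO 100, f/16, 1/125 s).
    return (1 / 125) * (100 / iso) * (f_number / 16) ** 2

for f_number in (16, 11, 8, 5.6):
    t = required_shutter_time(f_number)
    verdict = "OK" if t >= MAX_SYNC_TIME else "faster than sync: overexposure"
    print(f"f/{f_number}: needs about 1/{round(1 / t)} s -> {verdict}")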

On the other hand, if you're aware of what's going on (your camera meter will indicate the overexposure) and close down your aperture, you may soon get your flash out of range. Professional speedlights (such as the Nikon SB-910) have guide numbers of around 34 meters which, at a = f/16, gives a maximum distance r from the subject of approximately 2 meters:
r = g/a = 34/16 ≈ 2.1 m


Clearly, a very short flash range.
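For reference, here is the same guide number arithmetic as a tiny Python sketch. The formula distance = guide number / f-number is the standard definition; the square-root ISO scaling is the usual rule of thumb.

import math

def max_flash_range(guide_number_m, f_number, iso=100):
    # Guide number = distance x f-number at ISO 100, so distance = GN / f-number.
    # Doubling the ISO buys you sqrt(2) more range.
    return (guide_number_m / f_number) * math.sqrt(iso / 100)

print(max_flash_range(34, 16))        # ~2.1 m at ISO 100 and f/16
print(max_flash_range(34, 16, 400))   # ~4.3 m at ISO 400 and f/16
print(max_flash_range(34, 5.6))       # ~6.1 m at ISO 100 and f/5.6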

Why manual mode is a good choice when using TTL flash

There are several reasons why manual mode is a good choice when using a flash, especially in TTL mode.

Manual is not that challenging
The first one is that manual mode is not that challenging when shooting with a flash, for several reasons. First of all, as we've stressed several times, the camera metering system pretty much ignores the flash, even more so when it's used in TTL mode. As a consequence, manual mode lets you freely use the camera metering system to quickly evaluate the ambient light and achieve the effect you want. The flash metering system, on the other hand, will do its job and properly expose your subject.

The flash freezes the movement
The nearly instantaneous burst of light emitted by the flash will freeze the movement of your subject and, within a reasonable range of settings, you won't have to worry too much about shutter speeds or about getting a blurred subject. If you remember what we saw in the other tutorial, the shutter speed usually has no effect on the amount of flash light reaching the sensor. This is a "degree of freedom" when using manual mode with flash: you can set ISO and aperture according to the flash power output you need and then use slow shutter speeds, even when no tripod is used. In fact, you may get interesting and creative results: with wide apertures, a slight blur in the background will be nearly indistinguishable from a shallow depth of field.

No need to use "slow sync"
When using automatic or semi-automatic modes with a flash, cameras limit the shutter speed to a minimum, usually around 1/60 s. Such a speed is often too fast for the sensor to gather enough ambient light, and you get the typical "white ghost" (your subject) over a dark background. To override this behaviour, you need to choose the slow sync mode: your camera will then allow slower shutter speeds.

But why? I think the reasoning behind that behaviour is that the camera is trying to prevent ghosting in your shot: 1/60 s is fast enough to freeze a still subject when shooting handheld. It's sort of an "error prevention" mechanism you can override if you want to.

However, in the previous section we've seen you can use the instantaneous flash light to freeze your subject movement when using slow shutter speeds. Should you get unacceptable ghosting, just increase your shutter speed in manual mode and you're done.

Use flash exposure compensation and exposure compensation interchangeably
This is an ergonomic advantage I like when using manual mode with TTL flash. As we've seen, photographers can use two mechanisms to compensate exposure: exposure compensation and flash exposure compensation. The former acts on both light sources and effectively changes the camera settings to reduce ambient light and flash power output accordingly. The latter acts only on the flash power output, and is commonly used to fine tune the ratio between the flash light on the subject and the ambient exposure.

But what happens when you're using manual mode? In manual mode, exposure compensation just changes the value shown by your camera meter and has no effect on the camera settings (beware that you must also disable auto ISO). As a consequence, exposure compensation will only affect the flash power output and will give a result similar to what you'd obtain using flash exposure compensation instead.

Flash value lock (FV lock)

No matter which flash mode you're using, the flash metering system always meters the centre of the frame. This is a very important thing to know if you want to get predictable results when shooting with a flash.

The problem is somewhat analogous to what happens when you set a parameter and then recompose. Most photographers are aware of the risks of recomposing when using automatic features such as exposure metering and autofocus. Unless you somehow tell the camera what your subject is, you won't get predictable results.

The same thing happens with the flash metering system. Since it measures reflected flash light at the centre of the frame, when your subject is not in the centre its exposure will likely be incorrect. This problem can be amplified by the fact that, very often, after recomposing the centre of the frame contains a farther background or a nearer foreground object. In either case, the flash metering system will be deceived and, because of the inverse-square law (see Flash exposure tutorial: the basics), its reading can greatly differ from the correct one. As a consequence, you may get strongly overexposed or underexposed subjects.
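To get a feel for how large the error can be, here is a small illustrative calculation based on the inverse-square law; the distances are made up for the example.

import math

def flash_metering_error_stops(subject_distance_m, metered_distance_m):
    # Flash illumination falls off with the square of the distance, so metering a point at a
    # different distance than the subject is off by 2 * log2(metered / subject) stops.
    return math.log2((metered_distance_m / subject_distance_m) ** 2)

# Subject at 2 m, but after recomposing the centre of the frame sits on a wall 4 m away:
print(flash_metering_error_stops(2, 4))   # +2 stops of flash power -> overexposed subject

# Centre of the frame on a chair 1 m away instead:
print(flash_metering_error_stops(2, 1))   # -2 stops -> underexposed subject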

Similarly to what happens with exposure lock and focus lock, a third kind of lock is provided: flash value lock. Using flash value lock while pointing at your subject lets your camera meter a flash burst and lock the flash power output level. You can then recompose your shot and still get the proper flash output. Furthermore, while the flash value is locked, you will be able to take a burst of photographs using exactly the same flash power output, obtaining consistent results.

Depending on your camera settings, the locked flash value will be retained until
  • you unlock it, pushing FV lock again,
  • the metering system timeout elapses (by default, it's a few seconds on most cameras),
  • the camera is turned off.

For this reason, check your camera viewfinder and make sure the lock hasn't been released before taking the picture.

Using the FV lock burst to your advantage
I find that the FV lock flash burst can also be useful to "prepare" your subjects' eyes for the main flash burst. After the lock burst, you can let your subjects blink and get used to it. Then you can take the shot, reducing the chances that someone blinks just when the main flash burst is emitted.


Examples

Here are some examples to better understand how we can take advantage of both metering systems, of Nikon CLS technology and, thus, of our flash.

TTL-BL fill flash

First of all, let's see an example of how we can use Nikon TTL-BL to get a shot with a well-balanced subject. The subject in the following figure was partially in the shadow of a palm tree and was strongly backlit. Since ambient light was the main source of light and the subject was darker (at least partially), we needed some fill flash. Hence, I switched my camera to matrix metering, set the flash to TTL-BL, locked the flash value while pointing at my subject and took the shot. Since ambient light was very strong and strong reflections were coming from the pool, I dialed in an exposure compensation of -0.7 EV. The result is a well-balanced image with a properly exposed background and the shadows on my subjects' faces partially lifted by the fill flash. In this case, I'd probably dial in another -0.7 EV of flash compensation so that my subjects don't "pop out" as much, but that's just a matter of taste.


Matrix metering, TTL-BL, Exposure compensation: −0.7 EV 



TTL flash in manual mode
In the following figure we can see what happens when we turn on the flash and take the photo using the settings suggested by the camera metering system, with matrix metering and the flash in TTL-BL.

Matrix metering, TTL-BL    

In this case the ambient light was moderately strong, even if we were in the shade, and the subject was as lit as most of the background (excluding the sky). The poor child is much too lit: he's popping out of the photo as if he were a ghost. Also, the background is ugly, so we can take advantage of manual mode and TTL flash to darken it a bit and lower the ratio a/f between ambient light and flash light.

In the following figure you can see the result of shooting in manual mode with flash in TTL mode locked onto our subject.

Manual mode, TTL, f/7.1, 1/125 s., ISO 320 

With the specified parameters (f/7.1, 1/125 s., ISO 320) the camera was metering an underexposure of almost -1 EV, so that the background would be 1 stop darker. On the other hand, the flash was locked onto the subject and flash compensation was set to -2/3 EV. The result is a dark background and a brighter subject, although not as bright as before because of the negative flash compensation I dialed in.


Why? Once again, just a matter of taste. Most of the time I prefer reducing the flash output to reduce the "ghost" effect you get when the flash output is too strong. In case you need more output, just change the compensation: that's part of the beauty of Nikon CLS in manual mode.


In the previous example we wanted to achieve a darker background and we used the camera in manual mode with the flash in TTL mode to modify the ratio between ambient and flash light a/f accordingly. In the following figure we use the same technique to increase the a/f ratio. Since I wanted a bright background for this photo and had no other lighting equipment with me, I put the subjects in front of a white wall lit by artificial light while leaving them in partial shade. On the one hand, I set the camera parameters to slightly overexpose the wall and get a bright white background; on the other hand, I locked the flash value on the baby's face to properly expose it with a little fill.


Manual mode, TTL, f/7.1, 1/30 s., ISO 400 


Freezing action
In the following figure you can see how we can take advantage of the manual mode and the flash TTL mode to get:
  • a not-so-dark background,
  • a properly lit subject and
  • frozen action.


The child was moving in a low light situation and I couldn't have frozen the action without the flash, short of severe underexposure. Manual mode lets us use shutter speeds as slow as we want in order to gather the desired amount of ambient light: in this case, 1/10 s. was enough to make the background light spots sufficiently visible. However, 1/10 s. is too slow a speed to properly freeze the movement of a human being at a focal length of approximately 60 mm, let alone a child who's playing. The flash burst duration, on the other hand, is much shorter (it depends on the flash power, but it's approximately 1/1000 s.) and it can freeze action quite effectively. In fact, although you can still see a ghosting effect around the child's arms and shoulders, the flash has frozen the child's action pretty well.

Manual mode, TTL, f/8.0, 1/10 s., ISO 1600 

Since the background lights were very dim, I had to push the sensitivity up to ISO 1600 to properly expose them at a shutter speed of 1/10 s. Of course, I could have used slower shutter speeds, but the resulting ghosting effect would have been too strong to be acceptable.
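As a closing, purely illustrative note, a rough blur estimate shows why the flash freezes the action while the 1/10 s ambient exposure doesn't; the subject speed here is an assumption of mine.

# Rough blur estimate: blur distance = subject speed x effective exposure time.
subject_speed_m_s = 0.5        # assumed: a playing child's arm moving at ~0.5 m/s
ambient_exposure_s = 1 / 10    # shutter speed used for the ambient exposure
flash_duration_s = 1 / 1000    # approximate duration of the flash burst

print(subject_speed_m_s * ambient_exposure_s * 1000, "mm of blur from ambient light")  # ~50 mm: visible ghosting
print(subject_speed_m_s * flash_duration_s * 1000, "mm of blur from the flash")        # ~0.5 mm: effectively frozen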

Nikon Creative Lighting System Tutorial: The Basics v. 1.1 (PDF)