Tuesday, 21 December 2010

Extending the Frame with Time

Photography is best known for capturing a specific moment in a split second
to freeze time and reveal what the naked eye cannot see. Digital photography
can also be used to explore the passage of time with time-lapse and stop
motion photography, video, and sound.

Time-lapse Photography

Photography is used for a variety of purposes: to record a dramatic sunset, chronicle the human condition, or focus on the personal. Still photography can also reveal what we simply cannot see—from the grandeur of outer space, to the incredible detail of macro photography, to the unseen passage of time that we are all a part of. Time-lapse photography captures slices of time that you then render as a movie file to speed up the passage of time. Time-lapse photography has come a long way since we were in third grade when the teacher showed the class scratchy, flickering movies of crocuses breaking through the soil. It was fascinating to see how the plants twisted and turned, and pale stems were transformed into full blossoms. That is the magic of time-lapse photography, in which a series of photographs of the same scene, taken at regular intervals over a period of time, is assembled and then played back as a movie file. By photographing a changing scene at regular intervals over an extended period of time, you are capturing the subtle changes of light, shadow, motion, and metamorphosis that the naked eye simply cannot see. Suitable subjects for learning time-lapse photography include sunrises and sunsets, landscapes with clouds moving through the sky, incoming and outgoing tides, urban scenes, construction projects, and of course blossoming flowers.

The equipment

The equipment to do time-lapse photography ranges from point-and-shoot cameras to (as we recommend) a DSLR with a sturdy tripod and an intervalometer, which is a sophisticated cable release that controls the intervals between frames, the number of frames, and even when the camera should start to take pictures. Some cameras include an intervalometer function. To shoot for extended periods of time or longer than your camera battery can last, you’ll also need an external power supply for the camera. After shooting the source images you’ll need (for the budget-minded) QuickTime Pro, Photoshop CS4, or Photoshop CS5 Extended, or (for the sophisticated high end) Adobe After Effects or Apple Final Cut Pro to create the image sequence.

How many frames?
For a normal representation of motion, movies are shown at either 24 or 30 (really 29.97) frames per second. To create slow-motion effects, you need to play back more frames per second; for example, 60 fps will result in smooth slow motion. To create a more compressed version of time—for example, an entire sunset elapsing in a very short amount of time—you need to play back fewer frames per second. Start by deciding how long the movie should be, and then decide on a playback rate based on whether you want slow motion or a faster, compressed representation of time and motion in the scene. Let’s say you want a one-minute movie; use this formula: 60 seconds × 24 frames per second = 1,440 frames. Now you need to figure out the duration of the event. For example, when photographing sunsets, start photographing an hour before sunset and shoot until one hour after sunset. So the actual event in this case is 120 minutes, or 7,200 seconds. To calculate what interval to shoot at, divide the number of seconds by the number of frames. In this example, 7,200 seconds ÷ 1,440 frames = 5-second intervals. If you want a smoother playback, shoot more frames. For the smoothest transitions, shoot one frame per second. Keep in mind that you can always discard frames you’ve shot, but you can never use frames you didn’t capture.
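
The interval arithmetic above is easy to slip on in the field, so here is a minimal Python sketch of the same calculation (the function and argument names are ours, purely for illustration):

```python
# Time-lapse planning: how many frames to shoot, and how far apart.
# Mirrors the worked example in the text: a one-minute movie at 24 fps
# covering a 120-minute sunset (one hour before to one hour after).

def timelapse_interval(movie_seconds, fps, event_seconds):
    """Return (total frames to shoot, seconds between exposures)."""
    total_frames = movie_seconds * fps        # frames needed for playback
    interval = event_seconds / total_frames   # shooting interval in seconds
    return total_frames, interval

frames, interval = timelapse_interval(movie_seconds=60, fps=24,
                                      event_seconds=120 * 60)
print(frames, interval)   # 1440 5.0 -> shoot 1,440 frames, one every 5 seconds
```

Shooting more frames than the formula calls for only costs card space; as noted above, you can always drop frames later.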


Take these tips to the bank!
Shooting source images for time-lapse photography can be time-consuming
and tedious, so it is best to practice shooting before traveling to an exotic location
to photograph a once-in-a-lifetime event or scene. Consider the following
tips we’ve already learned. We’ve made the mistakes so you don’t have to:
• Clean the camera sensor before a shoot, especially when shooting skies!
• Turn off the LCD preview to extend battery life.
• Use the largest camera card media possible.
• Prefocus the camera in manual focus.
• Shoot RAW to have more control over color balance and exposure
adjustments.
• Shoot JPEG files when learning and to speed up post-shoot workflow.
• If you’re shooting in daytime exteriors or interior scenes with fixed lighting,
use manual exposure. However, if you’re shooting from dawn into
daylight or day into dusk, select an automatic or program exposure mode.
• When shooting late afternoon into nighttime, be sure to use a medium
ISO and a fairly wide-open lens aperture. Otherwise, you could wind up
with exposure times that are slower than the duration between each shot.
• Beware of shooting into the sun for extended periods of time during
sunrise or sunset because the lens can focus the sun’s rays into the body
of the camera and burn pinpoints into the shutter curtain.

Stop Motion Animation

In addition to time-lapse animation, DSLRs work quite well for creating stop motion animated stories. Stop motion animation is a slow and deliberate process that refers to carefully moving elements in a scene, stopping to record a frame, moving the elements slightly again, stopping to record another frame, and repeating the process over and over until you have a sequence of motion created by many photos. This very old animation method has been around almost as long as movies. Many classic monster and fantasy films, such as the original King Kong (1933) and Ray Harryhausen’s Jason and the Argonauts (1963), were filmed this way using motion picture film. Today, DSLRs have been used to create stop motion animation for such films as Corpse Bride (2005), Fantastic Mr. Fox (2009), and the Wallace and Gromit and Shaun the Sheep series of animated shorts from director Nick Park.

DSLR Video

The primary choice between making a movie and a still image is determined by the action. Imagine a photograph of a beautiful landscape. Now imagine that same scene shot with video. Hold it—hold it—hold it; if nothing happens in about 20 seconds, you’ll click onto a different Web site or flip the channel. But if a character walks into the frame or if the frame pans as industrial sounds increase, the viewer will be intrigued and will continue watching. Of course, the photographer could shoot a series of images with captions that address the environmental or social issues that this scene shows, but the videographer will engage the viewer by using a much different set of tools and skills to extend the story beyond the static frame of the still image. Generally speaking, photographers have been primarily concerned with a single frame leading to a final print of the moment that uniquely captures and freezes the light, scene, and subject in a fraction of a second. Videographers and filmmakers concentrate on telling a story through the passage of time.
A photographer is trained to respond to and capture “the decisive moment,” whereas a filmmaker uses time, motion, and sound to set the scene, introduce the characters, and develop the story with a beginning, middle, and end. The design, nomenclature, and ergonomics of a DSLR remain rooted in the photographic experience of the past. Until recently, people used a DSLR for photography and a camcorder for video. Today, photographers are intrigued with the ability to shoot video, and videographers are enamored with the ability to capture quality still images—both with the same camera and lenses. The capabilities of a DSLR as a video camera are so impressive that filmmakers are exploring their use in feature films and in HD broadcasts. The earliest results are very impressive because the fast and sharp DSLR optics allow for working in low light and with a shallow depth of field. The relatively compact design and high quality of DSLRs allow for unusual camera perspectives that cumbersome video equipment simply does not. DSLR cameras have every component of a modern, professional video camera, and those components are often better. In fact, the image sensor on many DSLRs is substantially bigger than those used in most digital camcorders. Speaking naively, all the camera makers had to do to make a still camera into a video camera was add high frame-rate capture; build in a suitable frame buffer and codec support for video file compression; and take advantage of larger and faster storage.

Technical considerations
Working in video is more than changing the camera from single frame to video capture. The technical requirements of a video production are based on the fact that in a movie, either the subject is moving through the frame or the camera is moving, or in some cases, as the director Spike Lee loves to do, both are moving. In every instance, stabilizing the camera so the image does not jump and jitter improves the effectiveness and the viewer’s enjoyment of the movie. To capture stable video, a video tripod with a fluid head that allows you to smoothly pan or tilt the camera without shakiness is an essential piece of equipment. Fluid heads use viscous fluid damping to make all camera movements as smooth as possible.
To move through a scene, skilled videographers use a Steadicam support, which looks like a medieval torture harness that the camera is attached to and the videographer wears, allowing the camera to glide through a scene. Interesting to us is that much of the technology that went into the development of image stabilization for video has been miniaturized and adapted into the image stabilization lenses that still photographers now rely on to get a steady shot. Sound quality is a critical issue to consider. In fact, viewers are more likely to watch a video with poor image quality but will have zero patience for poor sound. Currently, the microphones built into DSLRs are unsuitable for high-quality sound capture, because they pick up general ambient noise, including your breathing. To capture quality sound, specialized off-camera microphones that can be placed and focused are essential. Using a directional off-camera microphone immediately improves sound quality because it is isolated from the ambient noise of the scene, camera, and crew. This is an issue that is solved in feature-film productions by using an entirely separate audio system and a highly trained team of audio technicians. Even with that expertise and staff, location sound is usually augmented or replaced by audio recorded in sound studios. For the occasional moviemaker, we recommend shooting the scene without sound (or removing the sound in post) and adding effective music or narration as the audio track. Working in New York City has allowed Katrin to stumble upon numerous television and movie productions, and she can attest to the fact that nothing is as it appears in the movies or the TV series Law & Order.
The red-blue flicker of police cars is created with rotating gelled lights positioned far away from the action; interior shots are lit with three-story-high cranes, scrims, floodlights, and full-on generator trucks; and daylight rarely is, well, just daylight. There are a number of challenges in lighting for action. For a scene to be lit effectively, it must allow the actor to move through the entire scene with the expected exposure, without changing camera exposure. Simply put, video requires using more lights with more planning. The more complex the story, the more you have to prepare. The inherent cost of shooting a film prohibits spontaneity. The scene is staged, the interior is set with props, and the actors are rehearsed. The magic is in the hard work; the devil is in the details.

Night Photography

If you’re looking for new territory to explore with your camera, then consider leaving the familiar world of daylight behind and venturing into the darkness to explore the magical landscape of night photography. In the deep shadows and mixed lighting of the nocturnal world even ordinary locations can become mysterious and beautiful. Long exposures, motion blur, the soft glow of the moon and stars, or the colorful wash of ambient street lighting all offer many interesting possibilities for creating intriguing images. The best thing about night photography is that it requires little or no extra equipment, and many cameras, even compact models, have the necessary controls for capturing images in low light. Apart from some basic equipment, all you need is a willingness to explore and experiment, and stay out a bit later (or get up earlier) than usual.

Camera Controls

 In basic principle, night photography is no different than photographing during the day: The shutter is opened to collect an adequate amount of light so that a proper exposure can be made. The primary difference is that the shutter must be opened for much longer than an exposure made in daytime or in bright, indoor lighting. If sharpness is desired, a tripod or some other means of stabilizing the camera must be used. Before venturing into the darkness, however, it’s helpful to review some basic camera exposure information through the eyes of a nocturnal photographer.
 
 ISO

As we discussed earlier in the book, the ISO determines the light sensitivity of the image sensor. Higher ISO numbers indicate greater sensitivity in lower-light conditions, meaning that you may be able to take a handheld shot in very low light, or if you are using a tripod, that your exposure times will not be very long. Modern digital cameras are pretty amazing in their ability to capture low-noise images at very high ISO ratings, but just keep in mind that there is a chance of more noticeable noise when working at higher ISOs. The amount of noise that you will see in your photos is primarily a factor of how the image sensor responds to low-light situations, since increasing the ISO is merely an amplification of the basic data that the sensor gathers. Some sensors perform better than others, and newer cameras are better in this regard than their older counterparts. If you have the luxury of using a tripod, our recommendation would be to use the lowest ISO setting that results in exposure times of an acceptable duration. The primary factor here is a combination of your patience, how many shots you want to capture in a given time frame, and your own tolerance for noise. Seán generally uses ISO 100 when photographing at night with a tripod because this yields better image quality with virtually no noise. He increases the ISO only when the lower setting and existing light would result in overly long exposure times. While photographing at Machu Picchu on a very dark night with no moon, he opted for an ISO of 400 so that the exposure times would not get too long (this can also affect battery life). Even with an ISO of 400, his exposure times averaged about six minutes, and that included added illumination from light painting using a small flashlight.
In addition to lower noise levels, another advantage of a low ISO setting is that since it results in longer exposure times, it allows you more time to add supplemental creative lighting from a flashlight or multiple firings of an external flash.



 Exposure modes
When it comes to which exposure mode to use for night photography, there
are a few different approaches you can take:
• Automatic mode. You can leave the camera on Automatic mode and let it figure out how long the exposure should be. Although leaving the camera on Automatic mode does have its limitations, such as a wide-open aperture and a limit to the duration of the exposure time, with many cameras the results are surprisingly good. If you want to explore this route, the first item to check for is a way to turn off the flash. On most cameras, there is a flash off symbol that looks like a diagonal lightning bolt arrow overlaid with the international “No” symbol of a circle with a slash through it. Having a camera where you can override the flash is a must if you want to explore the world of night photography.
• Aperture or Shutter Priority. With a semi-Automatic mode such as Aperture or Shutter Priority, the camera calculates half of the aperture/ shutter speed exposure combination. This can be useful if you need to be sure of a specific depth of field, or if a shutter speed of a certain duration is necessary to create a certain effect, such as when you want to create a motion blur effect.
• Manual. Manual mode is our favorite way of photographing at night because it allows us total control over the exposure. We typically use the same aperture for the entire session, which lets us vary the shutter speed in search of the perfect exposure. Since night photography is often a much slower, more contemplative process than daytime photography, operating the camera in Manual mode, even if you are not used to it, is very easy.

White balance
In the past, photographing at night with film often meant unwanted color casts in the image since film is balanced for the color temperature of daylight, which is approximately 5500K. The color temperature of night scenes can vary greatly depending on the types of illumination in the scene. Moonlight and starlight tend to produce cool lighting, whereas artificial lighting can result in color temperatures that are red, yellow, and green, or a combination of different color casts. With digital photography, dealing with the white balance issue is easier than with film. When Auto White Balance (AWB) is used, the white balance can change on a per-shot basis. For artificial light sources such as street lamps or illuminated signs, you could also try one of the White Balance presets such as Tungsten or Fluorescent. Many cameras also offer a custom White Balance feature that can be used to measure the illumination at the scene and adjust the white balance accordingly. If you shoot RAW (which is highly recommended for night photography), white balance is not as much of an issue since you can adjust the White Balance setting in Lightroom, Adobe Camera Raw, or in the RAW conversion software of your choice. This allows you to use White Balance as a creative setting to modify the look and feel of the image.

Other Necessary Equipment

The only absolutely necessary piece of equipment for night photography is a camera that can do long exposures, and most cameras today have this capability. Timed shutter speeds generally go down to 30 seconds, and many cameras offer a Bulb setting for longer exposures. There are some other items that give you more control over your images.


 Tripod
Although handheld night photography is possible and blurry images are not always undesirable, a tripod is necessary if you want to make images where the scene is sharp and in focus (just be sure to turn off any image stabilization or vibration reduction features in the lens when the camera is mounted on a tripod). If you don’t have a tripod with you, any stable surface, even the ground, will do for immobilizing the camera, although your choice of camera angle and compositions may be limited by the position and height of the impromptu stabilizing surface. Having a tripod makes it much easier to explore vantage points and control the framing.

Locking shutter release cable
As mentioned earlier, most cameras offer timed shutter speeds down to 30 seconds, although some may have longer timed shutter speeds. After that you need to use the Bulb setting, which keeps the shutter open for as long as the shutter button is depressed. But holding the button down by hand can be awkward and at times very uncomfortable. A locking shutter release cable lets you lock the shutter open for the length of the exposure and release it when the time is up.

Extra batteries
 Long exposure photography can be a major drain on your camera’s battery, which has to supply power to the image sensor for as long as the shutter is open. Temperature can also affect battery performance, and in very cold weather, your battery life may be much shorter than normal. You should plan on carrying extra, fully charged batteries with you for your nocturnal photo shoots. The longer the exposure times, the more likely you will need to use the extra batteries.

Extending Exposure with HDR

Just as shooting multiple exposures to blend into a panorama can help to extend the frame, High Dynamic Range (HDR) imaging techniques allow you to extend the limits of camera exposure and create images that contain a range of contrast and brightness values that better reflect how you see the play of lights to darks, which would be impossible to capture in a single shot. The ability to easily create and combine multiple exposures of a scene into an HDR image has been one of the most groundbreaking aspects of digital photography, freeing photographers from the technical limitations of the image sensor in their cameras.

When to Shoot HDR

In many scenes, the dynamic range of contrast from the deep shadows to the bright highlights exceeds the capabilities of image sensors to capture all the tonal detail present in the scene in a single exposure. High-contrast outdoor scenes on bright, sunny days; sunsets; twilight photography; and interior locations that combine darker areas with artificial illumination or views through windows to brightly lit exteriors are all situations in which you can use HDR techniques to capture a full range of tonal values.

Because HDR involves taking multiple shots of the same scene, each at a different exposure optimized to record a different level of brightness in the scene, it is not suited for all types of photography, especially where fast motion in the main subject is an integral part of the story, such as with sports photos. It is an excellent match, however, for landscapes, cityscapes, architecture, nature, travel, and still lifes, and can even be used to create an intriguing look for highly stylized portraits. For longer exposure times, a tripod is necessary, but for scenes where the shutter speeds are fast enough for handheld photography or where you can artfully brace the camera to reduce camera shake, your camera’s Auto Exposure Bracketing feature will let you create HDR images of many different subjects.

Photographing for HDR

To make a great HDR image, the first step is recognizing situations in which this technique will work well for the scene. As noted previously, any high-contrast scene with a wide range of brightness values is perfect for HDR. But even scenes with tamer contrast ratios can be enhanced in interesting ways when rendered in HDR.

Exposure considerations

Recording a scene in HDR involves taking a series of shots so that all levels of the brightness range are recorded with a good exposure. The number of shots you create will vary depending on the range of contrast present in the scene. Three to seven shots is typical for many situations, but more may be necessary in extreme cases, especially when the light source, usually the sun, is in the frame. The number of shots you use will be determined by the lighting conditions and also by how much detail in the deepest shadows you want to reveal. To get the most benefit from HDR, shoot in RAW to capture as much tonal information as possible.

Here are some additional exposure considerations to keep in mind:

Aperture. In terms of camera exposure, the main thing to remember is that, as with panoramas, the aperture needs to be the same for all the shots in the sequence so that the depth of field is consistent. Differences in depth of field will cause alignment issues when the images are blended together.

Shutter speed. Because the aperture will not change, adjusting the shutter speed will create the range of different exposures. If you will be using an Auto Exposure Bracketing feature and holding the camera, keep an eye on what the shutter speeds are for the different shots. If the shutter speed gets too slow, it may not be feasible to hold the camera for the shot without some trace of camera shake being recorded. Vibration reduction lenses can help you get by when holding the camera, even at lower shutter speeds like 1/15th and 1/8th of a second. But you should test the camera’s ability to record a sharp handheld shot at those slower speeds before relying on it for a photograph that really matters. When in doubt, try to steady the camera as best you can, and for best results, use a tripod.

Exposure range. The classic approach to HDR exposure is to bracket the shots so that they are one stop apart in exposure (Figure 6.22), although some photographers use a 2-stop difference between images. In a shot where the “normal” exposure would be f/8 at 1/125, a 1-stop range would result in shutter speeds of 1/30 and 1/60 on the overexposed end, which will record more detail in the darkest shadows, and 1/250 and 1/500 on the underexposed side, which will record detail in the brightest highlights. This would produce a sequence of five images with an exposure difference of one stop each.
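
Since each stop doubles or halves the exposure time while the aperture stays fixed, the five-shot sequence above can be generated mechanically. Here is a small Python sketch of that bracketing arithmetic (the function name is ours; real cameras label these times with nominal speeds such as 1/30 and 1/60):

```python
from fractions import Fraction

# Generate bracketed shutter speeds around a "normal" metered exposure,
# keeping the aperture fixed. Each step is one stop: double the exposure
# time on the overexposed side, halve it on the underexposed side.

def bracket_shutter_speeds(base_speed, stops_each_side=2):
    """base_speed is the metered exposure time, e.g. Fraction(1, 125).
    Returns times from most overexposed to most underexposed."""
    return [base_speed * Fraction(2) ** s
            for s in range(stops_each_side, -stops_each_side - 1, -1)]

speeds = bracket_shutter_speeds(Fraction(1, 125))
print([str(s) for s in speeds])
# ['4/125', '2/125', '1/125', '1/250', '1/500']
# 4/125 s and 2/125 s are what a camera labels as 1/30 and 1/60.
```

Using exact fractions rather than floating point keeps the stop relationships obvious; a 2-stop bracket is just `Fraction(2) ** 2` per step instead.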

ISO. If the camera is not on a tripod, the ISO should be set to a number that produces exposures with shutter speeds that can be adequately handheld. On-site testing will determine the best ISO for handheld shooting. If you do have the luxury of using a tripod, choose a lower ISO, such as 100 or 200, to minimize noise.

Seeing the Light

John Muir, the eminent naturalist and wilderness explorer, recognized the important part that light played in creating a memorable scene. In his book, The Mountains of California, he wrote, “It seemed to me the Sierra should be called not the Nevada, or Snowy Range, but the Range of Light.” During his many years of travels in the Sierra Nevada, he had numerous opportunities to see how the high country light could transform a landscape. He recognized that, as powerful as the mountain vistas were, the light, ever changing, filtered by mist, clouds, rain, and snow, added mystery and majesty to the jagged peaks and sheer walls of granite.

In photography, light is everything. It brings a scene to life. It establishes a mood and influences the emotional impact of an image. Even photos made in the dark of night, far from artificial light sources, are the result of moonlight or starlight building up over time to form an exposure. The character and quality of light can have a great influence on a photograph, and understanding how a camera “sees” light is fundamental to producing a good image. One of the most important skills a photographer needs is the ability to visualize the photograph—seeing the picture in your mind’s eye and understanding how the camera will interpret the scene. Beginners see what they know is there and are often disappointed that the picture doesn’t convey the mood or meaning that they saw when they took it; photographers learn to see what the camera sees and to work with the light and camera controls to craft the image they see in their mind’s eye.

  Digital cameras offer an impressive array of automatic features and can almost always be relied on to produce a decent picture. But making a good photograph requires an understanding of the principles of photography and knowing how a camera translates the light in the scene into the photo captured by the image sensor. The difference between an ordinary picture and a good photograph is the difference between just pointing and shooting, and consciously working with composition, light, and camera controls to create a memorable image.
 
  Measuring the Light
The proper exposure for a photograph is achieved through a combination of aperture, shutter speed, and the ISO sensitivity of the image sensor. A light meter, either in the camera or an external, handheld model, is used to determine the optimal exposure settings by evaluating how much light is being reflected back from the scene being photographed. Most cameras have built-in light meters that do a good job of calculating an adequate exposure setting, and higher-end cameras offer more sophisticated meters that produce excellent results. For photographers, this feature is definitely a case of “better living through technology”; you can concentrate more on the composition of the image and changing dynamics within the scene while knowing that you can rely on the camera’s light meter to do a good job in most lighting situations. Although you may trust your camera’s light meter when it comes to exposure decisions, it’s still important to know how the light meter evaluates light, so you can better anticipate how the exposure settings it recommends will affect the image.
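
One way to see how aperture and shutter speed trade off against each other is the standard exposure value (EV) relationship, EV = log2(N²/t), where N is the f-number and t the shutter time in seconds. This formula is general photographic practice rather than anything camera-specific, and the Python below is our own illustrative sketch:

```python
import math

# Exposure value at a fixed ISO: EV = log2(N^2 / t).
# Equivalent exposures (same total light reaching the sensor) share the
# same EV, which is why a meter can offer several aperture/shutter pairs.

def exposure_value(f_number, shutter_seconds):
    return math.log2(f_number ** 2 / shutter_seconds)

# f/8 at 1/125 and f/5.6 at 1/250 differ by one stop of aperture and one
# stop of shutter speed in opposite directions, so they match:
print(round(exposure_value(8, 1 / 125)))     # 13
print(round(exposure_value(5.6, 1 / 250)))   # 13
```

A meter that recommends f/8 at 1/125 would equally accept f/5.6 at 1/250 or f/11 at 1/60; the EV stays (nearly) constant across all three.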

How Light Meters See Light
 The light meters found in modern cameras are very good at analyzing the light in a scene and selecting an aperture and shutter speed combination that will yield an image that is neither drastically underexposed nor overexposed. In most cases, the exposure may actually be quite good, but it’s not necessarily the best exposure for a given scene. A light meter doesn’t know what you’re photographing, nor can it determine if you’re using it the right way or even pointing it at the right place. A light meter doesn’t give you the correct settings to use; it just gives you the settings to create a certain type of exposure based on its very narrow interpretation of the scene before your lens. This limitation of the device is due to the fact that all light meters can see only luminance (brightness) in the form of how the light is being reflected from a scene. They can’t see color, evaluate contrast, or even tell the difference between black and white, so the reflected light from every scene they analyze is averaged into a shade of medium gray. A light meter’s view of the world is so limited that the gray tone seen by light meters is not just any gray, but a very specific, 18 percent gray. This precise percentage comes from the fact that most scenes tend to reflect approximately 18 percent of the light that falls on them. When you point a light meter at a scene—no matter if it’s a snowy hillside, a dark cave, or a casual portrait—and you take a reading, the meter assumes that it’s pointed at something that is 18 percent gray; the meter is calibrated to calculate an aperture and shutter speed that will record the average reflected luminance of the scene as a middle gray.

Types of light meters
There are two types of light meters used in photography: those that are built into the camera and external, handheld units. Handheld light meters usually can be set up to measure the light as either incident or reflective light. Incident refers to the light falling onto the subject, and reflected is light reflected off the subject. When used in reflective mode, handheld light meters are very useful for taking precise reflected light readings in landscape photography. Special spot meter attachments with viewfinders can be fitted onto some handheld meters to provide the capability to take measurements from very small areas in a scene. This allows a photographer to precisely calculate the contrast range in a scene, make an exposure that will contain the tonal information needed to make the best exposure, and create a file that is more easily processed and will also print well.

Metering Modes
With the exception of entry-level compact models, most digital cameras offer a choice of metering modes. Metering modes tell the light meter to analyze the light in different ways. The three most common are Matrix, Center-Weighted, and Spot. Working with the appropriate metering mode allows you to get the best exposure in a variety of situations. You usually select metering modes either from within the camera’s menu system or from a control button or selector switch located on the camera body.

Matrix
Depending on your camera, this mode may be called Multi-segment, Pattern, or Evaluative metering. We feel it’s the best mode to use for most situations, and it’s the one that we use most often. The Matrix metering pattern divides the image into sections, or zones (anywhere from 30 to well over 200, depending on the camera), and takes a separate reading for each zone. The camera then analyzes the different readings and compares them to information programmed into its memory to determine an optimal exposure setting. High-end digital SLRs that offer a range of different autofocus (AF) zones will also factor the active autofocus zone into the metering pattern, giving that area of the image more weight in the final exposure calculation. In most cases, Matrix metering works very well, and we use it for most images simply because it does such a great job of calculating an optimal exposure, letting us concentrate on image making. Matrix metering may not be the best choice in all situations, and recognizing when it’s not can help you decide whether it’s time to use another metering mode or adjust the exposure. Common situations where Matrix metering may be fooled include heavily backlit scenes, bright areas such as snow or the beach, and dark subjects that you want to record as dark. To get a sense of how your Matrix metering system responds to different lighting situations, you should take a series of pictures in as many different lighting conditions as possible—a sunny day, a cloudy day, dawn, heavy shade, open shade, high contrast, low contrast, indoors—and carefully evaluate the results. Pay particular attention to the shots you take in very bright daylight where there is an extreme contrast range between deep shadows and bright highlights. This is the type of lighting that’s most likely to cause problems for a Matrix metering system.
To be fair, we should point out that this type of lighting could cause problems for any metering system, but since Matrix metering bases its exposure on many different areas of the scene, it may not be able to distinguish which area of a high-contrast scene is important to you (remember that a light meter has no idea what you’re photographing). Your goal with these exposure tests is to try to determine how the meter handles the bright highlights and deep shadows. Does it tend to preserve good shadow detail at the expense of blown-out highlights? Or does it do a good job of controlling bright highlights but fail when it comes to capturing detail in the deep shadows? Using the histogram feature on your camera (covered later in this chapter) can help you evaluate problems on the highlight end, and to some extent, you can also use it to evaluate the shadows. To properly assess the integrity of very subtle shadow detail, however, you should view the images in an image-editing program on a calibrated monitor.

   Center-Weighted
The Center-Weighted metering pattern has been used in cameras for years and was around long before Matrix metering came on the scene. This mode meters the entire frame but, as the name implies, gives more emphasis to the center area of the frame. The ratios vary among different cameras, but typically a Center-Weighted meter will base 60 to 75 percent of its metering calculations on the center circle (this is usually shown in the viewfinder) and the remainder on what’s happening in the rest of the frame. Center-Weighted metering is commonly used in portrait situations, since the reflectance values at the center of the frame have more influence in determining the exposure. Although this is the most common type of light meter in more consumer-oriented digital cameras (if your camera doesn’t offer a choice of metering patterns, then it probably uses a Center-Weighted meter), the main drawback is that it assumes your subject is centered. While that may be the case for general snapshot photography, it certainly isn’t true for all images, especially if you’re a photographer who’s already familiar with the major tenets of photographic composition—one of which emphasizes not centering the subject for every shot. For this reason, Center-Weighted metering often produces results that are adequate but not the best for many situations.
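As a rough sketch of the idea, a Center-Weighted reading can be modeled as a weighted average of two luminance readings. The two-zone split and the 70 percent default below are our own simplification; real cameras use finer-grained, manufacturer-specific weightings:

```python
def center_weighted_reading(center_luminance, edge_luminance, center_weight=0.7):
    """Model a Center-Weighted meter: roughly 60 to 75 percent of the
    reading comes from the central circle, the rest from the remainder
    of the frame. The 0.7 default is an illustrative assumption."""
    return center_weight * center_luminance + (1 - center_weight) * edge_luminance

# A bright centered subject against a dark background pulls the
# overall reading strongly toward the subject:
reading = center_weighted_reading(center_luminance=200, edge_luminance=50)
print(reading)  # 155.0
```

Note how the center reading of 200 dominates the result even though the rest of the frame is much darker, which is exactly why this mode struggles with off-center subjects.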

    Spot (Partial)
Whereas Matrix metering looks at many different areas of the image to evaluate the lighting in a scene, a Spot meter is designed to measure only the light in a very small area. The exact size of the spot may vary from camera to camera, but it’s typically a 3-to-10-degree circle in the center of the frame (degree refers to the angle of view—3 degrees is about as large as a dime looks on the sidewalk between your feet). This can encompass anywhere from 2 to 10 percent of the entire scene. Some pro cameras also have Spot meters that link the metering spot to the active autofocus zone for more precise control, so the metering will follow the subject that’s in focus. On cameras that feature user-selectable AF zones, this allows an incredible degree of control for metering and focusing using the same area of the viewfinder. Spot metering is appropriate when you want to measure a specific part of the scene and base the camera’s exposure on the luminance (brightness) of that area. Since meters want to place everything into a zone of 18 percent reflectance, keep in mind that a Spot meter is no different in this regard; it just measures a much smaller area. A classic situation where you might use Spot metering is a scene where a relatively small foreground subject is juxtaposed against a very bright or dark background. Matrix or Center-Weighted metering would factor a bright background into its calculations, causing the foreground subject to be underexposed. By framing the image so that the spot area is on the subject, a correct meter reading can be obtained for that area of the image.

Essential Filters

   Although for some people programs such as Adobe Photoshop have reduced the need for filters, there are still some filters that we regard as being essential items in a well-stocked camera bag. Others that we’ll mention here may not be totally essential, but they help modify the light in specific ways and let you create images that would be difficult or impossible without them. Not all digital cameras can accept filters, and some can only use proprietary filters designed by the manufacturer. To accept standard screw-on filters, the front of the lens must have a threaded ring. If your lens has this feature, there should be a number indicating the size of the filter that will fit on the lens. 

Here are filters to consider:

   Skylight filters. These filters are clear glass and are mainly used as protective covers for the lens. If there is an impact to the front of the lens or contact with an abrasive surface, the filter will take the brunt of the punishment and spare the front glass lens element from being damaged. We suspect that there’s also a certain amount of add-on selling here by camera shops and online dealers, but the basic premise makes sense and we use them as lens protection. A $20 to $40 filter is less painful to write off than an expensive lens costing hundreds of dollars.

   Polarizing filter. Of all the filters that we carry in our camera bags, we’d have to say that a polarizer is the one filter we just wouldn’t want to be without. Like all filters, a polarizer modifies the light as it enters the lens. Chief among these modifications is a polarizer’s ability to remove some of the glare and reflections from the surface of glass and water. It also excels at darkening blue skies and boosting color contrast and saturation. The quality of light in a scene can be made clearer and less hazy. The amount of polarizing effect you get depends on a few variables, including the time of day, the angle of the light relative to the scene you are photographing, and the reflective properties of a given surface.  The best type of polarizing filter to use on an autofocus SLR camera is a circular polarizer. Don’t use a linear polarizer with an SLR, because it can confuse the camera’s autofocus and metering systems. A circular polarizer allows you to rotate a circular ring on the front of the filter and adjust the level of polarizing effect until you get it just right. Polarizers are darker filters that cut down on the amount of light entering the lens, and this usually means using a wider aperture or a slower shutter speed. Since they reduce the light entering the lens, they are easiest to use on cameras that use TTL metering (through the lens) so you don’t have to calculate a compensating adjustment. For rangefinder cameras that also feature an LCD screen that shows you a live preview of what the lens sees, circular polarizers can be used in much the same way as you would with a DSLR, by rotating the ring until the effect looks best. The only drawback is that these screens can be somewhat hard to see in bright light, which makes it more difficult to evaluate the polarizing effect. Seán has been able to use his larger circular polarizing filter on a smaller compact camera by simply holding it against the front of the lens. 
Since the preview on the LCD is sometimes hard to see, to judge which rotational angle is best, he first holds the filter up to his eye and turns it until the effect looks good. Then he places it in front of the lens, making sure to keep the filter rotation the same. Fortunately, his Tiffen (tiffen.com) circular polarizer has a small handle for turning the ring that makes it easy to use as a positioning marker. If you’re using a polarizer on a rangefinder camera that does not also have an LCD screen showing what the lens is seeing, you won’t be able to see when the polarizing effect is best, so your success will be somewhat hit or miss. For this type of camera, especially if it does not use TTL light metering, a linear polarizer is best. You’ll also need to factor in a plus-1 or plus-2 stop exposure compensation to adjust for the darker filter letting less light through the lens. Few things liven up an outdoor shot on a sunny day like a polarizing filter. Although you can use image-editing software to darken a blue sky and add some saturation into the colors of an image, it doesn’t replace a good polarizing filter.
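Since each stop represents a doubling of light, the compensation arithmetic for a camera without TTL metering is straightforward. Here is a small sketch (the 1/250 sec metered value is an illustrative example, not a recommendation):

```python
def compensated_shutter_time(base_time, stops):
    """Each stop of exposure compensation doubles the shutter time;
    a polarizer typically costs 1 to 2 stops of light."""
    return base_time * (2 ** stops)

# Metered at 1/250 sec without the filter, a polarizer needing a
# plus-2 stop compensation requires four times the exposure,
# which works out to roughly 1/60 sec:
print(compensated_shutter_time(1 / 250, 2))  # 0.016
```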

   Neutral density (ND) filters. These filters reduce the amount of light that enters the lens. Since they are neutral, they do not affect the color balance of a scene. ND filters are used for times when you want to achieve a certain effect, such as shallow depth of field produced by a wide open aperture, or a motion blur from a slow shutter speed, but the lighting conditions are too bright to allow the necessary settings. Flowing water is a classic subject for a slow shutter speed treatment, as is long grass blowing in the wind. By placing a dark ND filter on the lens, you cut down on the amount of light, making the wider apertures and slower shutter speeds accessible (Figure 3.37). ND filters are commonly available in 1-, 2-, and 3-stop increments, and darker ND filters can also be ordered that will cut back as much as 5 stops of light (Figure 3.38). For times when you need a really strong ND filter, you can stack them together to increase the darkening effect.
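The stop arithmetic behind ND filters is worth internalizing: each stop halves the transmitted light, and stacked filters simply add their stop ratings. A quick sketch:

```python
def nd_light_fraction(stops):
    """An ND filter rated at N stops transmits 1 / 2**N of the light;
    stacking filters adds their stop ratings together."""
    return 1 / (2 ** stops)

print(nd_light_fraction(3))      # 0.125 -- a 3-stop ND passes 1/8 of the light
print(nd_light_fraction(3 + 2))  # 0.03125 -- a 3-stop stacked with a 2-stop
```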

    Graduated ND filters. Graduated ND filters are dark on the top half and then gradually fade to transparent along the middle of the filter. They are used in situations where the sky area of a landscape shot is much brighter than the terra firma below the horizon. By using the filter to darken the sky, a more balanced exposure can be made for both the earth and sky portions of the image. Standard graduated ND filters that screw onto the front of the lens are less than ideal since you can’t adjust the placement of the horizon line; it’s always right through the middle of the filter. A better approach is offered by Singh-Ray (singh-ray.com) with its Galen Rowell Graduated ND filters. Designed by the late, renowned nature photographer, these filters come in a range of densities and also feature either a hard-edged or soft-edged gradient transition from dark to transparent. Since the filter is a rectangular piece of acrylic that fits into a standard Cokin P-series filter holder, the photographer can adjust the graduated edge of the filter up or down to match the location of the horizon in the image. This makes them ideal for compositions where the horizon line is not centered. The advent of digital cameras has introduced a new way of achieving this effect that involves shooting two (or more) exposures on a tripod, with each one exposed for a specific area in the image. The different exposures can be blended together in the digital darkroom either manually or via an HDR process. If multiple exposures and digital postproduction seem like too much hassle, or if you simply prefer to do as much as possible in the camera and on location, graduated ND filters are an elegant solution to a common photographic problem.

How a Digital Camera Works

Lens to Image Sensor to Media

Photography—whether film or digital—is all about light. The light reflects off the scene in front of the lens, then passes through the lens and the open shutter. This process is similar to the way light passes through the lens of your eye to the cones and rods at the back of the eye and on to the optic nerve. It’s after the light passes through the open shutter that film and digital cameras start to work differently. With film photography the light that passes through the lens exposes the light-sensitive film. The film records the light through photo-molecular reactions that embed a latent (nonvisible) image in the film. The latent image becomes visible when the film is chemically processed. With digital photography the flow of light is the same, but the flow of information is not. When the light passes through the open shutter, it hits the image sensor, which translates that light into electrical voltages. The information is then processed to eliminate noise, calculate color values, produce an image data file, and write that file to a digital memory card. The camera then prepares to take the next exposure. This all happens very quickly, with a tremendous amount of information being simultaneously processed and written to the memory card.


The Lens

Camera lenses are surprisingly complex and sophisticated in their construction and design, containing a series of elements—typically made of glass—that refract and focus the light coming into the lens. This allows the lens to magnify the scene and focus it at a specific point. After many years of working with digital cameras, we’ve come to the conclusion that digital photography requires higher-quality lenses than film photography. Film has a variable-grain structure, whereas pixels are all the same size; this means that digital cameras operate at a disadvantage when it comes to capturing fine detail. In addition, an image sensor is more sensitive to light hitting it directly as opposed to hitting it at an angle, resulting in a slight loss of sharpness, particularly around the edges of wide-angle lenses. This loss of sharpness can be compensated for in part by using a lens of the highest quality. We’ve also learned that although you can correct one or two less-than-sharp images within reason, it is absolutely no fun to correct hundreds or thousands of them. A bright, sharp lens is something you will never regret photographing with.

Focal Length

 Different lenses provide different perspectives, and what makes those lenses different is their focal length. Focal length is technically defined as the distance from the rear nodal point of the lens to the point where the light rays passing through the lens are focused onto the focal plane—either the film or the sensor. This distance is typically measured in millimeters. From a practical point of view, focal length can be thought of as the amount of a lens’s magnification. The longer the focal length, the more the lens will magnify the scene. At longer focal lengths the image circle projected onto the image sensor contains a smaller portion of the scene before the lens. In addition to determining the magnification of a scene, the focal length affects the apparent perspective and compression of the scene. In actuality, the focal length isn’t what changes the perspective. Rather, the change in camera position required to keep a foreground subject the same size as with another focal length is what changes the perspective. For example, if you put some objects at the near and far edges of a table and then photograph the scene with a wide-angle lens, the background objects will appear very small. If you photograph the same scene with a telephoto lens, you’ll need to back up considerably to keep the foreground objects the same size as they were when photographed with a wide-angle lens. This changes the angle of view of the subjects, so that the distance between foreground and background objects appears compressed.
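The relationship between focal length and how much of the scene is taken in can be expressed with the standard thin-lens angle-of-view formula, AOV = 2 × arctan(sensor width ÷ (2 × focal length)). A small sketch (the 36 mm width assumes a full-frame sensor; smaller sensors yield narrower angles at the same focal length):

```python
import math

def angle_of_view(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view in degrees, using the standard
    thin-lens approximation: 2 * atan(width / (2 * focal length))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# On a full-frame sensor (36 mm wide), a longer focal length
# takes in a narrower slice of the scene:
print(round(angle_of_view(36, 50), 1))   # 39.6 degrees
print(round(angle_of_view(36, 200), 1))  # 10.3 degrees
```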

Lens Speed

 Lens speed refers to the maximum aperture (minimum f number) of the lens. A “fast” lens has a large maximum aperture, allowing the lens to gather more light—a very useful feature in low-light situations such as in the early morning or in candlelight. A “slow” lens has a maximum aperture that is smaller and lets in less light. A fast lens might have a maximum aperture of f1.4 or f2.8, whereas a slow lens might have a maximum aperture of f5.6. A fast lens is more desirable and is usually more expensive. A proper exposure depends on the camera ISO setting, the aperture size, and the shutter speed to allow an appropriate amount of light to reach the digital sensor. Opening the aperture allows you to use a faster shutter speed. A fast lens with a larger maximum aperture means that you can use a faster shutter speed than would be possible with a slow lens—a significant factor in low-light and fast-moving situations. A fast lens also helps the camera focus. Because the lens transmits more light, the camera will be better able to acquire proper focus, even in relatively low-light situations.
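Because light gathered scales with the aperture area, which is proportional to 1 over the f-number squared, the difference between a fast and a slow lens is larger than the raw numbers suggest. A quick sketch of the arithmetic:

```python
import math

def light_ratio(fast_fstop, slow_fstop):
    """Light gathered scales with aperture area, i.e. 1 / N**2, so the
    ratio between two f-numbers is (slow / fast) ** 2."""
    return (slow_fstop / fast_fstop) ** 2

def stops_difference(fast_fstop, slow_fstop):
    """Each stop is a doubling of light: stops = 2 * log2(slow / fast)."""
    return 2 * math.log2(slow_fstop / fast_fstop)

# An f1.4 lens gathers 16 times the light of an f5.6 lens (4 stops),
# letting you shoot at 1/500 sec where the slow lens needs 1/30 sec:
print(light_ratio(1.4, 5.6))       # 16.0
print(stops_difference(1.4, 5.6))  # 4.0
```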

The Viewfinder and the LCD

The viewfinder is aptly named; it enables you to find the view you want of a scene and compose the shot. This is the essence of photography. Although seemingly a simple aspect of a camera, not all viewfinders are alike, and it’s important to know the differences between them and how a viewfinder can affect your photography. With many cameras, the viewfinder is a small window you look through to see how the scene looks. This is referred to as an optical viewfinder. Not all compact digital cameras include an optical viewfinder, but nearly all of them feature an LCD that serves double duty as a viewfinder and as a way to review your photos. The smallest cameras frequently do not have an optical viewfinder and rely entirely on the LCD for this purpose. On most DSLR cameras the LCD is only for reviewing photos and changing camera menu settings, though some newer cameras offer a live view feature that adds viewfinder functionality to the LCD screen.

LCD: Advantages and Disadvantages

We all love instant gratification, and that is the primary advantage of the LCD display on a digital camera. It allows you to preview the photo you are about to take and review the one you just took to check for exposure and composition or to share with everyone around you. In addition, you can look at any of your images at any time after you’ve taken them. With non-SLR digital cameras, the LCD display adds another dimension by allowing you to use the display as your viewfinder. Instead of putting your eye to the camera to compose the scene, you can hold the camera in front of you and compose based on the preview display on the LCD. This not only provides a little more freedom of movement, but also the opportunity to take images with unique perspectives. We’ve all seen the crowds of photographers trying to take pictures of the same subject, with those in the back holding their cameras over their heads, clicking the shutter release, and hoping for the best. The LCD display on a digital camera takes the guesswork out of that situation. You can take pictures up high, down low, around the corner, or elsewhere, and still know what you’ll end up with. One of the big disadvantages of the LCD is that it can be very difficult to compose a shot or review your images outdoors on a bright, sunny day. For composing a photo, you’ll find yourself shading the LCD with your hand, and when reviewing images, you may have to seek the shade of a tree, hold the camera inside your camera bag, or even put a jacket over your head to get a good look at your images. Turning up the brightness of the LCD display may make it easier to see in bright light, but it also changes the appearance of the image, so you can’t really judge proper exposure based on the LCD preview alone. Your best bet is to shade the display as best you can to review your images, and use the histogram feature for exposure evaluation.

The Image Sensor

Film does the job of both recording and storing the image photographed. With digital cameras these jobs are split between the image sensor and digital media. The image sensor replaces film as the recording medium. The sensor actually consists of millions of individual light-sensitive sensors called photosites. Each photosite represents an individual pixel. These photosites, or pixels, generate an electrical voltage response based on the amount of light that strikes them. The analog voltage response values are then translated into digital values in a process called analog to digital conversion—or in geek speak, A to D conversion. The voltage information that has been translated to discrete digital numbers represents the tonal and color values in the photographic image. There is a certain irony in the fact that at the very instant of its creation a digital image is not really digital at all! Even though digital cameras take pictures in full color, the sensors are unable to see color. They can only read the luminance, or brightness, values of the scene. Colored filters are used to limit the range of light that each pixel can read so that each pixel records only one of the three colors (red, green, or blue) needed to define the final color of a pixel. Color interpolation is used to determine the remaining two color values for each pixel.
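To make color interpolation concrete, here is a deliberately tiny sketch of the idea on an RGGB Bayer layout (the most common filter arrangement). Real in-camera demosaicing algorithms are far more sophisticated, and the sample luminance values here are invented for illustration:

```python
def bayer_color_at(row, col):
    """Which channel an RGGB Bayer photosite records."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

def interpolate_channel(raw, row, col, channel):
    """Estimate a missing channel at (row, col) by averaging the
    neighboring photosites that actually recorded that channel."""
    neighbors = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, c = row + dr, col + dc
            if (dr, dc) != (0, 0) and 0 <= r < len(raw) and 0 <= c < len(raw[0]):
                if bayer_color_at(r, c) == channel:
                    neighbors.append(raw[r][c])
    return sum(neighbors) / len(neighbors)

# A 2x2 mosaic of luminance readings: R=100, G=80, G=90, B=60.
raw = [[100, 80],
       [90, 60]]
# The blue value at the red photosite (0, 0) must be borrowed from
# its diagonal blue neighbor:
print(interpolate_channel(raw, 0, 0, "B"))  # 60.0
```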

Types of Sensors

There are different types of image sensor technology, but the most widely used image sensors in digital cameras are CCD (charge-coupled device) and CMOS (complementary metal oxide semiconductor).

CCD
   CCD sensors capture the image and then act as a conveyor belt for the data that defines the image. These sensors use an array of pixels arranged in a specific pattern that gather light and translate it into an electrical voltage. When the voltage information has been collected by each pixel as an image is taken, the data conveyor belt goes into action. Only the row of pixels adjacent to the readout registers can actually be read. After the first row of data is read, the data from all other pixels is shifted over on the conveyor belt so that the next row moves into position to be read, and so on. The CCD sensor doesn’t process the voltage information and convert it to digital data, so additional circuitry in the camera is required to perform those tasks.

CMOS
 CMOS sensors are named after the process used to create the components—the same process used to manufacture a variety of computer memory components. Like CCD sensors, CMOS sensors contain an array of pixels that translate light into voltages. Unlike on a CCD sensor, each pixel on a CMOS sensor includes additional circuitry that converts the voltage into actual digital data. Also, the data from the sensor can be transmitted to the camera’s circuitry in parallel, which provides much faster data transfer from the sensor to the camera circuitry. Because of this significant difference in how the data is processed from the sensor, CMOS sensors are also known as APS sensors (active pixel sensors). Because circuitry is used at each pixel site in a CMOS sensor, the area available to capture light is reduced for each pixel. To compensate for this, tiny microlenses are placed over each pixel on CMOS sensors to focus the light and effectively amplify it so that each pixel is able to read more light. Because sensors using CMOS technology are able to integrate several functions on the actual image sensor, they use less power and generate less heat than their CCD counterparts.

File Formats - In the Camera

Two basic file format options are available to you when working with a digital camera to capture still images. Each has its strengths and weaknesses, and each can impact the final image quality.

RAW capture

     The RAW file format is not a file format in the traditional sense. In fact, RAW is not an acronym. It is capitalized only to resemble the other file formats. Rather than being a single file format, RAW is a general term for the various formats used to store the raw data recorded by the sensor on a digital camera. Each camera manufacturer has developed proprietary file formats for RAW capture modes. Because none of the proprietary file formats are standard image file formats, you must convert RAW files before you can edit the image.
    You can convert RAW images to a file you can work with in Photoshop or print by using the camera manufacturer’s software or, as we recommend and practice, by using Adobe Camera Raw or Adobe Photoshop Lightroom. Multiple RAW file formats are created by many camera manufacturers, and often new versions are included with each new camera model release. As a result, literally hundreds of RAW file formats exist. Adobe has responded to this by providing an open industry standard for RAW capture called the DNG (digital negative) file format. DNG has been adopted by several digital camera manufacturers but hasn’t managed to stem the tide of new RAW file formats. You can convert proprietary RAW captures into the DNG file format to ensure compatibility well into the future with the free Adobe DNG converter, Adobe Bridge, or Adobe Lightroom.
     The file size, in megabytes, of RAW captures for most cameras is approximately the same as the megapixel count of the camera. There is some variation in this from camera to camera, and some formats offer the option to further compress the data. Regardless, the files that result from RAW capture will be considerably larger than JPEG captures, though smaller than TIFF captures (a format we never use or recommend). The advantages of the RAW format include the ability to capture high-bit data; no in-camera processing; and options for adjusting the exposure, white balance, and other settings with great flexibility during the conversion process. The disadvantages are the relatively large file sizes; the need to convert files before you can retouch, composite, or post them on the Web; and the need to work with high-bit files to obtain all the benefits of the high-bit capture. We highly recommend RAW capture and use it for the vast majority of our photography. The tools for working with RAW captures have become sophisticated enough that there are no longer many strong arguments against RAW capture. We very much want to maintain the benefits of high-bit data, post-processing flexibility, and lack of lossy image compression that RAW capture provides.


JPEG 

    The major advantage of the JPEG (Joint Photographic Experts Group) format is convenience. Just about any software application that allows you to work with image files supports JPEG images. Also, the files are small because compression is applied to the image when it’s stored as a JPEG. This compression is lossy, meaning pixel values are averaged out in the process and the image loses detail and color. If you use the highest-quality (lowest-compression) settings, however, image quality is generally still very good. But again, we don’t use the JPEG file format when image quality is the most important final criterion. Admittedly, we do use JPEG for quick party pictures or snapshots that are only destined for eBay or Facebook display, and also when capturing multiple frames for stop motion and time-lapse projects.
    When selecting the JPEG format in the camera, you will generally have the option of size and quality settings. We always recommend capturing the most pixels possible, taking advantage of the full resolution of the image sensor in your camera. This size option is generally labeled Large, and we recommend using the highest possible quality setting.

8 Bit vs. 16 Bit

      The difference between an 8-bit and a 16-bit image file is the number of tonal values that can be recorded. (Anything over 8 bits per channel is generally referred to as high bit.) An 8-bit-per-channel capture contains up to 256 tonal values for each of the three color channels, because each bit can store one of two possible values, and there are 8 bits. That translates into two raised to the power of eight, which results in 256 possible tonal values. A 16-bit image can store up to 65,536 tonal values per channel, or two raised to the power of 16. The actual analog-to-digital conversion that takes place within digital cameras supports 8 bits (256 tonal values per channel), 12 bits (4,096 tonal values per channel), 14 bits (16,384 tonal values per channel), or 16 bits (65,536 tonal values per channel) with most cameras using 12 bits or 14 bits.
     When working with a single exposure, imaging software only supports 8-bit and 16-bit-per-channel modes; anything over 8 bits per channel will be stored as a 16-bit-per-channel image, even if the image doesn’t actually contain that level of information. When you start with a high-bit image by capturing the image in the RAW file format, you have more tonal information available when making your adjustments. Even if your adjustments—such as increases in contrast or other changes—cause a loss of certain tonal values, the huge number of available values means you’ll almost certainly end up with many more tonal values per channel than if you started with an 8-bit file. That means that even with relatively large adjustments in a high-bit file, you can still end up with perfectly smooth gradations in the final output.
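This advantage is easy to demonstrate. The sketch below is our own simplified model, not Photoshop’s actual math: it darkens an 8-bit gradient by one stop and then brightens it back, quantizing to the working bit depth at each step, and counts how many distinct tonal values survive the round trip:

```python
def levels_after_adjustment(bit_depth):
    """Halve, then double, a full 8-bit gradient, quantizing to the
    working bit depth at every step; count distinct output levels.
    (A simplified model of editing-induced posterization.)"""
    max_val = 2 ** bit_depth - 1
    output = set()
    for v in range(256):                        # full 8-bit gradient
        x = round(v / 255 * max_val)            # promote to working depth
        x = round(x * 0.5)                      # one stop darker
        x = min(round(x * 2.0), max_val)        # ...then brighten back
        output.add(round(x / max_val * 255))    # return to 8-bit output
    return len(output)

print(levels_after_adjustment(8))   # 129 -- nearly half the levels are gone
print(levels_after_adjustment(16))  # 256 -- every level survives
```

Working at 8 bits, the rounding errors of the two adjustments destroy roughly half the tonal values, which is what produces visible banding in gradients; the high-bit version has enough headroom to absorb the same rounding without loss.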

    Working in 16-bit-per-channel mode offers a number of advantages, not the least of which is helping to ensure smooth gradations of tone and color within the image, even with the application of strong adjustments to the image. Because the bit depth is doubled for a 16-bit-per-channel image relative to an 8-bit-per-channel image, the actual file size will also be double. However, since image quality is our primary concern, we feel the advantages of a high-bit workflow far exceed the (relatively low) extra storage costs and other drawbacks, and thus recommend always working in 16-bit-per-channel mode.

Resolution = Information

     Resolution is one of the most important concepts to understand in digital imaging and especially in digital photography. The term resolution describes both pixel count and pixel density, and in a variety of circumstances these concepts are used interchangeably, which can add to misunderstanding. Camera resolution is measured in megapixels (meaning millions of pixels); both image file resolution and monitor resolution are measured in either pixels per inch (ppi) or pixel dimensions (such as 1024 by 768 pixels); and printer resolution is measured in dots per inch (dpi). In each of these circumstances, different numbers are used to describe the same image, making it challenging to translate from one system of measurement to another.
   This in turn can make it difficult to understand how the numbers relate to real-world factors such as the image detail and quality or file size and print size.  The bottom line is that resolution equals information. The higher the resolution, the more image information you have.
   If we’re talking about resolution in terms of total pixel count, such as the number of megapixels captured by a digital camera, we are referring to the total amount of information the camera sensor can capture, with the caveat that more isn’t automatically better. If we’re talking about the density of pixels, such as the number of dots per inch for a print, we’re talking about the number of pixels in a given area. The more pixels you have in your image, the larger that image can be reproduced. The higher the density of the pixels in the image, the greater the likelihood that the image will exhibit more detail or quality.
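The translation from pixel count to print size is simple division: inches = pixels ÷ ppi. A quick sketch (the 4256 × 2832 dimensions are an example of a roughly 12-megapixel capture, and 300 ppi is a common target for high-quality inkjet output; both values are illustrative):

```python
def max_print_inches(pixel_width, pixel_height, ppi=300):
    """Largest print possible at a given pixel density:
    inches = pixels / ppi."""
    return pixel_width / ppi, pixel_height / ppi

# A roughly 12-megapixel capture (4256 x 2832 pixels) at 300 ppi:
w, h = max_print_inches(4256, 2832)
print(round(w, 1), round(h, 1))  # 14.2 9.4
```

Lowering the target density to 200 ppi with the same file yields a print roughly half again as large in each dimension, at the likely cost of visible detail.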

Megapixels vs. Effective Megapixels
        Digital cameras are identified based on their resolution, which is measured in megapixels. This term is simply a measure of how many millions of pixels the camera’s image sensor captures to produce the digital image. The more megapixels a camera captures, the more information it gathers. That translates into larger possible image output sizes. However, not all the pixels in an image sensor are used to capture an image. Pixels around the edge are often masked, or covered up.
     This is done for a variety of reasons, from modifying the aspect ratio of the final image to measuring a black point (where the camera reads the value of a pixel when no light reaches it) during exposure for use in processing the final image. Because all pixels in the sensor aren’t necessarily used to produce the final image, the specifications for a given camera generally include the number of effective megapixels. This indicates the total number of pixels actually used to record the image rather than the total available on the image sensor.