In July 2015 I posted results from my early testing of Lightroom 6.1's Photo Merge to HDR feature in comparison to 32-bit floating point TIFFs created in Photomatix. Since then I have continued to use the Photo Merge tool with mixed results - some successes, some disappointments. This post shares an updated set of side-by-side comparisons from images I shot just yesterday, so these results are current for Lightroom CC 2015.8. In the context of this post, where I speak of DNG files I am referring to DNG files created within Lightroom's Photo Merge module only, not DNGs converted from any other RAW format. And where I speak of TIFF files, I am referring to 32-bit floating point TIFFs created in Photomatix.
Each of the updated images below includes easy-to-understand call-outs with the specific comparisons, so I won't re-type them all here. What you will see are examples where the Lightroom DNGs lose detail, create false aberrations and color banding, de-saturate color, and generate artifacts.
For the record, let me be clear that I'm not saying everything is bad with Lightroom's DNG/HDR files. I use the Photo Merge module on a regular basis with nature and landscape HDR images and I usually love the results. In my original testing in July 2015 I shared results spanning interior, landscape, and architectural scenarios. Since then I have seen improvements, particularly in noise control - or at least the merge process is no longer creating noise of its own. I still see plenty of artifacts from Lightroom's de-ghosting algorithms, but things have improved. Better yet, the entire Photo Merge process seems to run faster now. Unfortunately, though, I am not seeing improvements in the results I get with interior HDR images.
One of my longest-running and personal favorite projects is creating HDR images of church sanctuaries. And in the purest sense of high dynamic range scenes, many of those spaces are as brutal an environment as you will ever encounter. When I photographed this space yesterday, I did so both with the window shutters open and, as you see here, with the shutters closed. The window glass is tinted in different colors but is essentially clear glass. The right side of the building faces due south and this was shot on a bright day with the sun full on the south side of the building. As bad as the results look here, they are worse in the images shot with the shutters open.
Please note the images shown here are not finished images. The DNG and the TIFF were created from the same set of 15 exposures, captured in 1.0 EV increments, ranging from 1/1250 second to 13 seconds. After each file was merged and imported back into Lightroom, I performed basic toning as a starting point for final processing, and that's what you see here. Both files are still rather flat, but both are ready to go into Photoshop for the final steps.
Here are the new comparisons. After clicking on the first image you can use left and right navigation in the photo viewer to see all images.
Original post from July, 2015:
In a previous article titled Maximizing Your HDR Photography I explained my approach to HDR photography and my method for creating 32-bit floating point TIFFs using Photomatix (see Post Processing segment). Since posting that article I have been asked about using the Lightroom (Lr) PhotoMerge HDR feature introduced with version 6. At the time I had yet to test the PhotoMerge feature thoroughly, but after much excitement and anticipation I have compared the two file formats head-to-head and the results are in! In a word, my initial impression of the Lightroom 6 "HDR DNG raw file" is this: disappointment.
Prior to LR6 being released I had read the new PhotoMerge HDR feature would generate a 32-bit floating point TIFF (which turned out to be false). I was very excited to have another alternative for creating floating point TIFFs, and having that new tool built right into Lightroom sounded awesome for further streamlining my workflow. And while the workflow in itself is simplified quite a bit using the PhotoMerge feature, I find the quality of the resulting image surprisingly disappointing in my experience so far.
Without further ado, let's compare several real-world examples. All files for this comparison were created equally: the photography process behind these projects is identical to the one described in the aforementioned article, and both files in each pair were created from the same set of exposures. Once I had the 32-bit TIFF and the Adobe HDR/DNG files assembled, I developed both in Lightroom 6.1 to match them as closely as I possibly could. All of the comparisons here are 1:1 screen shots of the two files side-by-side as displayed in Lightroom's comparison view. In all examples the Lightroom PhotoMerge HDR/DNG is displayed on the left and the Photomatix 32-bit floating point TIFF on the right. For the purpose of this article I'll refer to these as "DNG" and "TIFF" respectively. Click on all thumbnails for a larger view.
Figure A: 18 exposures at +/- 1.0 EV ranging from 1/1000 second to 90 seconds
Figure A at left is a classic example of my interior HDR photography. This type of space can be difficult to photograph considering the vast differences between highlights and shadows, not to mention trying to manage color correctness given the typical mixed lighting. Having said that, it is paramount that your master tone-blended file be as clean as possible to ensure superior image quality in the end. Figure B below takes a close look at the pros and cons of the DNG and TIFF files for this space, with call-outs addressing each point.
Figure B
Figure C: 13 handheld exposures ranging from 1/8000 to 1/40 second
Figure C shows the final image of a 13-exposure project. While the final image was processed in black and white, the examples used to compare the DNG and TIFF files are shown in color to illustrate the original condition of each file. I am providing two examples for this image in Figure D and Figure E below due to the multiple issues I have observed with the DNG file.
Looking at Figure D:
Figure D
Picking up with Figure E, we move to the cemetery for our next set of comparisons, where unfortunately we find many of the same issues as in Figure D:
Figure E
Figure F: 9 exposures at +/- 1.0 EV
Although I prepared several more examples for this comparison, I realize that continuing would just be beating a dead horse - I'd only be rehashing the same issues over and over. But in closing I would like to provide one last example that seems important to me. Figure F at left is a scene that was practically made for HDR photography; it was assembled from 9 exposures at +/- 1.0 EV. The reason I feel this image is important is that it has a characteristic we deal with often in HDR landscape photography: motion in the scene and the de-ghosting it requires in post processing.
Figure G below shows artifacts in the DNG file from what I suspect is Lightroom's de-ghosting algorithm. I know for sure the clouds were moving, and I think it's likely there was a slight breeze in the trees. The artifacts in the leaves shown in Note A of the DNG file are inexcusable. Not only is the artifact noise horrible, the colors of the leaves are completely different from their surroundings. The same condition is found throughout all the trees in this image. Finally, Note B shows a fair amount of artifact noise around the edges of the clouds. As you might guess, all of the cloud edges look this way.
Figure G
So, in closing, I think it best that I let the pictures speak for themselves. I do have high hopes that Adobe will eventually work out these kinks, but as it stands, at least based on my experiments here, my opinion is that the PhotoMerge HDR feature isn't ready for prime time. I simply place too high a premium on image quality over convenience to consider PhotoMerge a go-to tool for my HDR photography at this time. Here's to hoping ...
Until next time, happy snappin'!
Figure A
For a little background: one of the issues with focus stacking is that the final composite image typically has to be cropped on all four sides to eliminate pixels that show the overlapping of the frames being stacked. Figure A to the right shows a comparison between composites with the normal stacking order on the left and the reversed stacking order on the right. Please ignore the heavy vignette in the comparisons - I'll come back to that in a moment.
Both images above were stacked from the same 124 progressively-focused frames but the two stacking orders show a big difference in results. Look at the top and bottom edges of the image stacked in the normal order. The streaks you see on both edges (most apparent on the bottom) show where all of the overlapping frames occur. The same overlaps also occur on the left and right edges but are somewhat hidden in the black vignette.
Figure B
Figure B at left shows a 1:1 view of the bottom edge, where I measure approximately 480 pixels of overlapping frames. Clearly those overlapped edges get cropped away for the final image, so the point I'm making here is that we lose a lot of image resolution when we have to crop those edges away. The number of pixels you have in overlapped edges may differ from project to project depending on your focusing technique and how many frames you use for the focus stack. For this particular project, the total difference is approximately 960 pixels side-to-side and approximately 640 pixels top-to-bottom. That's a fair amount of resolution to sacrifice right from the beginning.
Notice the overlapped edges are not present in the reversed-order stack. You will also notice the reversed-order stack shows an apparently larger image, as if there is greater magnification in the second image. That apparent difference in magnification is also a result of the stacking order. Looking at these comparisons, reversed-order stacking seems to have a clear advantage - so why not reverse the order of images for all focus-stacking projects? The answer depends on one very important element when capturing the frames to begin with: the technique you use for focusing your lens.
For anyone who isn't familiar with how prime (fixed focal length) lenses like a macro lens focus: The main lens used in this project is an 85mm macro lens. The focal length is 85mm, period - it never changes. However, when the focus is adjusted, glass elements move inside the lens. The elements move in one direction to focus on things farther from the camera, and in the opposite direction to focus on things closer to the camera. So while the focal length is always 85mm, there is a perceptible difference in image magnification as the glass elements move in one direction or the other during focusing.
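To put rough numbers behind that, consider an idealized thin lens - a real internal-focus macro lens is more complicated, so treat this only as an illustration: magnification is m = f / (u - f), where u is the distance to the subject. For an 85mm focal length, a subject at u = 1000mm gives m ≈ 85/915 ≈ 0.09, while a subject at u = 300mm gives m ≈ 85/215 ≈ 0.40. Focusing closer increases magnification, which is why the closer-focused frames in a stack look "bigger".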
Figure C
Figure C at left shows the extremes in focusing that I used in this project. In this comparison the image on the left is the first frame captured which is focused on a point that I decided would be the deepest zone of sharp focus. The image on the right is focused on the closest foreground element. You can see the same apparent difference in magnification as with Figure A above.
Back to the topic at hand, which is choosing the order in which to stack your images. As I mentioned before, it depends on the technique you use for focusing your lens in the first place. During capture you have to adjust lens focus in one of two ways: either front-to-back, meaning you focus on the foreground first and progressively refocus as you work toward the background, or back-to-front, meaning you focus on the background first and progressively refocus as you work toward the foreground. As a matter of habit I typically focus front-to-back. There's no particular reason for it - it's just the habit I got into when I started dabbling with focus stacking in late 2012. But for this project I decided to work back-to-front; for whatever reason I could see things better in that order for this particular subject. When I first imported the files into Zerene Stacker I used the default stack order. After seeing all of the overlapping frame edges I remembered that I had captured the frames in the reverse of my usual order, so I reversed the stack order and allowed Zerene to re-stack the images.
After seeing the results of the reversed-stack composite I understood why Zerene offers the option to reverse the order of the image stack: it accommodates whichever direction you choose to focus your lens. If you're using a different program for focus stacking, I recommend looking into whether it offers something similar.
About that heavy vignette: the main lens used here is a Nikon DX lens shot on a Nikon FX body (in Nikon parlance, DX lenses are designed for 1.5x 'crop' image sensors, while FX denotes a full-frame image sensor). The vignetted area is the difference between the size of the full-frame image sensor and the footprint of the DX lens being used. The camera can auto-crop the full-frame area to fit the DX lens footprint if Auto-DX mode is enabled, which produces 4800 x 3200-pixel images. For this project I disabled Auto-DX mode to see if I could increase the resolution a bit with my own crop, and I was successful. I was able to scrape out 5823 x 3734 pixels for this one, so it does show some nice detail at full size!
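Just for a rough sense of how much resolution that manual crop recovered, here's a quick back-of-the-envelope comparison using the dimensions quoted above:

```python
auto_dx_pixels = 4800 * 3200     # what the Auto-DX crop would have delivered
manual_pixels = 5823 * 3734      # what the manual crop actually delivered

gain = (manual_pixels - auto_dx_pixels) / auto_dx_pixels * 100
print(f"{auto_dx_pixels / 1e6:.1f} MP vs {manual_pixels / 1e6:.1f} MP ({gain:.0f}% more pixels)")
# -> 15.4 MP vs 21.7 MP (42% more pixels)
```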
And finally, here's the full-size final image from this project. Hover over the image for all the juicy details.
I hope you enjoy the image and until next time, Happy Stacking!
One important note before continuing: The focal lengths discussed in this post, specifically where the angle of view is concerned, assume a 35mm (or full-frame) camera.
Figure A: 24-85mm lens shot at 36mm focal length. Figure B: 14-24mm lens shot at 20mm focal length. Figure A and Figure B illustrate the vast difference in presentation that is possible when using different lenses and focal lengths.
To keep the composition of both photographs as consistent as possible, I tried to frame several elements as closely to the same as I could, such as the open area to the right of the cannon, the open area above the roof of the house, and the open area below the wheel closest to the camera. Neither image was cropped in post processing.
You can clearly see the difference in how prominently the house and the cannon are presented between the two images. And depending on the viewer's perception of the apparent size of the objects, one could perceive the cannon and the house as being closer together or farther apart. As the photographer, it is this play on perception that lets you select a lens and focal length to emphasize one subject in a scene over another.
When employing this technique, one important element in lens selection is the lens' angle of view. Figure A was photographed with a 24-85mm lens at a 36mm focal length and Figure B with a 14-24mm lens at a 20mm focal length. The 24-85mm lens has a maximum angle of view of 84 degrees, while the 14-24mm lens extends that to 114 degrees. The difference between the two is best illustrated in Figure B by the apparent increased width of the frame, where the viewer can see more of the trees along the left and right edges. Notwithstanding the increased angle of view, the big difference is how close I was able to get to the cannon with the 14-24mm lens. In my opinion this is the real magic in playing on the perception of distance and scale. I was able to get roughly 10 to 12 feet closer to the cannon in the second photo, which was necessary to maintain the framing of the composition. Figure B demonstrates the big difference in perceivable scale when the foreground element is that much closer to the lens, effectively "compressing" the background into a much smaller space.
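Those angle-of-view figures line up with the standard formula for a rectilinear lens. As a quick sketch (my own illustration, assuming the commonly quoted ~43.3mm full-frame diagonal):

```python
import math

def diagonal_aov_deg(focal_mm, sensor_diagonal_mm=43.3):
    """Diagonal angle of view of a rectilinear lens on a full-frame sensor."""
    return 2 * math.degrees(math.atan(sensor_diagonal_mm / (2 * focal_mm)))

for f in (14, 20, 24, 36):
    print(f"{f}mm -> {diagonal_aov_deg(f):.0f} degrees")
# 14mm -> 114, 20mm -> 95, 24mm -> 84, 36mm -> 62 (approximately)
```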
I like both images equally - one for the greater scale of the house and one for the greater scale of the cannon - but I wouldn't necessarily claim either to have a better composition than the other. "Perception" has been used quite a bit in this article, and perception is very much to each their own. This is an example where one's personal preference in style might dictate which image to hang on the wall.
Fig. A: St. Luke's Episcopal Church - Nikon D800 - Sigma 10-20mm at 10mm - ISO 100 - f/16 - 15 exposures at +/- 1.0 EV ranging from 1/320 to 50 seconds

Too often I hear people asking what the best HDR-imaging software program is. And while it's true that some programs are better than others (we'll get to that later), superior HDR photography begins with precise camera work. And depending on the scene, that could be a lot of camera work. In short, my philosophy is that one must capture the entire dynamic range of a scene in-camera in order to collect the data that is needed later during the tone-blending process. That means if a scene's highlights meter at 1/320 second and the shadows meter at 90 seconds, then so be it. That's why we call it high dynamic range photography, and you will be well served to capture each and every stop of light between 1/320 and 90 seconds. Notice that I said every stop of light, not every 2nd, 3rd, or 4th stop. In the example of 1/320 to 90 seconds, that works out to roughly 15 stops of dynamic range, and you need all of them! My approach has become to capture that range in 1-stop increments. Figure A above is a good example, where 15 exposures were used for tone blending.
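If you want to sanity-check the stop count for a scene you've metered, the arithmetic is simple: the number of stops is log2 of the ratio between the slowest and fastest shutter speeds. A minimal sketch (my own illustration, not code from my workflow):

```python
import math

def stops_between(fast_s, slow_s):
    """EV difference between the metered highlight and shadow exposures."""
    return math.log2(slow_s / fast_s)

def one_stop_bracket(fast_s, slow_s):
    """Shutter speeds, doubling each time, from the highlight exposure up to the shadow exposure."""
    times, t = [], fast_s
    while t <= slow_s * 1.01:      # small tolerance for floating point rounding
        times.append(t)
        t *= 2
    return times

print(round(stops_between(1/320, 90), 1))    # 14.8 -> roughly 15 stops
print(len(one_stop_bracket(1/320, 90)))      # 15 frames (1/320 s up to ~51 s); one more frame reaches ~102 s
```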
So what's the difference in capturing 13, 14, or 15 exposures at +/- 1.0 EV instead of 3 to 5 exposures at +/- 2.0 EV? What's to gain you ask? The answer is simple: more data. And more data equals greater fidelity. And when it comes to post processing (which is obviously a must in HDR photography), your tone-blending software will make much better decisions with more data. A key part in maintaining excellent image quality in post processing is the transition between highlights and shadows and keeping luminance and color noise to an absolute minimum. Consider this: for every stop of light that you don’t capture in-camera, you are relying on your tone-blending software to interpolate the missing luminance and color data for filling in the gaps between widely varying exposure values. That interpolation is where much noise and posterization is generated, and the result is degraded image quality. We could probably all agree the more data a computer has to make decisions, the more accurate the computer’s decisions will be; the same goes for digital images and that philosophy applies doubly when blending color and tones from multiple exposures into a single file.
Fig. B: Duke Chapel - Nikon D800 - Sigma 10-20mm at 10mm - ISO 100 - f/16 - 10 exposures at +/- 1.0 EV ranging from 1/50 second to 10 seconds

We've seen enough HDR images created in recent years using 3 to 5 exposures at +/- 2.0 EV that we've become accustomed to the results produced by that approach. But let's face it, the +/- 2.0 EV approach is just not a one-size-fits-all-dynamic-ranges tool, particularly for many interior spaces. Most church interiors I have photographed run around 13 stops of dynamic range. While the +/- 2.0 EV approach is the quickest route, the highlights and shadows typically get stretched too far during local tonemapping and the midtones are left to bridge the gaps. As I mentioned, that can lead to a lot of posterization, which is never a good thing, especially if you're going for excellent image quality. Figure B above of Duke Chapel is the final result of 10 exposures ranging from 1/50 second to 10 seconds. Figure C below is a screenshot of the 10 exposures that were used. Looking at the first frame you'll see the brightest highlights (the chandeliers) are barely exposed. And if those chandeliers had shown bare light bulbs I would have exposed them even less - maybe as little as 1/800 second. You need that sort of nearly-black frame to protect the highlights when processing your final tone-blended image.
Fig. C

Many folks rely on adjusting the exposure value of a single raw file to produce "multiple exposures" and then blend those files into a faux HDR image. But why do that when you could have captured a few more stops of light to begin with? If you blow the highlights in a single take, they're gone. There is simply no detail in blown highlights, and no matter what adjustments you make to the original RAW file, you will never regain that lost detail. There is only one solution to this problem: more data, and more natural exposures. Don't waste your time adjusting exposure values in multiple RAW files; every exposure adjustment you make to those individual files makes luminance noise more and more likely. Slow down, take your time, and get the files you need from the beginning! Your final project will be much, much cleaner if you put in the necessary work up front.
Fig. D: Lamp Detail

Figure D at right is a 1:1 screenshot showing the type of detail you can retain if you protect the highlights during the initial photography. Click on the thumbnail to enlarge the image and look at the circular grates on the bottom of the lamp. Those grates are concealing bare light bulbs, and while the bulbs aren't visible at this angle, those areas of the lamps are super hot with light. The grate detail is visible only because the initial photography included a nearly black frame (probably around 1/500 second) to protect the highlights of the interior lights.
Fig. E: Christ Episcopal Church - Nikon D7000 - Sigma 10-20mm at 10mm - ISO 100 - f/16 - 13 exposures at +/- 1.0 EV ranging from 1/50 second to 60 seconds

Finally, Figure E at left is tone-blended from 13 exposures. In theory, 4 exposures spaced 4.0 EV apart could have produced the same results, so I tried it with the exact same tonemapping settings I used for the 13 exposures. From a broad perspective, I did get similar results. The differences are in the details, like harsher light falloff directly at the light sources (especially around the windows). The 13-exposure version has a smoother transition to midtones in those areas and the colors of the windows look a tad better. Also, in the 4.0 EV version the brightest highlights (like the light bulbs) aren't quite as white; they have begun to take on that gray tone we see in so many HDR images when the highlights are beaten into submission. Finally, the 13-exposure version gives you a far better signal-to-noise ratio and thus produces much cleaner, less noisy images.
Nikon D800 - Nikon 14-24mm f/2.8 - 14mm - f/16 - ISO100 - 9 exposures at +/- 1.0 EV
One might think this article was written specifically for interior photography, but that's not the case. I regularly use this same technique in landscapes, particularly with waterfalls. I think the biggest challenge we have with HDR photography is movement in the scene between frames, which leads to ghosting in the composite image. You simply have to use your own discretion at the moment of capture as to how much time you can allow to elapse (which affects the number of frames you capture), depending on whatever motion there is in the scene. The waterfall shown here gives the appearance of motion in the water, which is my preference for photographing waterfalls. This was photographed on a very calm morning with no breeze, so there is no apparent motion in the leaves. This waterfall was shot with 9 exposures at +/- 1.0 EV.
Wrapping Up The Photography: The bullets below list my basic workflow for the camera work discussed above.
Post Processing: Where do I begin?! Post processing of HDR images is one of the most hotly contested subjects of the last decade in photography. And there is no question that post processing your HDR images can make or break the whole deal. As you may have guessed by reading my opening statements, I am not a fan of the gritty, grungy, techno-crap tone-mapped images that have given HDR photography such a bad rap. I prefer to process my HDR images to be as realistic as possible - to look as much like a "perfect single exposure" as possible. Of course I have never duplicated what my eyes have seen with an HDR photograph, but I do the best I can based on my memory of the scene and the impression it left upon me. Today there are many programs available for blending all your exposures into a single file, and I won't even begin to review them all. I will simply comment on the tools I use today and what has proven to suit my style the best.
Today I use 32-bit floating point TIFFs exclusively for my HDR composites. I still have Nik HDR Efex Pro2 on my PC, as well as several other capable programs, but I wholly believe that 32-bit floating point TIFF files are the holy grail of HDR post processing. My high-level post processing workflow is:
Raw File Development: For this example let's just say I'm working on a bracket of 12 exposures. Since I have 12 one-stop exposures, that is all the luminosity data I need. I NEVER adjust exposure, contrast, highlights, shadows, whites, blacks, hue, saturation, or lightness during RAW development; I want those elements to stay as "natural" as possible in the individual exposures. This is my RAW development workflow:
Merging 16-bit uncompressed TIFFs to a 32-bit floating point TIFF: For the past couple of years I have been using the free trial version of Photomatix for this function. I do not use Photomatix to make any adjustments to the image; I ONLY use Photomatix to create the 32-bit floating point TIFF. Within Lightroom I have created an export agent that interfaces directly with Photomatix, which helps streamline the process a bit. The process works like this:
NOTE as of 4/26/15: Last week I installed Lightroom 6, which includes the new Photo Merge to HDR feature that will create the 32-bit floating point TIFF. I've used it a couple of times in testing and it looks pretty slick. But I can't comment right now on how well the deghosting feature works, so I'll come back to this at a later time ...
Adjusting the 32-bit Floating Point TIFF in Lightroom: When you first see this file in Lightroom you will probably think it looks terrible! But you now have a single 32-bit TIFF file that contains a tremendous amount of luminance and color data - just think of it as a super-duper RAW file. At this point, the "tonemapping process" is to each their own. My suggestion is to click the "Auto Tone" button in Lightroom just to get a good starting point, and then perform whatever adjustments you feel are necessary. I try to keep things simple when I'm processing the 32-bit file in Lightroom: I'll make basic adjustments to set the white and black points and then exposure, highlights, shadows, and maybe (just maybe) clarity. I'll take a look at color hue, saturation, and lightness, but if I make any adjustments here I typically go easy (save the heavy work for Photoshop). Once I'm done with the 32-bit file in Lightroom: right click on the file > Edit in > Edit in Adobe Photoshop > Edit a copy with Lightroom adjustments.
Adjustments in Photoshop CS6: This process is very individual based on each of our styles for post processing. I won't try to explain all that I do in CS6 because clearly that can get very laborious. I commonly use several targeted color adjustment layers for hue/saturation/lightness. And I will finish the image with whatever luminosity masks I need for fine-tuning exposure and contrast.
And in a nutshell, that's it! :) I would be most appreciative of any feedback you have related to this post.
Wishing you the best of happy photography!
Fig. C | Fig. D
Once you have applied an appropriate mask to your image, release the ALT key (Option key on Mac) and use the Amount, Radius, and Detail sliders to sharpen your image as needed. Although we have masked off a large area of the examples used here, you can still over-sharpen the areas of the image that were visible through the mask. My recommendation is to take it easy … a little dab’ll do ya!
TIP: Holding the ALT key (Option key on Mac) while adjusting the Amount, Radius, and Detail sliders will also reveal a mask that is helpful for evaluating the sharpening effect.
Note: It is important to recognize that applying the sharpening mask does not remove noise/grain from your images – it only prevents the noise/grain from being accentuated when sharpening is applied.
Below is Adobe’s definition of the sliders used for image sharpening in Lightroom:
180-degree panorama stitched from 12 portrait-oriented panels ~ each panel combined from 12 exposures at +/- 1.0 EV ranging from 1/500 to 4 seconds
Workflow Part 1: Get That Lens Level!
There are countless tutorials on the internet that discuss the importance of lens alignment when capturing your panoramic scenes, but I don't recall any that illustrate it visually. Here I explain lens alignment in the context of your final composition. The bottom line is that your lens angle will have a significant impact on your final composition (or final crop). The illustration below is exaggerated to demonstrate several scenarios in which poor lens alignment can cause problems in the overall project.
Fig. A: This example assumes that 5 panels were captured for the panoramic stitch, and assumes the lens is angled upward.

Problem #1 is Perspective Distortion: Consider the effect that an upward camera angle has on vertical elements - the lens will bend verticals inward as the verticals rise from bottom to top.
The primary problem with perspective distortion in panoramic photography is what happens during the stitching process. It is possible for the stitching software to attempt to correct perspective distortion by warping, bending, repositioning, and cropping each panel in order to make all of the panels align. The 5 blue panels shown in Fig. A at left are exaggerated to show gaps that may occur between the panels if the stitching software is successful in correcting all of the vertical perspective distortion. This impacts your final composition in a big way since you now have to crop away those gaps or clone the gaps from other parts of the image.

Problem #2 is Lens Arch: In my opinion, the lens arching as the camera pans is a bigger problem than perspective distortion. I choose to pan my camera from left to right. Fig. A shows the arch that will occur if your lens is angled upward. Your stitched image will show evidence of the arch and you're faced with another big problem for your final composition. Again, you either have to crop away all of the empty space around the stitched image or clone all of the blank spaces from other areas of the image.
Fig. A shows two problems that arise from poor lens alignment when cropping your final composition. The yellow box represents the overall canvas size that your stitching software might create; the initial canvas size is determined by the outer coordinates of the pixels used from each of the individual panels. Now you really only have a few options. 1) Attempt to use the entire canvas by cloning away all of the empty space (the yellow space), or by layering in bits and pieces from your original files. Both can be difficult to achieve. 2) Make a HUGE crop by cropping inside of the stitched pixels. I've seen this done a lot, and it usually creates a terrible aspect ratio and sometimes negates the effort of creating a panoramic image in the first place. 3) Use a crop that is somewhere between options 1 and 2.
NOTE: Depending on the scope of the project, there will be times when the camera must arch. If that is the case, you just need to plan on capturing more rows of panels to fill in the gaps above and below the arch.
Workflow Part 2: Evaluate the Size of the Scene
Fig. B: Evaluate your scene to determine how many panels you will need to capture. Most tutorials I've read about panoramic photography suggest 25% - 30% overlap across the panels. In my opinion, that isn't enough. Again, consider your final composition and final crop. One of the phenomena that occurs in the stitching process is that your panels may take on an oval shape. I don't know the science behind this, but it has something to do with the lens rotating on your tripod (sweeping a cylinder) and the stitching software then trying to make a flat image from something that was captured "in a cylinder". Fig. B shows two crops that are available depending on how many panels were captured. As before, the yellow box represents the overall canvas size, and as before, you either have to crop away the empty spaces or clone them in. Each scenario is fairly self-explanatory. I choose the second scenario by overlapping my panels by 50% so that I have more pixels to fill in the gaps between the panels.
NOTE: Fig. B emphasizes each panel in portrait orientation. I choose to capture my panoramas in portrait orientation so that I can gain maximum height across the image overall. This gives me a lot more freedom with composing in-camera and for final crop.
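If you like to plan the shot count before arriving on site, the rough geometry is easy to estimate. Here's a minimal sketch of that back-of-the-envelope math (my own illustration, not a tool from my workflow) based on the horizontal angle of view of each portrait-oriented frame; real stitchers warp the panels, so treat the result as a lower bound and shoot extra:

```python
import math

def panels_needed(sweep_deg, focal_mm, overlap=0.5, frame_width_mm=24.0):
    """Rough panel count for a horizontal sweep.

    frame_width_mm = 24 assumes a full-frame sensor in portrait orientation
    (the narrow side of the frame faces the direction of the pan).
    """
    aov = 2 * math.degrees(math.atan(frame_width_mm / (2 * focal_mm)))
    new_coverage_per_panel = aov * (1 - overlap)
    return max(1, math.ceil((sweep_deg - aov) / new_coverage_per_panel) + 1)

print(panels_needed(180, 24, overlap=0.5))   # 6 panels in this simplified model
print(panels_needed(180, 50, overlap=0.5))   # 13 panels - longer lenses need many more
```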
Part 3: Evaluate the Overall Exposure
I consider how the exposure may vary across the entire scene. Depending on your angle to the light, one side of the scene can be much brighter than the other. It is a matter of personal preference how you expose each of the panels, but you need to be aware of how the light changes across the scene so that you can decide whether your exposures should vary as you pan the camera. I make this decision based on the complexity of the light for each scene. Below are two examples to make the point.
Fig. C: In this blue-hour scene the sun was very low on the horizon. The sun is just out of the frame to the left. By evaluating this scene I found that the overall exposure varied by two full stops from left-to-right, and in this particular case I chose to vary my exposures to compensate for the changes in light. Of course it may have been more "realistic" to photograph the scene as it was but I considered that the right side of the scene may have been very dark and may have upset the final product. Right or wrong, it is your choice how to manage this problem. Fig. D illustrates what the scene may have looked like if I had not varied my exposures while panning.
Fig. C | Fig. D
Fig. E: The five panels that were used for stitching
Part 3A: Evaluate the Overall Exposure, continued
To "HDR", or not to "HDR" - that is the question. When it comes to panoramic photography, I approach the in-camera work the same as I would for a single-frame image. Depending on the challenges for a given scene, I will photograph the scene one of two ways: 1) The stitch will use single-exposure panels. Or, 2) The stitch will use panels that are created from multiple-exposure tone-blended images. I consider the techniques I use for HDR photography as an arrow in my quiver. I simply use the tool that I need to complete the vision I have for a given scene. Although I regularly rely on the dynamic range of my raw files for single-exposure images I have found that I can get far superior results (in terms of image quality) by using my HDR techniques if and when the scene demands it. The cityscape image above is a good example of why I might choose the HDR route. The bottom line is that I wanted to see a lot of detail in the buildings, bridges, and reflections in the water but the buildings were mostly back lit given the position of the sun. Although I could have stretched the tones in single-exposure raw files, taking the HDR route gave me much better control over the signal-to-noise ratio. I metered 5 stops of difference between the highlights and shadows in this scene, and that's what each panel received - 5 exposures at +/- 1.0 EV. Fig. F below illustrates the dynamic range that was captured for each panel.
Fig. F
Fig. G

Most of my panoramas are stitched from single-exposure panels. This is especially easy to achieve on overcast days. The light in Fig. G to the right also varied in overall EV from side to side, but only by approximately 2/3 of a stop. I knew I could easily adjust the highlights and shadows of each raw file to compensate for the differences in light, so in this case I locked the exposure and simply panned the camera.
Part 4: Determine the Focus Point
I have had cases where I was able to use a locked focus throughout the capture process, and cases where I chose to adjust the focus point as I went.

Using a locked focus point: The cityscape above (Fig. C) is an example where I used a single AF point that was locked throughout the capture. A good understanding of hyperfocal distance can be very useful here. The camera is approximately 1,600 feet from the cluster of buildings in the middle of the frame, and I chose to photograph the scene at 24mm. Frankly, at that distance and at 24mm, I could have shot it at f/2.8 and still had sharp results, but in this case I shot at f/11 given the anticipated sharpness of that lens at 24mm. Either way I knew the hyperfocal distance would carry the entire depth of the scene. So I used a single AF point, locked focus on the buildings, and then switched the lens to manual focus. If you do this, just be careful not to touch the focus ring during the capture or you'll basically have to start all over. In cases like this I have placed a piece of tape across the focus ring to prevent accidentally changing the focus.

Using a variable focus point: I varied the focus throughout the capture process in Fig. G above. I chose to do that because much of the subject was physically closer to the camera than the rest. In this case I used the full AF array and refocused the lens for each panel; in a few panels I refocused multiple times to balance the number of AF points locked onto the subject. As I mentioned before, I overlap my panels by as much as 50%. Some stitching programs use only the sharpest pixels from the overlapping frames when merging the panels, as was the case with CS5 at the time this image was stitched. Assuming CS5 did its job, Fig. G is comprised of only the sharpest pixels front-to-back, side-to-side, and top-to-bottom.
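The hyperfocal numbers back that up. Here's a minimal sketch using the standard hyperfocal formula; the 0.03 mm circle of confusion is a common full-frame assumption on my part, not a value from the original shoot:

```python
def hyperfocal_m(focal_mm, aperture, coc_mm=0.03):
    """Hyperfocal distance in meters: H = f^2 / (N * c) + f."""
    return (focal_mm**2 / (aperture * coc_mm) + focal_mm) / 1000.0

print(round(hyperfocal_m(24, 11), 2))    # ~1.77 m: focus there and roughly 0.9 m to infinity is acceptably sharp
print(round(hyperfocal_m(24, 2.8), 2))   # ~6.9 m: even wide open, buildings ~1,600 ft (~490 m) away sit far beyond it
```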
Part 5: Set your White Balance to MANUAL
If you're not familiar with white balance, I suggest reading this article. I've seen a lot of panoramic images ruined by shooting on auto white balance. With auto white balance, the camera may pick up a shift in light quality or color as it pans, and suddenly you have one or more panels that don't match the white balance of the others. Using manual white balance reduces the risk of this happening. Fig. C and Fig. G above were both captured using the "cloudy" white balance preset.
Part 6: Capture the Scene
Part 7: Post Processing
Post processing is probably the biggest area where people's processes vary. Since I shoot in raw, I intend to maximize the quality of the stitch by processing each raw file prior to stitching. Above I mentioned single-exposure projects versus HDR projects. My raw processing for each scenario looks something like this:
A) Single-exposure panels: Lens corrections - white balance - highlights - shadows - contrast - clarity - hue/saturation/lightness of color - noise reduction - output sharpening. Lightroom makes it easy to copy and paste the adjustment settings to multiple files. So I only manually adjust one panel and copy and paste the settings to the other panels.
B) Multiple-exposure tone-blended panels: Lens corrections - white balance - clarity - noise reduction - output sharpening. Since each panel will be a blend of multiple exposures, I don't adjust highlights, shadows, contrast, or colors before stitching. All of that information comes from the multiple exposures that are about to be blended for each panel.
Stitching the Panels: There are a lot of fine stitching programs on the market. Today I use only two: 1) Photoshop's PhotoMerge feature, which also offers an option to blend using the sharpest pixels from the overlapped areas of each panel. 2) Microsoft ICE, which is freeware, is a snap to use, and has some fantastic save-as options, like being able to save the panorama as Photoshop layers (PSD or PSB files).
Final Post Processing: Once I have the final stitch, I determine what to crop away and what areas to clone in if needed. I save the cropped image as a Photoshop file and begin making my global layers adjustments using a series of luminosity masks, HSL adjustment, etc. When I'm finished with the editing I save the final image as a 16-bit uncompressed TIFF and bring it back into Lightroom so that I can apply (non-destructive) output sharpening and vignetting (which is conditional).
Please use the comments form below for any questions related to this article.
This method can be achieved in any software that offers HSL (hue, saturation, lightness) adjustments. The basis of this method is simply to adjust the luminosity (or brightness) of the colors to achieve the black and white balance and contrast that suits your preference. To say that another way, when you decrease or increase the luminosity of a specific color within a black and white image, the corresponding tones become darker or lighter.
Today I use Lightroom for approximately 75% of my black and white conversion work. To the left is a screenshot of Lightroom's adjustment panel for B&W images. The appearance of the HSL panel may vary between image editing programs but typically they all do the same job. The adjustment sliders you see to the left represent the luminosity of the primary and secondary colors that may be present in a photograph. Look at the left and right ends of each slider and you will notice they are darker on the left and brighter on the right. One simply moves the slider to the left or right to adjust the luminosity of that color.
The sunflower photographs below illustrate how we might adjust the luminosity of a color to achieve different effects. Image A is the original color photo. Image B shows the photo converted to black and white by only desaturating the image (simply removed the color). Note the HSL adjustments for image B; they are still sitting in the middle which represents the "natural tone" of each color when simply desaturating the photograph. One of my personal preferences when converting to black and white is to use a dark sky. Image C shows how that is done by reducing the luminosity of the blue channel.
A) Original photo | B) All color desaturated, but with no tonal adjustments to any color | C) Tones for blues and aqua reduced to create a much darker sky |
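To make the idea concrete, here is a minimal, hypothetical sketch of the concept in Python - this is not Lightroom's actual algorithm, and the band boundaries and slider values are made up for illustration. Each pixel is converted to hue/lightness/saturation, its hue is matched to a color band, and a per-band "slider" scales the lightness before the pixel is written out as a gray value:

```python
import colorsys

# Hypothetical per-color luminance "sliders" in the range -1.0 .. +1.0
# (darken the blues and aquas for a dark sky, as in image C above).
SLIDERS = {"red": 0.0, "orange": 0.0, "yellow": 0.0, "green": 0.0,
           "aqua": -0.7, "blue": -0.7, "purple": -0.2, "magenta": 0.0}

BANDS = ["red", "orange", "yellow", "green", "aqua", "blue", "purple", "magenta"]

def band_for_hue(hue):
    """Crudely map a hue in [0, 1) to one of eight color bands."""
    return BANDS[int(hue * len(BANDS)) % len(BANDS)]

def bw_value(r, g, b):
    """Convert one RGB pixel (0-1 floats) to a black-and-white value."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    adjust = SLIDERS[band_for_hue(h)] * s   # saturated colors respond more strongly
    return max(0.0, min(1.0, l * (1.0 + adjust)))

print(bw_value(0.2, 0.4, 0.9))   # a saturated sky-blue pixel comes out much darker than a plain desaturation would leave it
```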
Beware of Posterization
One of the problems with many black and white conversion methods is posterization occurring during the conversion process. Look at the basic HSL adjustment panel below and notice how the colors flow within the adjustment sliders: Orange occurs between Red and Yellow, Blue occurs between Aqua and Purple, and so on. Posterization can occur if you create a wide divide in the tones that lie very close to or in between the primary and secondary colors. The posterization you see in the image to the left - the banding that occurs when tones are stretched too far - is an exaggerated example of the nasty effect.
The Reds: Look at the roof of the barn and the roof of the smaller building to the far left. Both buildings have rusty metal roofs which contain a high amount of red. I have pushed the reds to +100 to make the roofs' tones flow with their surroundings to suit my preference. Those roofs could easily be pushed to black by setting the slider to -100.

The Yellows: The yellows in this image appear in four primary areas - the fence, the dirt and grass in the middleground, the wood siding of the barn, and the highlights in all the trees. For the most part I set the yellows to +100 to bring out the highlights in the trees and barn siding but again, that was only my preference. It is quite possible that you would have interpreted those tones differently to suit your own taste.
You can see how I handled any concern of posterization in this image by adjusting the Aqua and Purple sliders. I have reduced the luminosity of both colors along with the blues to help create a smooth gradient, or transition, of tones between the three colors.
Why go through all the trouble? Have you ever tried to see in black and white? I know that sounds crazy, and admittedly it can be difficult to imagine seeing the world in black and white. But the idea is to look beyond the color saturation and begin comparing tones. When you begin to think in this manner you quickly realize how similar the tones are in things all around us. Let me provide a good example with the comparison shown here. The black and white image is a simple desaturation of the color photo - the color was simply removed and no tonal adjustments were made. In the color photo we easily recognize the difference between the flower and the brick, but when we strip away the saturation you can see just how close the tones of the flower and the brick really are.
The conversion method I've discussed here is the solution to problems like this and allows you a tremendous amount of latitude for interpreting a scene many different ways.