Kevin Childress Photography: Blog
https://www.kdcphoto.com/blog

Lightroom Photo Merge DNG versus 32-bit Floating Point TIFF
https://www.kdcphoto.com/blog/2017/1/32-bit-comparison

UPDATED January 20, 2017: The original post from July, 2015 follows the page break below.

In July, 2015 I posted results from my early testing of Lightroom 6.1's Photo Merge to HDR feature in comparison to 32-bit floating point TIFFs created in Photomatix. Since then I have continued to use the Photo Merge tool with mixed results. This post shares an updated set of side-by-side comparisons from images I shot just yesterday, so these results are current for Lightroom CC 2015.8. In the context of this post, where I speak of DNG files, I am referring to DNG files created within Lightroom's Photo Merge module only, not DNG files converted from any other camera raw format. And where I speak of TIFF files, I am referring to 32-bit floating point TIFFs created in Photomatix.

Each of the updated images below includes easy-to-understand call-outs with the specific comparisons, so I won't re-type all the comparisons here. What you will see are examples where Lightroom DNGs show poor detail retention, false aberrations and color banding, desaturated color, and newly created artifacts.

For the record, let me be clear that I'm not saying all is bad with Lightroom's DNG/HDR files. I use the Photo Merge module on a regular basis with nature and landscape HDR images and I usually love the results. In my original testing in July, 2015 I shared results that spanned scenarios including interior, landscape, and architectural elements. Since then I have seen improvements, particularly in noise control, or at least in the merge process no longer creating noise of its own. I still see a lot of artifacts created by Lightroom's de-ghosting algorithms, but things have improved. Best yet, the entire Photo Merge process seems to run faster now. Unfortunately, I am not seeing improvements in the results I get with interior HDR images.

One of my longest-running and personal favorite projects is creating HDR images of church sanctuaries. In the purest sense of high dynamic range scenes, many of those spaces are as brutal an environment as you will ever encounter. When I photographed this space yesterday, I did so with the window shutters open and, as you see here, with the shutters closed. The window glass is tinted in different colors but is essentially clear glass. The right side of the building faces due south, and this was shot on a bright day with the sun full-on the south side of the building. As bad as the results look here, the results are worse with the images shot with the shutters open.

Please note the images shown here are not finished images. The DNG and the TIFF were created from the same set of 15 exposures, captured in 1.0 EV steps ranging from 1/1250-second to 13 seconds. After each file was merged and imported back into Lightroom, I performed basic toning as a starting point for final processing, and that's what you see here. Both files are still rather flat, but both are ready to go into Photoshop for the final steps.
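As a quick sanity check, a ladder of ideal one-stop doublings starting at 1/1250-second lands near 13 seconds at exactly 15 frames (real cameras round each step to the nearest standard shutter mark). A minimal Python sketch:

```python
# A 1.0 EV bracket from 1/1250 s upward, assuming ideal doublings.
fastest = 1 / 1250
bracket = [fastest * 2 ** i for i in range(15)]  # 15 frames span 14 stops
for t in bracket:
    print(f"{t:.4g} s")                          # ends near 13.1 s
```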

Here are the new comparisons. After clicking on the first image you can use left and right navigation in the photo viewer to see all images.

Original post from July, 2015:

In a previous article titled Maximizing Your HDR Photography I explained my approach to HDR photography and my method for creating 32-bit floating point TIFFs using Photomatix (see the Post Processing segment). Since posting that article I have been asked about the Lightroom PhotoMerge HDR feature introduced with version 6. At the time I had yet to test the PhotoMerge feature thoroughly, but after much excitement and anticipation I have compared the two file formats head-to-head and the results are in! In a word, my initial impression of the Lightroom 6 "HDR DNG raw file" is this: disappointment.

Prior to Lightroom 6 being released I had read that the new PhotoMerge HDR feature would generate a 32-bit floating point TIFF (which turned out to be false). I was very excited to have another alternative for creating floating point TIFFs, and having that new tool built right into Lightroom sounded awesome for further streamlining my workflow. And while the workflow itself is simplified quite a bit using the PhotoMerge feature, I find the quality of the resulting image surprisingly disappointing in my experience so far.

Without further ado, let's compare several real-world examples. All files for this comparison were created equally. The photography processes that went into these projects are identical to those described in the aforementioned article, and both files were created using the same set of exposures. Once I had the 32-bit TIFF and the Adobe HDR/DNG files assembled, I developed both in Lightroom 6.1 to match them as closely as I possibly could. All of the comparisons made here are 1:1 screenshots of the two files side-by-side as displayed in Lightroom's comparison view. In all examples the Lightroom PhotoMerge HDR/DNG is displayed on the left, and the Photomatix 32-bit floating point TIFF is displayed on the right. For the purpose of this article I'll refer to these as "DNG" and "TIFF" respectively. Click on all thumbnails for a larger view.

Figure A: 18 exposures at +/- 1.0EV ranging from 1/1000 to 90 seconds.
Figure A

Figure A at left is a classic example of my interior HDR photography. This type of space can be difficult to photograph considering the vast differences between highlights and shadows, not to mention trying to manage color correctness given the typical mixed lighting. That said, it is paramount that your master tone-blended file be as clean as possible to ensure superior image quality in the end. Figure B below takes a close look at the pros and cons of the DNG and TIFF files for this space. To quickly address each point:

  • Note A: It is typical to have light spill and color spill when photographing stained glass windows. It is not typical for that light spill to form with hard edges as it has in the DNG file. The TIFF file shows a more natural and smoother transition in the light and color spilling from the glass. Note this occurs around all of the windows.
  • Note B: Looking at the DNG file, notice the band of red color cast that has collected across the top of the pillar, and notice that the color cast did not occur in the TIFF file. There was certainly no red light in that area. It actually appears as if the Adobe tone-blending algorithms created a chromatic aberration at that spot.
  • Note C: Similar to Note B, there is an orange and red halo emanating from the lamp in the DNG file. Clearly the TIFF file looks far better, with no color cast and with nicer, crisper edges around the lamp's framework.
  • Note D: Look at the detail in the glass in the DNG file. Or more accurately, notice the lack of detail in the glass in the DNG file. Note that all of the glass looks the same. As I mentioned earlier, both files were created with the same set of exposures. And although the highlights were protected with a 1/1000-second exposure, the DNG file does not contain the tonal range needed to fully contain the highlights in the windows. Aside from the shortcomings listed in notes A through C, blown-out highlights are a showstopper for me where my interior photography is concerned. Clearly the TIFF file wins this contest.
Figure B

Figure C: 13 handheld exposures ranging from 1/8000 to 1/40 second
Figure C

Figure C shows the final image of a 13-exposure project. While the final image was processed in black and white, the examples used to compare the DNG and TIFF files are shown in color to illustrate the original condition of each file. I am providing two examples for this image in Figure D and Figure E below due to the multiple issues I have observed with the DNG file.

Looking at Figure D:

  • Note A: One of the things I've admired in Lightroom is the program's ability to reduce color noise. But in using the PhotoMerge HDR feature I find it curious that Lightroom actually generates color noise. Look in the shadow under the eave and notice all of the green color noise, or green speckling. Note this doesn't occur in the TIFF comparison.
  • Note B: We're looking at detail and contrast in the shingles. The TIFF file wins again.
  • Note C: Notice the clarity of the glass in the TIFF file. The DNG file is rife with color noise and luminosity noise.
  • Note D: Again, lots of color noise and luminosity noise in the siding of the DNG file compared to the smooth color and tone in the TIFF. Also note the clean edges of the siding in the TIFF. 
Figure D

Picking up with Figure E, we move to the cemetery for our next set of comparisons, where unfortunately we find many of the same issues as with Figure D. The issues are:

  • Note A: Notice that orb of light on the grave stone. Why did the DNG create that orb of light? There were no lens flares in any of the exposures used for this project. I'll take the TIFF ...
  • Note B: Color noise, luminosity noise, and more color noise and luminosity noise. Photomatix clearly does a far superior job in reducing luminosity noise in any underexposed file.
  • Note C: If you look closely you can read the inscription on the grave stone in the TIFF file. The DNG completely lost this detail in the noise.
  • Note D: You may have noticed in the photo caption that I shot this handheld. That being so, both programs had to align the files when creating the composite. The DNG actually has a slight edge here for aligning the detail in the leaves. Finally, I see something positive in the DNG!
Figure E

Figure F: 9 exposures at +/- 1.0 EV
Figure F

Although I prepared several more examples for this comparison, I realize I'm beating a dead horse and rehashing the same issues over and over. But in closing I would like to provide one last example that seems important to me. Figure F shown at left is a scene that was practically made for HDR photography. This image was assembled from 9 exposures at +/- 1.0 EV. The reason I feel this image is important is that it deals with a characteristic we encounter often in HDR landscape photography: motion of objects in the scene and the de-ghosting it requires in post processing.

Figure G below shows artifacts in the DNG file from what I suspect is Lightroom's de-ghosting algorithm. I know for sure the clouds were moving, and I think it's likely there was a slight breeze in the trees. The artifacts in the leaves shown in Note A of the DNG file are inexcusable. Not only is the artifact noise horrible, the colors of the leaves are completely different from their surroundings. The same condition is found throughout all the trees in this image. Finally, Note B shows a fair amount of artifact noise around the edges of the clouds. As you might guess, all of the cloud edges look this way.

Figure G

So, in closing, I think it best that I let the pictures speak for themselves. I do have high hopes that eventually Adobe will work out these kinks. But as it stands, at least based on my experiments here, it is my opinion that Lightroom's PhotoMerge HDR feature isn't ready for prime time. I simply place too high a premium on image quality over convenience to consider PhotoMerge as a go-to tool for my HDR photography at this time. Here's to hoping ...

Until next time, happy snappin'!

Reversing Order For Focus-Stacked Images
https://www.kdcphoto.com/blog/2015/8/stacking

This article discusses techniques for capturing and combining multiple frames for focus-stacking photography. As the title suggests, we will take a particular look at the order in which you import and stack your images into your focus-stacking software. I discovered a bit of this information a couple of days ago while working on my latest macro image. I've been using Zerene Stacker for focus-stacking for the last several months, and when I first obtained the program I noticed an option for reversing the order of the image stack (where the 'normal' stacking order follows file names progressively), but I hadn't experimented with the option until assembling this project.

Figure A

For a little background: one of the issues with focus stacking is that the final composite image typically has to be cropped on all four sides to eliminate pixels that show the overlapping edges of the stacked frames. Figure A to the right shows a comparison between composites with the normal stacking order on the left and the reversed stacking order on the right. Please ignore the heavy vignette in the comparisons - I'll come back to that in a moment.

Both images above were stacked from the same 124 progressively-focused frames but the two stacking orders show a big difference in results. Look at the top and bottom edges of the image stacked in the normal order. The streaks you see on both edges (most apparent on the bottom) show where all of the overlapping frames occur. The same overlaps also occur on the left and right edges but are somewhat hidden in the black vignette.

Figure B

Figure B at left shows a 1:1 view of the bottom edge, where I measure approximately 480 pixels of overlapped frame edges. Those overlapped edges get cropped away for the final image, so the point I'm making is that we lose a lot of image resolution when we have to crop away those edges. The number of pixels you lose to overlapped edges may differ from project to project depending on your focusing technique and how many frames you use for the focus stack. For this particular project, the total difference is approximately 960 pixels side-to-side and approximately 640 pixels top-to-bottom. That's a fair amount of resolution to sacrifice right from the beginning.
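To put that loss in perspective, here is a minimal sketch of the arithmetic, assuming a hypothetical 6144 x 4096-pixel pre-crop composite (the 960 and 640-pixel figures are the overlap losses quoted above):

```python
# Rough cost of cropping the overlapped edges (frame size is assumed).
w, h = 6144, 4096                      # hypothetical pre-crop dimensions
kept = (w - 960) * (h - 640)           # pixels left after trimming overlap
print(f"retained {kept / (w * h):.0%} of the original pixels")  # ~71%
```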

Notice the overlapped edges are not present in the reversed-order stack. You will also notice the reversed-order stack shows an apparently larger image, as if there is greater magnification in the second image. The apparent difference in magnification is also a result of the stacking order. Looking at these comparisons, reversed-order stacking seems to have a clear advantage, so why not reverse the order of images for all focus-stacking projects? The answer depends on one very important element in capturing the frames to begin with: the technique you use for focusing your lens.

For anyone unfamiliar with how prime (fixed focal length) lenses like a macro lens focus: The main lens used in this project is an 85mm macro lens. The focal length is 85mm, period. The focal length never changes. However, when the focus is adjusted, glass elements move inside the lens. The elements move in one direction to focus on things farther from the camera, and in the opposite direction to focus on things closer to the camera. So while the focal length is always 85mm, there is a perceptible difference in image magnification (often called focus breathing) as the glass elements move in one direction or the other during focusing.

Figure C

Figure C at left shows the extremes in focusing that I used in this project. In this comparison the image on the left is the first frame captured which is focused on a point that I decided would be the deepest zone of sharp focus. The image on the right is focused on the closest foreground element. You can see the same apparent difference in magnification as with Figure A above. 

Back to the topic at hand, which is choosing the order in which to stack your images. As I mentioned before, it depends on the technique you use for focusing your lens in the first place. During capture you have to adjust lens focus in one of two ways: either front-to-back, meaning you focus on the foreground first and progressively refocus as you work toward the background; or back-to-front, meaning you focus on the background first and progressively refocus as you work toward the foreground. As a matter of habit I typically focus front-to-back. There's no particular reason for that - it's just the habit I got into when I started dabbling with focus stacking in late 2012. But for this project I decided to work back-to-front; for whatever reason I could see things better in that order for this particular subject. When I first imported the files into Zerene Stacker I used the default stack order. After seeing all of the overlapping frame edges I remembered that I had captured the frames in the reverse of my typical order, so I reversed the stack order and allowed Zerene to re-stack the images.

After seeing the results of the reversed-stack composite I then understood why Zerene would offer the option for reversing the order of the image stack. The option is definitely accommodating to whichever direction you choose to focus your lens. If you're using a different program for focus stacking I recommend looking into a way to achieve similar results within that program.

About that heavy vignette: The main lens used here is a Nikon DX lens shot on a Nikon FX body (in Nikon-speak, DX lenses are designed for 1.5x 'crop' image sensors and FX denotes a full-frame image sensor). The vignetted area is the difference between the size of the full-frame image sensor and the footprint of the DX lens being used. The camera can auto-crop the full-frame area to fit the DX lens footprint if the Auto-DX mode is enabled, which produces 4800 x 3200-pixel images. For this project I disabled the Auto-DX mode to see if I could increase the resolution a bit with my own crop, which I was successful in doing. I was able to scrape out 5823 x 3734 pixels for this one, so it does show some nice detail at full size!

Tiger Swallowtail on Crape Myrtle Blossom
Nikon D800 ~ Nikon 50mm f/1.8 (wide open) reversed upon Nikon Macro 85mm f/3.5 at f/22 ~ ISO100 ~ 124 frames, exposed at 0.8 seconds, with lens travel adjusted in 0.06mm increments
Final Image

And finally, here's the full-size final image from this project. Hover over the image for all the juicy details.

I hope you enjoy the image and until next time, Happy Stacking!

Lens Selection, Focal Length, And Perceived Distance To An Object
https://www.kdcphoto.com/blog/2015/5/focal-length

When discussing this subject it is common to hear a lot of big words like perspective distortion, extension distortion, compression distortion, and several variations thereof. My goal with this post is not to elaborate on all the technical terms but rather to provide a simple, real-world example of how lens selection and different focal lengths can be used to change how one perceives the distance between objects within a photograph. This technique is very effective for emphasizing one component of a scene over another by changing the perceived size of, and perceived distance to, a subject relative to the camera.

One important note before continuing: The focal lengths discussed in this post, specifically where the angle of view is concerned, assume a 35mm (or full-frame) camera. 

Figure A: 24-85mm lens shot at 36mm focal length.
Figure B: 14-24mm lens shot at 20mm focal length.

Figure A and Figure B illustrate the vast difference in presentation that is possible when using different lenses and focal lengths.

To keep the composition of both photographs as consistent as possible, I tried to frame several elements as closely to the same as I could, such as the open area to the right of the cannon, the open area above the roof of the house, and the open area below the wheel closest to the camera. Neither image was cropped in post processing. 

You can clearly see the difference in how prominently the house and the cannon are presented between the two images. And depending on the viewer's perception of the apparent size of the objects, one could perceive the distance between the cannon and the house is closer together or farther apart. And as the photographer it is this play on perception that enables you to select a lens and focal length for emphasizing one subject in a scene over another. 

When employing this technique, one important element in lens selection is a lens's angle of view. Figure A was photographed with a 24-85mm lens at 36mm focal length, and Figure B was photographed with a 14-24mm lens at 20mm focal length. The 24-85mm lens has a maximum angle of view of 84 degrees, whereas the 14-24mm lens extends the angle of view to 114 degrees. The difference between the two angles is best illustrated in Figure B by the apparent increased width of the frame, where the viewer can see more of the trees along the left and right edges. Notwithstanding the increased angle of view, the big difference is how close I was able to get to the cannon with the 14-24mm lens. In my opinion this is the real magic in playing on the perception of distance and scale. I was able to get roughly 10 to 12 feet closer to the cannon in the second photo, which was necessary to maintain the framing of the composition. Figure B demonstrates the big difference in perceivable scale where the foreground element was so much closer to the lens, effectively "compressing" the background into a much smaller space.
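Those two angle figures follow from the standard diagonal angle-of-view formula, AoV = 2 x arctan(diagonal / (2 x focal length)). A minimal sketch, assuming a full-frame 36 x 24mm sensor:

```python
import math

# Diagonal angle of view for a full-frame (36 x 24 mm) sensor.
def angle_of_view(focal_mm, diagonal_mm=math.hypot(36, 24)):
    return math.degrees(2 * math.atan(diagonal_mm / (2 * focal_mm)))

print(f"{angle_of_view(24):.0f} degrees at 24mm")  # ~84 degrees
print(f"{angle_of_view(14):.0f} degrees at 14mm")  # ~114 degrees
```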

I like both images equally - one for the greater scale of the house and one for the greater scale of the cannon. But I wouldn't claim either to have a better composition than the other. "Perception" has been used quite a bit in this article, and perception is very much one's own. This is an example where personal preference in style might dictate which image one would hang on the wall.

Maximizing Your HDR Photography
https://www.kdcphoto.com/blog/2015/4/hdr-photography

In the world of photography, one can take several different paths to reach similar end results, and nothing could be more true than with HDR (High Dynamic Range) photography. And in the world according to me there are good, better, and best techniques for producing those HDR images. This article discusses what I believe is the best approach, using techniques I have developed over the last several years with lots of practice along the way. If your goal is to produce gritty, grimy, techno-crap images, then you can stop here. If your goal is to produce technically superior HDR images, read on!

Fig. A: St. Luke's Episcopal Church in Lincolnton, NC
Nikon D800 - Sigma 10-20mm at 10mm - ISO100 - f/16 - 15 exposures at +/- 1.0 EV ranging from 1/320 - 50 seconds

Too often I hear people asking what is the best HDR-imaging software program. And while it's true that some programs are better than others (we'll get to that later), superior HDR photography begins with precise camera work. And depending on the scene, that could be a lot of camera work. In short, my philosophy is that one must capture the entire dynamic range of a scene in-camera in order to collect the data needed later during the tone-blending process. That means that if a scene's highlights meter at 1/320 second, and the shadows meter at 90 seconds, then so be it. That's why we call it high dynamic range photography, and you will be well served to capture each and every stop of light between 1/320 and 90 seconds. Notice that I said every stop of light, not every 2nd, 3rd, or 4th stop. In the example of 1/320 to 90 seconds, let's call that 16 stops of dynamic range, and you need all of them! My approach has become to capture those stops of dynamic range in 1-stop increments. Figure A above is a good example, where 15 exposures were used for tone blending.

So what's the difference in capturing 13, 14, or 15 exposures at +/- 1.0 EV instead of 3 to 5 exposures at +/- 2.0 EV? What's to gain, you ask? The answer is simple: more data. And more data equals greater fidelity. When it comes to post processing (which is obviously a must in HDR photography), your tone-blending software will make much better decisions with more data. A key part of maintaining excellent image quality in post processing is the transition between highlights and shadows, and keeping luminance and color noise to an absolute minimum. Consider this: for every stop of light that you don't capture in-camera, you are relying on your tone-blending software to interpolate the missing luminance and color data to fill the gaps between widely varying exposure values. That interpolation is where much noise and posterization is generated, and the result is degraded image quality. We can probably all agree that the more data a computer has for making decisions, the more accurate its decisions will be; the same goes for digital images, and that philosophy applies doubly when blending color and tones from multiple exposures into a single file.
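The stop-counting itself is simple logarithm arithmetic. A minimal sketch using the 1/320-to-90-second example above (which works out to roughly 15 stops, rounded to 16 in the text):

```python
import math

# Stops of dynamic range between the metered highlight and shadow exposures.
fastest, slowest = 1 / 320, 90              # seconds
stops = math.log2(slowest / fastest)        # ~14.8 stops
print(f"{stops:.1f} stops -> {math.ceil(stops) + 1} frames at 1.0 EV spacing")
```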

Fig. B: Duke Chapel in Durham, NC
Nikon D800 - Sigma 10-20mm at 10mm - ISO100 - f/16 - 10 exposures at +/- 1.0 EV ranging between 1/50th to 10 seconds

We've seen enough HDR images created in recent years using 3-to-5 exposures at +/- 2.0 EV that we've become accustomed to the results produced by that approach. But let's face it, the +/- 2.0 EV approach is just not the perfect one-size-fits-all-dynamic-ranges tool, particularly for many interior spaces. Most church interiors I have photographed run around 13 stops of dynamic range. While the +/- 2.0 EV approach is the quickest route, the highlights and shadows typically get stretched way too far during local tonemapping and the midtones are left to bridge the gaps. As I mentioned, that can lead to a lot of posterization, which is never a good thing, especially if you're going for excellent image quality. Figure B above of Duke Chapel is the final result of 10 exposures ranging between 1/50th and 10 seconds. Figure C below is a screenshot of the 10 exposures that were used. Looking at the first frame you'll see the brightest highlights (the chandeliers) are barely exposed. And if those chandeliers had shown bare light bulbs I would have exposed them even less - maybe as little as 1/800 second. You need that sort of nearly-black frame to protect the highlights when processing your final tone-blended image.

Fig. C

Many folks rely on adjusting the exposure value of a single raw file to produce "multiple exposures" and then blend those files into a faux HDR image. But why do that when you could have captured a few more stops of light to begin with? If you blow the highlights in a single take, they're gone. There is simply no detail in blown highlights, and no matter what adjustments you make to the original RAW file, you will never regain that lost detail. There is only one solution to this problem: more data, and more natural exposures. Don't waste your time adjusting exposure values in multiple copies of a RAW file. Every exposure adjustment you make to those individual files introduces more and more likelihood of luminance noise. Slow down, take your time, and get the files you need from the beginning! Your final project will be much, much cleaner if you put in the necessary work up front.

Fig. D: Lamp Detail

Figure D at right is a 1:1 screenshot showing the type of detail you can retain if you protect the highlights during the initial photography. Click on the thumbnail to enlarge the image and look at the circular grates on the bottom of the lamp. Those grates are concealing bare light bulbs, and while the bare bulbs aren't visible at this angle, those areas of the lamps are super hot with light. The grate detail is visible at all only because the initial photography included a nearly black frame (probably around 1/500 second) to protect the highlights of the interior lights.

Fig. E: Christ Episcopal Church
Nikon D7000 - Sigma 10-20mm at 10mm - ISO100 - f/16 - 13 exposures at +/- 1.0 EV ranging from 1/50 second to 60 seconds

Finally, Figure E at left is tone-blended from 13 exposures. In theory, 4 exposures at +/- 4.0 EV could have produced the same results, so I tried it with the exact same tonemapping settings I used with the 13 exposures. From a broad perspective, I did get similar results. The differences are in the details, like harsher light falloff directly at the light sources (especially around the windows). The 13-exposure version has a smoother transition to midtones in these areas, and the colors in the windows look a tad better. Also, in the 4.0 EV version the brightest highlights (like the light bulbs) aren't quite as white; they have begun to take on that gray tone we see in so many HDR images when the highlights are beaten into submission. Finally, the 13-stop version gives you a far better signal-to-noise ratio and thus produces much cleaner, less noisy images.

Nikon D800 - Nikon 14-24mm f/2.8 - 14mm - f/16 - ISO100 - 9 exposures at +/- 1.0 EV

One might think this article was written specifically for interior photography, but that's not the case. I regularly use this same technique in landscapes, particularly with waterfalls. I think the biggest challenge we have with HDR photography is movement in the scene between frames, which leads to ghosting in the composite image. You simply have to use your own discretion at the moment of capture as to how much time you can allow to elapse (which affects the number of frames you capture) depending on whatever motion there is in the scene. The waterfall shown here gives the appearance of motion in the water, which is my preference for photographing waterfalls. It was photographed on a very calm morning with no breeze, so there is no apparent motion in the leaves. This waterfall was shot with 9 exposures at +/- 1.0 EV.

Wrapping Up The Photography: The bullets below list my basic workflow for the camera work discussed above. 

  1. Set manual white balance. Remember that you're bracketing a bunch of frames, so don't rely on auto WB here. The slightest change in light quality can really shift the color palette across the bracket with auto WB.
  2. Meter highlights.
  3. Meter shadows. This might require some guesswork for exposures that exceed 30 seconds.
  4. Align lens and lock the tripod head tight.
  5. Set focus. If shooting an interior, I typically select a single AF point. If shooting landscapes I typically use the full AF array.
  6. Switch camera/lens to manual focus. This ensures the focus point does not change across your bracketed frames. Be careful not to touch the focus ring on the lens after locking focus.
  7. Capture the frames (and I strongly urge you to use a remote shutter release): My D800 can bracket up to 9 exposures, so I typically only have to advance the bracket once when I need more than 9 exposures. My D7000 can only bracket 3 exposures, so it clearly requires much more camera work to capture all the frames that I want. If your camera can only bracket 3 exposures (see the sketch after this list):
    • Set the camera to capture the exposure bracket progressively.
    • Begin your first bracket with the shutter speed metered for the highlights (the fastest shutter speed), and capture the first three frames.
    • Manually advance the shutter speed to one stop slower than the last frame captured in the previous bracket.
    • Repeat process until you have all the exposures metered between the highlights and shadows.
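Here is a minimal sketch of that 3-frame workaround, with an assumed starting speed and frame count: each burst is three frames one stop apart, and the base speed then advances one stop past the last frame captured.

```python
# Shutter times (seconds) for a full one-stop ladder shot in 3-frame bursts.
def bracket_plan(fastest_s, total_frames, burst=3):
    times, base = [], fastest_s
    while len(times) < total_frames:
        for i in range(burst):
            if len(times) == total_frames:
                break
            times.append(base * 2 ** i)    # frames one stop apart
        base = times[-1] * 2               # one stop slower than the last frame
    return times

print([f"{t:.4g}" for t in bracket_plan(1 / 320, 15)])
```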

Post Processing: Where do I begin?! Post processing of HDR images is one of the most hotly contested subjects of the last decade in photography. And there is no question that post processing your HDR images can make or break the whole deal. As you may have guessed from my opening statements, I am not a fan of the gritty, grungy, techno-crap tone-mapped images that have given HDR photography such a bad rap. I prefer to process my HDR images to be as realistic as possible - to look as much like a "perfect single exposure" as possible. Of course I have never duplicated what my eyes have seen with an HDR photograph, but I do the best I can based on my memory of the scene and the impression it left upon me. Today there are many programs available for blending all your exposures into a single file, and I won't even begin to review them all. I will simply comment on the tools I use today and what has proven to suit my style best.

Today I use 32-bit floating point TIFFs exclusively for my HDR composites. I still have Nik HDR Efex Pro2 on my PC, as well as several other capable programs, but I wholly believe that 32-bit floating point TIFF files are the holy grail of HDR post processing. My high-level post processing workflow is:

  • RAW file development in Lightroom.
  • Export developed RAW files as 16-bit uncompressed TIFFs in Lightroom.
  • Merge all the uncompressed 16-bit TIFFs into a single 32-bit floating point TIFF.
  • Import new 32-bit TIFF into Lightroom and develop the image just as if it were a single RAW file.
  • Export adjusted 32-bit TIFF into Photoshop CS6 for final rendering with luminosity masks and whatever adjustment layers I deem necessary to meet my vision.

Raw File Development: For this example let's just say I'm working on a bracket of 12 exposures. Since I have 12 one-stop exposures, that is all the luminosity data I need. I NEVER adjust exposure, contrast, highlights, shadows, whites, blacks, hue, saturation, or lightness during RAW development; I want those elements to stay as "natural" as possible in the individual exposures. This is my RAW development workflow:

  1. Choose a single file to work with - usually from the middle of the bracket.
  2. Lens corrections if needed including perspective and color.
  3. Straighten image if needed, but perform no other crop.
  4. Verify white balance. If needed, select a manual WB point from the image. Refer to Figure E above: I used the white in the flags hanging in the pulpit to validate WB in the source RAW files. 
  5. Add a touch of clarity (usually no more than +5 on the Lightroom sliders).
  6. Noise reduction.
  7. Sharpening.
  8. Copy adjustment settings and paste to the other 11 RAW files.
  9. Export all 12 files as 16-bit uncompressed TIFFs.

Merging 16-bit uncompressed TIFFs to a 32-bit floating point TIFF: For the past couple of years I have been using the free trial version of Photomatix for this function. I do not use Photomatix to make any adjustments to the image; I ONLY use Photomatix to facilitate creating the 32-bit floating point TIFF. Within Lightroom I have created an export agent that interfaces directly with Photomatix, which helps streamline the process a bit. The process works like this:

  1. Select all 12 TIFF files in Lightroom.
  2. Right click on the selected files > Export > Export to Photomatix PRO.
  3. Once the Photomatix import dialog opens I select the following options:
    • Align Images by Matching Features. 
    • Remove Ghosts with Selective Deghosting Tool. And I strongly urge you to select any ghosting areas manually. 
    • Reduce Noise on Underexposed Images Only. 
    • Remove Chromatic Aberrations (even though I've already done this in RAW development, it doesn't hurt to get a second opinion!).
    • Show Intermediary 32-bit HDR Image.
  4. Now Photomatix will import all the 16-bit uncompressed TIFF files and will run through the aforementioned script. 
  5. Once Photomatix creates the HDR image it will display the intermediary 32-bit TIFF. The image will look absolutely horrible but don't worry about this right now.
  6. Select FILE > SAVE AS > save as Floating Point TIFF.
  7. Import the new 32-bit floating point TIFF into Lightroom.

NOTE as of 4/26/15: Last week I installed Lightroom 6, which includes the new Photo Merge to HDR feature that will create the 32-bit floating point TIFF. I've used it a couple of times in testing and it looks pretty slick. But I can't comment right now on how well the deghosting feature works, so I'll come back to this at a later time ...

Adjusting the 32-bit Floating Point TIFF in Lightroom: When you first see this file in Lightroom you will probably think it looks terrible! But you now have a single 32-bit TIFF that contains a tremendous amount of luminance and color data - just think of it as a super-duper RAW file! At this point, the "tonemapping process" is to each his own. My suggestion is to click the "Auto Tone" button in Lightroom just to get a good starting point and then perform whatever adjustments you feel necessary. I try to keep things simple when I'm processing the 32-bit file in Lightroom. I'll make basic adjustments to set the white and black points, then exposure, highlights, shadows, and maybe (just maybe) clarity. I'll take a look at color hue, saturation, and lightness, but if I make any adjustments here I typically go easy (save the heavy work for Photoshop). Once I'm done with the 32-bit file in Lightroom: Right click on the file > Edit in > Edit in Adobe Photoshop > Edit a copy with Lightroom adjustments.

Adjustments in Photoshop CS6: This process is very individual based on each of our styles for post processing. I won't try to explain all that I do in CS6 because clearly that can get very laborious. I commonly use several targeted color adjustment layers for hue/saturation/lightness. And I will finish the image with whatever luminosity masks I need for fine-tuning exposure and contrast.

And in a nutshell, that's it! :) I would be most appreciative of any feedback you have related to this post.

Wishing you the best of happy photography!

Taking Control of Sharpening in Lightroom
https://www.kdcphoto.com/blog/2013/9/take-control-of-sharpening-in-lightroom

This article discusses an image sharpening strategy found in Lightroom and uses illustrations that highlight the importance of applying image sharpening selectively. This Lightroom feature is a staple in my image processing workflow!

The basis of this Lightroom sharpening strategy is to ensure that you apply sharpening only to targeted areas of an image and avoid sharpening areas that contain noise, grain, or other undesirable textures that should not be accentuated. When people view your images, their eyes will be attracted to sharp edges and areas of higher contrast. The last thing you want is to accentuate noise and grain, adding contrast to parts of the image that detract from your primary subject. Lightroom gives us an awesome feature in the Masking slider that lets us literally see the areas of an image being targeted for sharpening.

Fig. A

See Figure A: Notice the Detail panel at far right. This is where you'll find your image sharpening tools, including the Masking slider. The Detail panel slider settings you see in Figure A are the Lightroom defaults for raw file conversion. Notice the Masking slider is set to zero. This means that no areas of the image are being masked from sharpening, and subsequently the entire image will be sharpened equally. Depending on the image, this may be a very bad thing!

Fig. B

See Figure B: The magic begins when you hold the ALT key (Option key on Mac) while clicking on the Masking slider. Lightroom will reveal the "mask" being applied to your image, visually showing what areas will be sharpened and what areas will be masked. The idea of the mask is the same as with any image processing software that uses masks: "black conceals and white reveals." This means all black areas of the mask are concealed (masked off and not sharpened), while all white areas are revealed (no mask applied) and will be sharpened.

Stop. Pay attention to the noisy/grainy texture in the sky of Figure B. In this particular case the noise/grain is a result of underexposure in that region of the photo. And you will see the same sort of noise/grain from other images that have high ISO noise. In either case, you DO NOT want to sharpen or accentuate these areas of an image. Using the Masking slider will enable you to mask (conceal) these regions and constrict the sharpening mask down to edges that will attract the eye.

See Fig. C and Fig. D below: The Masking slider has been advanced to 20% and 70% respectively. Notice how dramatic an effect the mask has had on the noisy, grainy sky. At 20% the noise has been significantly reduced, and at 70% the noise has been completely masked off. Also compare how the mask constricts down to the edges of the buildings and the moon as the Masking slider advances. These edges are the areas of the image that you DO want to sharpen and accentuate.

Fig. C / Fig. D


Once you have applied an appropriate mask to your image, release the ALT key (Option key on Mac) and use the Amount, Radius, and Detail sliders to sharpen your image as needed. Although we have masked off a large area of the examples used here, you can still over-sharpen the areas of the image that were visible through the mask. My recommendation is to take it easy … a little dab’ll do ya!

TIP: Holding the ALT key (Option key on Mac) while adjusting the Amount, Radius, and Detail sliders will also reveal a mask that is helpful for evaluating the sharpening effect. 

Note: It is important to recognize that applying the sharpening mask does not remove noise/grain from your images – it only prevents the noise/grain from being accentuated when sharpening is applied. 

Below is Adobe's definition of the sliders used for image sharpening in Lightroom (a small code sketch of the masking idea follows the list):

  • Amount: Adjusts edge definition. Increase the Amount value to increase sharpening. A value of zero (0) turns off sharpening. In general, set Amount to a lower value for cleaner images. The adjustment locates pixels that differ from surrounding pixels based on the threshold you specify and increases the pixels’ contrast by the amount you specify.
  • Radius: Adjusts the size of the details that sharpening is applied to. Photos with very fine details may need a lower radius setting. Photos with larger details may be able to use a larger radius. Using too large a radius generally results in unnatural-looking results.
  • Detail: Adjusts how much high-frequency information is sharpened in the image and how much the sharpening process emphasizes edges. Lower settings primarily sharpen edges to remove blurring. Higher values are useful for making the textures in the image more pronounced.
  • Masking: Controls an edge mask. With a setting of zero (0), everything in the image receives the same amount of sharpening. With a setting of 100, sharpening is mostly restricted to those areas near the strongest edges.
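To make the masking behavior concrete, here is a conceptual Python sketch of edge-masked sharpening - an unsharp mask gated by an edge mask. This is only an illustration of the idea, not Lightroom's actual algorithm, and the parameter scaling is assumed:

```python
import numpy as np
from scipy import ndimage

def masked_sharpen(img, amount=1.0, radius=1.0, masking=0.5):
    """img: float grayscale array in [0, 1]; masking in [0, 1]."""
    blurred = ndimage.gaussian_filter(img, sigma=radius)
    detail = img - blurred                                  # high-frequency detail
    edges = ndimage.gaussian_gradient_magnitude(img, sigma=radius)
    mask = (edges / (edges.max() + 1e-8)) ** (4 * masking)  # edge mask
    return np.clip(img + amount * detail * mask, 0, 1)
```

At masking=0 the exponent is zero, so the mask is 1 everywhere and the whole image is sharpened equally; raising it constricts sharpening toward the strongest edges, mirroring the slider definition above.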
A Panoramic Photography Workflow
https://www.kdcphoto.com/blog/2013/9/a-panoramic-photography-workflow

This post shares my personal approach and workflow for creating stitched panoramic images. People who practice panoramic photography often develop a set of techniques that work well for them. These are techniques I have developed based on my own experiences and my own preferences - this is what works for me.

180-degree Panorama stitched from 12 portrait-oriented panels ~ All panels combined with 12 exposures at +/- 1.0EV ranging from 1/500 to 4 seconds

Workflow Part 1: Get That Lens Level!

There are countless tutorials on the internet that discuss the importance of lens alignment when capturing your panoramic scenes, but I don't recall any that visually illustrate it. I explain the importance of lens alignment in the context of your final composition: the bottom line is that your lens angle will have a significant impact on your final composition (or final crop). The illustration below is exaggerated to demonstrate several ways poor lens alignment can cause problems in the overall project.

Fig. A: This example assumes that 5 panels were captured for the panoramic stitch, and assumes the lens is angled upward. Problem #1 is Perspective Distortion: Consider the effect that an upward camera angle has on vertical elements - the lens will bend verticals inward as the verticals rise from bottom-to-top.  

Fig. A

The primary problem with perspective distortion in panoramic photography is what happens during the stitching process. The stitching software may attempt to correct perspective distortion by warping, bending, repositioning, and cropping each panel in order to make all of the panels align. The 5 blue panels shown in Fig. A at left are exaggerated to show gaps that may occur between the panels if the stitching software succeeds in correcting all of the vertical perspective distortion. This impacts your final composition in a big way, since you now have to crop away those gaps or clone them in from other parts of the image. Problem #2 is Lens Arch: In my opinion, the lens arcing as the camera pans is a bigger problem than perspective distortion. I choose to pan my camera from left to right. Fig. A shows the arch that will occur if your lens is angled upward. Your stitched image will show evidence of the arch, and you're faced with another big problem for your final composition. Again, you either have to crop away all of the empty space around the stitched image or clone all of the blank spaces in from other areas of the image.

Fig. A shows the two problems that arise from poor lens alignment when cropping your final composition. The yellow box represents the overall canvas size that your stitching software might create. The initial canvas size will be determined by the outer coordinates of the pixels used from each of the individual panels. Now you really only have a few options: 1) Attempt to use the entire canvas by cloning away all of the empty space (the yellow space), or by layering in bits and pieces from your original files - both can be difficult to achieve. 2) Make a HUGE crop by cropping inside of the stitched pixels. I've seen this done a lot, and it usually creates a terrible aspect ratio and sometimes negates the effort of creating a panoramic image in the first place. 3) Use a crop that is somewhere between 1 and 2.

NOTE: Depending on the scope of the project, there will be times when the camera must arch. If that is the case, you just need to plan on capturing more rows of panels to fill in the gaps above and below the arch.

Workflow Part 2: Evaluate the Size of the Scene

Fig. B: Evaluate your scene to determine how many panels you will need to capture. Most tutorials I've read about panoramic photography suggest 25% - 30% overlap across the panels. In my opinion, that isn't enough. Again, consider your final composition and final crop. One of the phenomena that occurs in the stitching process is that your panels may take on an oval shape. I don't know the exact science behind this problem, but it has something to do with the lens rotating on your tripod (sweeping a cylinder) while the stitching software tries to make a flat image from something that was captured on that cylinder. Fig. B shows two crops that are available depending on how many panels were captured. As before, the yellow box represents the overall canvas size, and as before, you either have to crop away the empty spaces or clone them in. Each scenario is fairly self-explanatory. I choose the second scenario, overlapping my panels by 50% so that I have more pixels to fill in the gaps between the panels (a quick panel-count sketch follows the note below).

NOTE: Fig. B emphasizes each panel in portrait orientation. I choose to capture my panoramas in portrait orientation so that I can gain maximum height across the image overall. This gives me a lot more freedom with composing in-camera and for final crop.
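The overlap you choose feeds directly into how many panels a scene needs. A minimal sketch of that arithmetic, with an assumed per-panel angle of view (portrait orientation):

```python
import math

# Panels needed to cover a scene, given per-panel coverage and overlap.
def panels_needed(scene_deg, panel_deg, overlap=0.5):
    step = panel_deg * (1 - overlap)           # fresh coverage per panel
    return math.ceil((scene_deg - panel_deg) / step) + 1

print(panels_needed(180, 30, overlap=0.5))     # ~11 panels for a 180-degree sweep
print(panels_needed(180, 30, overlap=0.25))    # ~8 panels at 25% overlap
```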

Part 3: Evaluate the Overall Exposure

I consider how the exposure may vary across the entire scene. Depending on your angle to the light, it is possible for one side of the scene to be much lighter than the other. It is a matter of personal preference how you expose each of the panels, but you need to be aware of how the light changes across the scene so that you can decide whether your exposures should vary as you pan the camera. I make this decision depending on the complexity of the light in each scene. Below are two examples to make the point.

Fig. C: In this blue-hour scene the sun was very low on the horizon, just out of the frame to the left. By evaluating this scene I found that the overall exposure varied by two full stops from left to right, and in this particular case I chose to vary my exposures to compensate for the changes in light. Of course it may have been more "realistic" to photograph the scene as it was, but I considered that the right side of the scene might have been very dark and might have upset the final product. Right or wrong, it is your choice how to manage this problem. Fig. D illustrates what the scene might have looked like if I had not varied my exposures while panning.

Fig. C / Fig. D
Fig. E
Figure E shows the five panels that were used for stitching

Part 3A: Evaluate the Overall Exposure, continued

To "HDR", or not to "HDR" - that is the question. When it comes to panoramic photography, I approach the in-camera work the same as I would for a single-frame image. Depending on the challenges of a given scene, I will photograph it one of two ways: 1) the stitch will use single-exposure panels, or 2) the stitch will use panels that are each created from multiple-exposure tone-blended images. I consider the techniques I use for HDR photography an arrow in my quiver; I simply use the tool I need to complete the vision I have for a given scene. Although I regularly rely on the dynamic range of my raw files for single-exposure images, I have found that I can get far superior results (in terms of image quality) by using my HDR techniques if and when the scene demands it. The cityscape image above is a good example of why I might choose the HDR route. The bottom line is that I wanted to see a lot of detail in the buildings, bridges, and reflections in the water, but the buildings were mostly backlit given the position of the sun. Although I could have stretched the tones in single-exposure raw files, taking the HDR route gave me much better control over the signal-to-noise ratio. I metered 5 stops of difference between the highlights and shadows in this scene, and that's what each panel received - 5 exposures at +/- 1.0 EV. Fig. F below illustrates the dynamic range that was captured for each panel.

Fig. F

Fig. G

Most of my panoramas are stitched from single-exposure panels. This is especially easy to achieve on overcast days. The light in Fig. G to the right also varied in overall EV from side to side, but only by approximately 2/3 of a stop. I knew I could easily adjust the highlights and shadows of each raw file to compensate for the differences in light, so in this case I locked the exposure and simply panned the camera.

Part 4: Determine the Focus Point

I have had cases where I was able to use a locked focus throughout the capture process, and cases where I chose to adjust the focus point throughout. Using a locked focus point: The cityscape above (Fig. C) is an example where I used a single AF point that was locked through the capture. A good understanding of hyperfocal distance can be very useful here. The camera was approximately 1,600 feet from the cluster of buildings in the middle of the frame, and I chose to photograph the scene at 24mm. Frankly, at that distance and at 24mm, I could have shot at f/2.8 and still had sharp results. But in this case I shot at f/11, considering the anticipated sharpness of that lens at 24mm. Either way, I knew the hyperfocal distance would carry the entire depth of the scene. So I used a single AF point, locked focus on the buildings, and switched the lens to manual focus. If you do this, just be careful not to touch the focus ring during the capture or you'll basically have to start all over. I have used a piece of tape placed across the focus ring to prevent accidentally changing the focus in cases like this. Using a variable focus point: I varied the focus throughout the capture process in Fig. G above. I chose to do that because much of the subject was physically closer to the camera than the rest. In this case I used the full AF array and refocused the lens with each panel. In a few panels I refocused the lens multiple times to balance the number of AF points that locked onto the subject. As I mentioned before, I overlap my panels by as much as 50%. Some stitching programs use only the sharpest pixels from the overlapping areas when merging the panels, as was the case in CS5 at the time this image was stitched. Assuming CS5 did its job, Fig. G is comprised of only the sharpest pixels front-to-back, side-to-side, and top-to-bottom.
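Hyperfocal distance is easy to estimate with the standard formula H = f^2 / (N x c) + f. A minimal sketch, assuming the common 0.03mm circle of confusion for full frame; focused at H, everything from roughly H/2 to infinity is acceptably sharp:

```python
# Hyperfocal distance in meters for a given focal length and f-number.
def hyperfocal_m(focal_mm, f_number, coc_mm=0.03):
    return (focal_mm ** 2 / (f_number * coc_mm) + focal_mm) / 1000

print(f"{hyperfocal_m(24, 11):.1f} m")    # ~1.8 m at 24mm and f/11
print(f"{hyperfocal_m(24, 2.8):.1f} m")   # ~6.9 m even at f/2.8
```

Either way, buildings 1,600 feet away sit far beyond the hyperfocal distance, which is why a locked focus worked here.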

Part 5: Set your White Balance to MANUAL

If you're not familiar with white balance, I suggest reading this article. I've seen a lot of panoramic images ruined by shooting on auto white balance. With auto white balance, the camera may pick up a shift in light quality or color as it pans, and suddenly you have one or more panels that don't match the white balance of the others. Using manual white balance can reduce the risk of this happening. Fig. C and Fig. G above were both captured using the "cloudy" white balance setting.

Part 6: Capture the Scene

  1. Shutter Release: Several months ago I was testing a couple of lenses inside a store. Based on the light in the store I shot at shutter speeds of 1/6 and 1/3 of a second. When I evaluated those images on my PC, I saw for the first time how violent mirror-slap can be - I could see motion blur in the images due to the vibration caused by the mirror. If you're using a mirrorless camera this isn't a problem, but if you're shooting with an SLR you don't want motion blur ruining your images - especially on big panoramic projects. I suggest using a remote shutter release and shooting in mirror-up release mode. This is particularly helpful if taking the HDR route. At very minimum, use the camera's internal timer to release the shutter.
  2. Panning: Hopefully, while you were evaluating the scene to determine the number of panels needed, you mentally marked the outer edges of your scene. If your viewfinder (or live view) has a grid overlay option, I suggest using it. The grid really helps in judging how far to pan the camera from frame to frame. I use a 3-way tripod head for my panoramas. The vertical and horizontal axes are locked down, but I leave the rotating part of the head just loose enough that I can pan the camera across the panels.

Part 7: Post Processing

Post processing is probably the area where people's processes vary the most. Since I shoot raw, I maximize the quality of the stitch by processing each raw file prior to stitching. Above I mentioned single-exposure projects versus HDR projects; my raw processing for each scenario looks something like this:

A) Single-exposure panels: Lens corrections  -  white balance  -  highlights  -  shadows  -  contrast  -  clarity  -  hue/saturation/lightness of color  - noise reduction  -  output sharpening. Lightroom makes it easy to copy and paste the adjustment settings to multiple files. So I only manually adjust one panel and copy and paste the settings to the other panels.

B) Multiple-exposure tone-blended panels: Lens corrections  -  white balance  -  clarity  -  noise reduction  -  output sharpening. Since each panel will be a blend of multiple exposures, I don't adjust highlights, shadows, contrast, or colors before stitching. All of that information comes from the multiple exposures that are about to be blended for each panel.

Stitching the Panels: There are a lot of fine stitching programs on the market. Today I use only two: 1) Photoshop's PhotoMerge feature, which includes an option to blend using the sharpest pixels from the overlapped areas of each panel. 2) Microsoft ICE, which is freeware, is a snap to use, and has some fantastic save-as options, like saving the panorama as Photoshop layers (PSD or PSB files). 
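Neither of those tools is scriptable, but if you ever want to automate a stitch, OpenCV's high-level Stitcher produces comparable results. A minimal sketch, not part of my own workflow; the file names are placeholders:

import cv2  # pip install opencv-python

# Load the processed panels in capture order; file names are placeholders
panels = [cv2.imread(p) for p in ("panel_01.tif", "panel_02.tif", "panel_03.tif")]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(panels)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.tif", pano)
else:
    print(f"Stitch failed with status {status} - check overlap and exposure")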

Final Post Processing: Once I have the final stitch, I determine what to crop away and what areas to clone in, if needed. I save the cropped image as a Photoshop file and begin making my global layer adjustments using a series of luminosity masks, HSL adjustments, etc. When I'm finished editing, I save the final image as a 16-bit uncompressed TIFF and bring it back into Lightroom so I can apply (non-destructive) output sharpening and, when appropriate, vignetting. 

Please use the comments form below for any questions related to this article.

[email protected] (Kevin Childress Photography) panorama panoramic panoramic photography photography workflow https://www.kdcphoto.com/blog/2013/9/a-panoramic-photography-workflow Fri, 20 Sep 2013 15:30:46 GMT
A Method for Black & White Conversion https://www.kdcphoto.com/blog/2013/9/blackandwhite There are many different methods one can use for black and white photo conversions, and generally speaking all conversion methods can be categorized in a good / better / best structure. This article discusses one technique that I consider to be better than most. I would like to tell you this technique is the best method but that would be a matter of opinion. Just as people have preferences for color saturation and contrast, people also have their preferences for what appeals to them in black and white images. I mainly use this method because it gives me a great deal of control of how the tones of the primary and secondary colors are blended during the conversion process.

This method can be achieved in any software that offers HSL (hue, saturation, lightness) adjustments. The basis of the method is simple: adjust the luminosity (or brightness) of each color to achieve the black and white balance and contrast that suits your preference. To say that another way, when you decrease or increase the luminosity of a specific color within a black and white image, the corresponding tones become darker or lighter.
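To make the mechanics concrete, here is a minimal Python sketch of the idea (not Lightroom's actual algorithm): start from a plain desaturation, then darken or lighten each pixel according to how close its hue sits to each color's slider. The hue centers and the triangular falloff are assumptions for illustration:

import numpy as np
import matplotlib.colors as mcolors

# Approximate hue centers (degrees) for the eight sliders - rough
# approximations, not Adobe's actual definitions
HUE_CENTERS = {"red": 0, "orange": 30, "yellow": 60, "green": 120,
               "aqua": 180, "blue": 240, "purple": 280, "magenta": 320}

def bw_mix(rgb, sliders):
    """Convert an RGB float array (values 0..1) to black and white.

    sliders maps color names to -100..+100, like the HSL panel.
    """
    hsv = mcolors.rgb_to_hsv(rgb)
    hue = hsv[..., 0] * 360.0
    sat = hsv[..., 1]
    base = rgb @ np.array([0.299, 0.587, 0.114])    # plain desaturation
    gain = np.ones_like(base)
    for name, amount in sliders.items():
        d = np.abs(hue - HUE_CENTERS[name])
        d = np.minimum(d, 360.0 - d)                # the hue wheel wraps
        weight = np.clip(1.0 - d / 60.0, 0.0, 1.0)  # triangular falloff
        # Saturated pixels respond fully; near-gray pixels barely move
        gain += weight * sat * (amount / 100.0)
    return np.clip(base * gain, 0.0, 1.0)

# An empty sliders dict reproduces a plain desaturation; pulling the
# blue slider down darkens a blue sky, e.g. bw_mix(img, {"blue": -80})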


Today I use Lightroom for approximately 75% of my black and white conversion work. To the left is a screenshot of Lightroom's adjustment panel for B&W images. The appearance of the HSL panel may vary between image editing programs but typically they all do the same job. The adjustment sliders you see to the left represent the luminosity of the primary and secondary colors that may be present in a photograph. 

Look at the left and right ends of each slider and you will notice they are darker on the left and brighter on the right. One simply moves the slider to the left or right to adjust the luminosity of that color.


The sunflower photographs below illustrate how we might adjust the luminosity of a color to achieve different effects. Image A is the original color photo. Image B shows the photo converted to black and white by desaturation only (the color was simply removed). Note the HSL adjustments for image B: they are still sitting in the middle, which represents the "natural tone" of each color when the photograph is simply desaturated. One of my personal preferences when converting to black and white is to use a dark sky. Image C shows how that is done by reducing the luminosity of the blue channel.

A) Original photo
B) All color desaturated, but with no tonal adjustments to any color
C) Tones for blues and aquas reduced to create a much darker sky
Beware of Posterization

One of the problems with many black and white conversion methods is posterization occurring during the conversion process. Look at the basic HSL adjustment panel below and notice how the colors flow within the adjustment sliders: Orange occurs between Red and Yellow, Blue occurs between Aqua and Purple, and so on. Posterization can occur if you create a wide divide in the tones that lie very close to or in between the primary and secondary colors. The posterization you see in the image to the left - the banding that occurs when tones are stretched too far - is an exaggerated example of the nasty effect. 

Finally we'll look at a landscape image that required multiple HSL tone adjustments to achieve my vision. The HSL adjustment panel below and right represents the landscape photo you see below. Let’s start with the boldest characteristic of this image by looking at the Blue slider. You’ll see the slider is set to -80. This translates to reducing the luminosity of all blues by 80%, or making the blues 80% darker than how the camera captured the color. And the effect is seen in the dark areas of the sky that I love so much! Those dark areas could be pushed all the way to black by moving the blue slider all the way to the left (to -100). The same principle applies to all primary and secondary colors available in your software’s HSL adjustment feature. Let’s also look at the Red and Yellow sliders. Notice they are pushed all the way to the right at +100 and into the light end of the slider.

The Reds: Look at the roof of the barn and the roof of the smaller building to the far left. Both buildings have rusty metal roofs which contain a high amount of red. I have pushed the reds to +100 to make the roofs’ tones flow with their surroundings to suit my preference. Those roofs could easily be pushed to black by setting the slider to -100.

The Yellows: The yellows in this image appear in four primary areas – the fence, the dirt and grass in the middleground, the wood siding of the barn, and the highlights in all the trees. For the most part I set the yellows to +100 to bring out the highlights in the trees and barn siding but again, that was only my preference. It is quite possible that you would have interpreted those tones differently to suit your own taste.

You can see how I handled any concern of posterization in this image by adjusting the Aqua and Purple sliders. I have reduced the luminosity of both colors along with the blues to help create a smooth gradient, or transition, of tones between the three colors. 
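In terms of the earlier sketch, this image's mix would look roughly like the call below. The blue, red, and yellow values come from the panel; the aqua and purple values are illustrative - what matters is that they step down toward the blue setting to keep the gradient smooth:

# Blue, red, and yellow match the panel above; aqua and purple are
# illustrative stand-ins for the "reduced" amounts
gray = bw_mix(img, {"blue": -80, "red": 100, "yellow": 100,
                    "aqua": -50, "purple": -50})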

Why didn’t I adjust the Orange and Magenta sliders? Well, I did. I make a habit of moving every slider in both directions to see how it affects the image globally. Moving each slider back and forth several times and observing the changes will help you see where each color exists throughout the image and how the shifts in tones affect adjacent tones. To my eye the orange and magenta adjustments in this image were of no consequence so I left the sliders at zero.

Why go through all the trouble? Have you ever tried to see in black and white? I know that sounds crazy, and admittedly it can be difficult to imagine having black and white vision. But the idea is to look beyond the color saturation and begin comparing tones. When you begin to think in this manner, you quickly realize how similar the tones are in things all around us. Let me provide a good example with the comparison shown here. The black and white image is a simple desaturation of the color photo - the color was simply removed and no tonal adjustments were made. In the color photo we easily recognize the difference between the flower and the brick, but when we strip away the saturation you can see just how close the tones of the flower and the brick really are. 

The conversion method I’ve discussed here is the solution to problems like this and allows you a tremendous amount of latitude for interpreting a scene many different ways. 

 

[email protected] (Kevin Childress Photography) B&W black and white conversion image editing photography https://www.kdcphoto.com/blog/2013/9/blackandwhite Wed, 18 Sep 2013 15:27:07 GMT