How useful is RAW to the lazy/unskilled?

I think a strict WB correction does do basically that, "stretching." Often there is an assumption that there is a point in the photo that "should be" black and another that "should be" white.
@vondo: I agree in principle. AFAIK, this is how Aperture handles post-processing WB correction. The key word is "strict," though. The "WB" techniques I'm talking about don't accomplish the "WB" effect in this way.
I don't doubt that you can do somewhat better than just stretching whatever info is there, but you can't reconstruct the red out of nothing. There has to be something there to work with in the first place.
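For anyone wondering what that "stretching" actually does, here's a rough sketch in Python/numpy (my own illustration, not how Aperture or any other tool literally implements it): pick a value per channel that "should be" black and one that "should be" white, then linearly rescale everything in between. The percentile cutoffs and the function name are just assumptions for the example.

```
# Rough sketch of the "stretching" idea, assuming a float RGB image in [0, 1]:
# choose a black point and a white point per channel and rescale linearly.
# Percentile cutoffs are illustrative, not what any particular tool uses.
import numpy as np

def stretch_channels(img, low_pct=1.0, high_pct=99.0):
    out = np.empty_like(img)
    for c in range(3):
        lo = np.percentile(img[..., c], low_pct)   # value assumed to "be" black
        hi = np.percentile(img[..., c], high_pct)  # value assumed to "be" white
        out[..., c] = np.clip((img[..., c] - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return out
```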
Making something out of nothing. I know it sounds crazy, but I'm not exactly pulling red color out of thin air. The "WB" techniques I described in my post essentially create a B/W image from the original and then use it as a replacement for the red tone/channel. With additional tweaking of hue/saturation, one can get surprising results this way. Does that make sense?
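To make the layer trick above a bit more concrete, here's a minimal sketch of the same idea in numpy rather than Photoshop layers: desaturate a copy of the image and blend it into the weak red channel. The blend factor is a made-up starting point you'd tweak by eye, not anything prescribed.

```
# Sketch of the "B/W copy as a red-channel replacement" idea, assuming a float
# RGB image in [0, 1]. The 0.6 blend factor is a made-up starting point.
import numpy as np

def fake_red_from_gray(img, blend=0.6):
    gray = img.mean(axis=-1)                                  # crude desaturated copy
    out = img.copy()
    out[..., 0] = (1 - blend) * img[..., 0] + blend * gray    # mix it into the red channel
    return np.clip(out, 0.0, 1.0)
```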
I realize that this seems like a lot of trouble just to achieve the "WB" effect, but "lazy" people should know that any series of image manipulations within Photoshop can be made into an Action. The entire set of manipulations is run by simply clicking the "play Action" button. This can be a huge timesaver for repetitive Photoshop work.

On a related note, do you think it would be helpful to start a new thread on how some of these "WB" techniques work? I think there are a lot of divers out there who want to know how to improve the blue-green color cast of their UW pics but have minimal experience with Photoshop: working with layers, merging/blending layers, color channel separation, etc. I am by no means an expert on this type of Photoshop work, but perhaps some graphics pros might be willing to share a few techniques.
There is also no question that there is more red-channel information in a RAW photo from a decent camera than in a JPG. And considering how much red is lost underwater (90% or more), it's an easy call for me: shoot RAW if you can.
A great deal of my UW photography in local San Diego water is macro stuff using the flash. Under these conditions, the higher quality of RAW over JPG is barely perceptible to the amateur photographer. For wide-angle shots (with vis typically in the 10-20 ft. range), I will generally white balance before snapping the pic. For these pics, RAW is preferred, but JPG can be made to work with a little bit of elbow grease. For wide-angle stuff in the Caribbean, I would prefer to shoot RAW for the many reasons discussed in this thread.
 
I'd like to correct a couple of misconceptions about RAW and JPG capabilities. I'm an electrical engineer who designs digital cameras, and I've also written JPG compression algorithms. The following information applies to standard JPG (and not JPEG 2000, which is not available in the majority of cameras anyway).

As others have correctly said, RAW keeps all of the raw pixel information, whereas JPG throws some of that information away. It's the WHAT that gets thrown away in JPG that's of great importance for underwater white balance.

First off, it's important to understand that at depth, red photons are *reduced* by the water column, but they're not removed entirely. There *are* red photons; it's just that there are a lot fewer of them at depth, which is why JPG photos appear blue. For that matter, non-white-balanced RAW images look blue too.

If you look at a RAW image taken under water, there is information (non-zero values) in the red pixels; it's just that they're a great deal fainter (dimmer, lesser intensity, whatever word you prefer) than if that same photo was taken above water.

So, with that understanding, let's now consider JPG encoding that image. You can think of JPG as being a 2-stage process. The first stage is to break the image into a luminance (brightness) field and a chrominance (colour) field. The luminance is like "black & white", whereas the chrominance is colour information. Typically the luminance field is kept as-is, whereas the chrominance field is reduced by 2, ie, every second line is thrown away. In this way we keep our full image resolution (in black & white thanks to the luminance field) and we keep a good estimation of our colour information (thanks to the now cut down chrominance field) while reducing our data size.
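Just to illustrate the split described above, here's a toy sketch of it: convert RGB to a luma/chroma representation (using BT.601-style weights as an assumption on my part) and keep the chroma at reduced resolution. This is only an illustration of the idea, not the actual JPG code path.

```
# Toy illustration: split RGB into luminance (Y) and chrominance (Cb, Cr) using
# BT.601-style weights, then do the quick & dirty "every second line" reduction
# on the chroma planes only. Luminance stays at full resolution.
import numpy as np

def rgb_to_ycbcr(img):
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 0.5
    return y, cb, cr

y, cb, cr = rgb_to_ycbcr(np.random.rand(8, 8, 3))  # any float RGB image works
cb_small = cb[::2, ::2]   # keep only every second row and column of the chroma
cr_small = cr[::2, ::2]   # y is left untouched at full resolution
```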

DAMN, what a great post. Thank you for explaining this so clearly to the rest of us. One little detail is still not quite clear to me: what exactly do you mean by saying the jpg compression process "throws away every second line in the chrominance field"? What lines?
 
@pteranodon:
I think that WetEar was referring to what's called "downsampling of the chrominance components" and the "lines" refer to the individual chrominance values for a line of pixels in the XY plane.
Recall that in the first step of JPEG compression, the photo can be transformed to a luminance-chrominance color space, such as YCbCr (Y=luminance, Cb=blue chrominance, Cr=red chrominance). As he stated previously, the human visual system is more sensitive to luminance than chrominance (color).

FYI, the rest of this explanation is taken from the FileFormat.info site on JPEG compression:
"The simplest way of exploiting the eye's lesser sensitivity to chrominance information is simply to use fewer pixels for the chrominance channels. For example, in an image 1000x1000 pixels, we might use a full 1000x1000 luminance pixels but only 500x500 pixels for each chrominance component. In this representation, each chrominance pixel covers the same area as a 2x2 block of luminance pixels. We store a total of six pixel values for each 2x2 block (four luminance values, one each for the two chrominance channels), rather than the twelve values needed if each component is represented at full resolution. Remarkably, this 50 percent reduction in data volume has almost no effect on the perceived quality of most images."

From what I've read, the downsampling does not really throw away every other line of chrominance info. It actually averages the chrominance values over a group of pixels, e.g., a 2x2 pixel block. This makes more sense to me than just "throwing out" data. Yeah, I know, this is a subtle point.
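For what it's worth, the averaging variant is easy to picture too. A minimal sketch, assuming a chroma plane with even dimensions (real encoders pad the edges): each 2x2 block is replaced by its mean.

```
# Minimal sketch of 2x2 chroma downsampling by averaging: every 2x2 block of
# chroma values is replaced by its mean. Assumes even height and width.
import numpy as np

def downsample_2x2(chroma):
    h, w = chroma.shape
    return chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```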
 
Yes, Bubbletrubble is exactly correct in what he writes. JPG reduces the resolution of the "chrominance representation of the image" while keeping the full resolution of the "luminance representation of the image". Usually anyway (every JPG implementation is slightly different, but this is pretty standard practice.)

The reason I wrote that JPG "throws away every second line in the chrominance field" is because in some quick-and-dirty JPG implementations that's exactly what you do, and also it's easy to understand. In any real quality implementation you wouldn't do that in a literal sense, however; you'd perform some type of averaging function (often a bilinear function) first, again as Bubbletrubble pointed out.

Also, as we're getting into more detail here, I should state more precisely that three-quarters of the chrominance information is thrown away, because we reduce the resolution of the chrominance image representation by half in both the X and Y directions. Using BT's example of a 1000x1000 pixel image, our chrominance resolution is 500x500; that's only a quarter the number of pixels. Right? 1000x1000 is 1,000,000, whereas 500x500 is 250,000. So, averaging or not, three-quarters of the chrominance information in the image is discarded before we even start into the second portion of the algorithm (where even more chrominance information is discarded).
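A quick back-of-the-envelope check of those numbers, and of the "50 percent reduction" figure in the FileFormat.info quote, assuming a hypothetical 1000x1000 image subsampled by 2 in both directions:

```
# Quick check of the arithmetic above for a hypothetical 1000x1000 image
# with chroma halved in both X and Y.
full_chroma = 1000 * 1000                 # chroma samples per channel, full res
sub_chroma  = 500 * 500                   # after halving in both directions
print(sub_chroma / full_chroma)           # 0.25 -> three-quarters discarded

total_full = 3 * 1000 * 1000              # Y + Cb + Cr, all at full resolution
total_sub  = 1000 * 1000 + 2 * (500 * 500)
print(total_sub / total_full)             # 0.5 -> the "50 percent reduction" quoted earlier
```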

It's tough being a JPG image, hey?

It really brings home the challenge of trying to accurately white-balance a JPG after the fact. If you didn't have a good manual white balance to begin with, there might not be a lot of red left in your JPG image to play with.
 
