I think there are a few points that some of you may not be considering.
- Yes, we can do white balancing in PS/LR. But that balances everything in the image against a single baseline. Part of what she is saying is that an object farther from the camera has lost more of its color than an object close to the camera, so the correct white balance is different depending on the distance from the camera. That is not something we can (easily) do in PS/LR. I think this is why the video shows her placing her color card, shooting it from a distance, and then swimming closer and shooting it again at several distances. That lets her algorithm calibrate itself for how much of each color is lost at each distance. (There's a rough sketch of the distance-dependent idea just after this list.)
- Strobes will give you the color for things that are close, but not for things that are far away. Her approach would let you shoot ambient light and get the natural color for everything (more or less - I think).
- You might think that shooting ambient light all the time and relying on this algorithm for color would be infeasible a lot of the time because of the slow shutter speeds that would be required. I'm not so sure about that. At least, not for the higher-end cameras. Google "ISO invariance" if you aren't familiar with it. Bottom line: if there is even halfway decent ambient light, you shoot with no strobes at the same shutter speed you would have used with them, "fix" the (under)exposure in PS/LR, and use this algorithm to fix the colors. (There's a toy noise calculation illustrating the ISO-invariance point a bit further down.)
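To make the distance-dependent idea concrete, here is a deliberately crude sketch. It is NOT the actual Sea-thru algorithm: it assumes a simple Beer-Lambert exponential falloff, made-up per-channel attenuation coefficients, and a per-pixel camera-to-subject range map handed to us for free, and it ignores backscatter entirely. It's only meant to show why a single global WB slider can't do this job.

```python
import numpy as np

# Toy distance-dependent color recovery (NOT the real Sea-thru model).
# Assumes simple Beer-Lambert attenuation: observed = true * exp(-beta * z),
# where z is the camera-to-subject range per pixel and beta is a
# per-channel attenuation coefficient (red falls off fastest in seawater).

# Hypothetical coefficients (1/m) for blue-ish water; real values vary by
# water type, which is what her calibration swims would pin down.
BETA = np.array([0.40, 0.10, 0.05])  # R, G, B

def recover_color(image, range_map):
    """image: HxWx3 linear RGB floats in [0, 1].
    range_map: HxW camera-to-subject distance in meters (assumed known)."""
    z = range_map[..., np.newaxis]        # HxWx1, broadcasts over channels
    restored = image * np.exp(BETA * z)   # undo exp(-beta * z), per pixel
    return np.clip(restored, 0.0, 1.0)

# The same observed pixel value gets a very different correction at 1 m
# than at 8 m, which is exactly what one global white-balance setting
# cannot do.
img = np.full((2, 2, 3), 0.2, dtype=np.float32)
ranges = np.array([[1.0, 1.0], [8.0, 8.0]], dtype=np.float32)
print(recover_color(img, ranges))
```

The real method also has to estimate and subtract backscatter (the haze the water column itself adds), and it estimates its coefficients from the calibration photos rather than taking them as given.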
Don't get me wrong on the ISO-invariance point. Even with ISO invariance, the right amount of light is better than too little light. Shooting in ambient light and raising the exposure in post is still going to end up with more image noise than having enough light in the first place. I'm just saying that there are a fair number of my own pictures where the ambient light was adequate for a good exposure, but I still used strobes anyway to get the best color. Color I would not have gotten by simply adjusting WB in post. Or maybe I've been doing something wrong in post... But I think this algorithm could potentially let a pretty decent chunk of the photos that a lot of people shoot with strobes be shot without strobes, applying this algorithm instead.
Bottom line: I think this algorithm is only intended for depths where there simply IS enough ambient light. I'm just suggesting that that depth might be a little deeper than you'd think at first.
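For anyone who doesn't want to Google it, here's a toy version of the ISO-invariance argument. The noise model and the numbers are illustrative, not measurements of any real camera: read noise is split into a part upstream of the ISO gain stage and a part downstream of it, and a sensor is "ISO-invariant" when the downstream part is negligible.

```python
import numpy as np

# Toy model of the ISO-invariance argument (illustrative numbers only).
# Read noise has two parts: one upstream of the ISO gain stage and one
# downstream. A sensor is "ISO-invariant" when the downstream part is
# tiny, so raising ISO in camera and pushing the raw file in post give
# nearly identical results.

rng = np.random.default_rng(0)

photons    = 50.0   # mean photons/pixel in a dim ambient-light exposure
upstream   = 1.5    # read noise before the gain stage (e-)
downstream = 0.3    # read noise after the gain stage (ADU); small here
gain       = 8.0    # 3 stops of gain (ISO 100 -> ISO 800, say)
n          = 200_000

shot = rng.poisson(photons, n).astype(float)  # photon shot noise

# In camera: gain is applied before the downstream read noise is added.
in_camera = ((shot + rng.normal(0, upstream, n)) * gain
             + rng.normal(0, downstream, n))

# In post: record the whole chain at base ISO, multiply afterwards,
# which amplifies the downstream noise too.
in_post = ((shot + rng.normal(0, upstream, n))
           + rng.normal(0, downstream, n)) * gain

print("raise ISO in camera: std =", round(in_camera.std(), 2))
print("push in post:        std =", round(in_post.std(), 2))
```

With the downstream noise that small, the two standard deviations come out nearly identical, which is why underexposing at base ISO and pushing in LR costs you almost nothing versus raising ISO in camera. What it doesn't fix is the shot noise from having fewer photons, hence the caveat above.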
- All of that said, it sounds like her algorithm depends on having a bunch of photos of the dive site in order to calibrate itself. That's probably not going to be feasible in a LOT of situations. It is also not clear to me whether the algorithm would be useful for, say, shooting up from the bottom to take a picture of a big fish in blue water. I say that partly because it's unclear whether the algorithm builds a 3D model of the dive site and uses that to calculate the distance to everything in each photo. If it does, then shooting up into open blue water would leave nothing in the frame to calculate distances from, so the algorithm couldn't be applied. I'm basing that on what I know about her algorithm and what I know about 3D photogrammetry. Which, admittedly, is not really a lot about either.
- I suspect that in the process of commercialization, a handful of "standard Sea-Thru profiles" will be developed for when you aren't able to swim around and take a bunch of photos of the site yourself. Something like a green-water profile and a blue-water profile, maybe a few variations of each. The photographer could pick which one to use (and/or change it in post, if shooting RAW and using a post-processing application that knows what to do with these different Sea-Thru profiles). There's a purely hypothetical sketch of what such a profile might contain at the end of this post.
- For my own photography, I think there would still be times where I would want to use strobes. They let me put the "focus" (ha ha!) of the photo where I want it, with everything else de-emphasized by being blue. Similar to using shallow depth of field in land-based portraiture to make the subject sharp and really stand out by having everything else in the frame appear fuzzy/blurry. But I can also envision times where having everything in the photo look like it was shot in air would give a more pleasing result (like you'd normally see in a land-based landscape photo). In other words, if this algorithm really "makes it" into public use, it would just be another tool in the toolbox.
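On the profile idea above: this is pure speculation, but a "standard Sea-Thru profile" might amount to little more than a named bundle of water-optics coefficients that the correction is parameterized by. Every name, field, and number below is made up for illustration.

```python
from dataclasses import dataclass

import numpy as np

# Hypothetical sketch of a "standard Sea-Thru profile": just a named
# bundle of water-optics parameters. All names and numbers are invented.

@dataclass(frozen=True)
class WaterProfile:
    name: str
    beta_rgb: tuple  # per-channel attenuation coefficients (1/m), R/G/B

PROFILES = {
    "blue_water":  WaterProfile("blue_water",  (0.40, 0.10, 0.05)),
    "green_water": WaterProfile("green_water", (0.45, 0.20, 0.30)),
}

def apply_profile(image, range_map, profile):
    """Same toy exponential correction as the earlier sketch, but with
    coefficients supplied by a selectable profile instead of hard-coded."""
    beta = np.asarray(profile.beta_rgb)
    z = range_map[..., np.newaxis]
    return np.clip(image * np.exp(beta * z), 0.0, 1.0)
```

A RAW converter could then record which profile was applied and let you swap it later in post, much the way white-balance presets work today.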