Sean Walberg
Contributor
I don't see what all the fancy math and calculations buy us that we can't already get with just one image -- and without needing a static scene that has to be shot from various distances.
I'm kinda out of my wheelhouse on the marine side here, but I think the paper addresses this:
"Previously, it was assumed that β_D^c = β_B^c, and that these coefficients had a single value for a given scene [9], but in [1] we have shown that they are distinct, and furthermore, that they have dependencies on different factors."
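To make that concrete, here's a rough sketch of the image-formation model the paper works with (observed color = attenuated direct signal + backscatter), with the direct-attenuation coefficient β_D^c and the backscatter coefficient β_B^c kept distinct. All the numbers and names below are made up for illustration; the real method estimates these quantities from range maps rather than knowing them up front.

```python
import numpy as np

# Toy sketch of the underwater image-formation model, per channel c:
#   I_c = J_c * exp(-beta_D * z) + B_inf * (1 - exp(-beta_B * z))
# where J_c is the unattenuated scene color, z is the camera-to-object
# range, and B_inf is the backscatter at infinity. The older assumption
# was beta_D == beta_B; the paper argues they are distinct coefficients.

def observed(J, z, beta_D, beta_B, B_inf):
    """Forward model: scene color J seen through water at range z."""
    direct = J * np.exp(-beta_D * z)                  # attenuated signal
    backscatter = B_inf * (1 - np.exp(-beta_B * z))   # veiling light
    return direct + backscatter

def recover(I, z, beta_D, beta_B, B_inf):
    """Invert the model: subtract backscatter, undo attenuation."""
    backscatter = B_inf * (1 - np.exp(-beta_B * z))
    return (I - backscatter) * np.exp(beta_D * z)

# Illustrative values only (not from the paper):
J_true = 0.8                 # true scene color in one channel
z = 5.0                      # range in meters
beta_D, beta_B = 0.40, 0.25  # distinct coefficients
B_inf = 0.3                  # backscatter at infinity

I = observed(J_true, z, beta_D, beta_B, B_inf)
J_est = recover(I, z, beta_D, beta_B, B_inf)
print(round(J_est, 6))  # recovers J_true exactly when coefficients are known
```

The point of the single-value-vs-distinct distinction: if you plug the same coefficient into both terms when they actually differ, the backscatter subtraction and the attenuation correction are both wrong, and the recovered color drifts with range.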
And as DoctorMike said, it's meant for processing lots of pictures on a computer for research, so needing multiple images shouldn't be a problem.
I'm still trying to figure out whether those 1,100 images with the color chart are just for checking accuracy or whether a model is being trained on them. The Hacker News thread "Sea-Thru: A Method for Removing Water from Underwater Images" has some discussion from people more attuned to the AI/vision side of this.