Nirvana
I am interested in knowing the theoretical and practical bases for planning decompression using gradient factors Low and High. That is, the reason behind choosing a GF Lo different from the GF Hi. Why ZHL-16C 40/70 and not ZHL-16C 70/70, for instance?
The reason for introducing GF Lo and GF Hi into Bühlmann's ZHL-16 algorithm, as I recall, was to try to reduce the number and volume of bubbles formed during decompression. It was, in a sense, a departure from the algorithm's dissolved-gas foundation. It came at a time when it had become clear that even DCS-free dives produced some detectable bubbles, and people started directing their efforts at controlling bubble growth during the ascent. If I am not mistaken, the adoption of gradient factors was more or less concurrent with the rise in popularity of Pyle stops and VPM.
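For concreteness, here is a minimal Python sketch of how the two factors act on Bühlmann's M-values in Erik Baker's formulation: GF Lo caps the allowed supersaturation at the first stop, GF Hi at the surface, with linear interpolation in between. The compartment constants and stop pressures below are illustrative placeholders, not a validated ZHL-16C parameter set, and this is not dive-planning code.

```python
# A minimal sketch (not dive-planning software) of how GF Lo/Hi rescale
# Buhlmann's M-values, following Erik Baker's formulation. The a/b
# constants and stop pressures are illustrative placeholders, not a
# validated ZHL-16C parameter set. Pressures are in bar.

def m_value(p_amb, a, b):
    # Raw Buhlmann: maximum tolerated compartment inert-gas pressure
    # at ambient pressure p_amb.
    return p_amb / b + a

def gf_at(p_amb, p_first_stop, p_surface, gf_lo, gf_hi):
    # GF Lo applies at the first stop, GF Hi at the surface, with
    # linear interpolation in between.
    if p_first_stop <= p_surface:
        return gf_hi
    frac = (p_amb - p_surface) / (p_first_stop - p_surface)
    frac = min(1.0, max(0.0, frac))
    return gf_hi + (gf_lo - gf_hi) * frac

def tolerated_tissue_pressure(p_amb, gf, a, b):
    # The gradient factor scales the allowed supersaturation between
    # ambient pressure and the M-value; gf = 1.0 recovers plain ZHL-16.
    return p_amb + gf * (m_value(p_amb, a, b) - p_amb)

a, b = 0.4, 0.8                      # made-up mid-speed compartment
p_surface, p_first_stop = 1.0, 3.1   # surface; hypothetical 21 m first stop
for p_amb in (3.1, 2.0, 1.0):
    gf = gf_at(p_amb, p_first_stop, p_surface, 0.40, 0.70)  # "40/70"
    print(f"p_amb={p_amb:.1f}  gf={gf:.2f}  "
          f"max tissue={tolerated_tissue_pressure(p_amb, gf, a, b):.2f}")
```

Note that setting GF Lo equal to GF Hi (e.g. 70/70) just makes the interpolation a constant, i.e. plain Bühlmann uniformly scaled by GF Hi.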
More recent research, however, has downplayed the need to control the bubbling that occurs early in the ascent. From my understanding of the discussions of the past few years, the most important metric for the probability of DCS is now thought to be the time integral of supersaturation. It appears, then, that it is more important to emphasise reducing the supersaturation of the slow and medium compartments.
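To make that metric concrete, here is a toy single-compartment sketch of the integral of supersaturation: the time integral of tissue overpressure, max(0, p_tissue - p_amb), over a dive profile. The profile, half-times, and constants are made up for illustration; it assumes air, constant-depth segments, and ignores water vapour and oxygen corrections.

```python
# A toy illustration (one compartment, made-up numbers) of the integral-
# of-supersaturation metric: the time integral of tissue overpressure,
# max(0, p_tissue - p_amb), over a dive. Assumes air, constant-depth
# segments, and ignores water vapour / oxygen window corrections.
import math

def supersat_integral(profile, half_time_min, dt_min=0.1):
    # profile: list of (duration_min, ambient_pressure_bar) segments.
    k = math.log(2) / half_time_min
    p_tissue = 0.79          # surface-saturated N2 pressure, bar
    integral = 0.0
    for duration, p_amb in profile:
        p_insp = 0.79 * p_amb            # inspired N2, breathing air
        for _ in range(int(round(duration / dt_min))):
            # exponential (Haldane) uptake/washout over one small step
            p_tissue = p_insp + (p_tissue - p_insp) * math.exp(-k * dt_min)
            integral += max(0.0, p_tissue - p_amb) * dt_min
    return integral  # bar * min

# Compare an ascent with shallow stops only against one that adds a
# deep stop at 15 m, including 60 min at the surface afterwards.
bottom = [(25.0, 4.0)]                                   # 25 min at 30 m
shallow = bottom + [(2.0, 1.6), (8.0, 1.3), (60.0, 1.0)]
deep    = bottom + [(3.0, 2.5), (2.0, 1.6), (8.0, 1.3), (60.0, 1.0)]
for ht in (5.0, 80.0):
    print(f"half-time {ht:5.1f} min: "
          f"shallow={supersat_integral(shallow, ht):.2f}  "
          f"deep={supersat_integral(deep, ht):.2f}")
```

Running it for a fast and a slow half-time shows where each profile accumulates its supersaturation, which is the kind of comparison my question below is about.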
Given all that, my question is: what is the reason for continuing to employ a pair of gradient factors, instead of scrapping GF Lo altogether (or, to put it another way, setting it equal to GF Hi)? Has any study shown that using a GF Lo produces fewer DCS cases, or at least fewer venous bubbles, than not using one? Has it been shown that using a GF Lo reduces the calculated integral of supersaturation?
edited for clarity