Question: Why is GF high not lower than GF low?

I have, but I don't recall anything that contradicts what I said.

Which paper did you read? Try the one called 'Clearing Up The Confusion About "Deep Stops"' and look at the third paragraph from the end, which says:
The addition of deep stops in a profile will generally increase the time required at the shallow stops as well as the overall decompression time. However, if truly "sufficient decompression" is the result, then the concept of "economic decompression" is not really compromised.

And the bit in the preceding section, "Illustrating the Problem", which explains why TTS is increased.
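For anyone following along, here's a minimal Python sketch of Baker-style gradient factors, i.e. how GF low/high rescale the Bühlmann M-value line. The a/b coefficients are illustrative ZH-L16-style values and the first-stop depth is just an assumed input; this is a toy, not a dive planner:

```python
# Minimal sketch of Baker-style gradient factors (not a dive planner).
# For one Buhlmann compartment with coefficients a, b, the tolerated
# tissue tension at ambient pressure P_amb (bar) is:
#     M(P_amb) = P_amb / b + a
# A gradient factor GF scales the allowed supersaturation:
#     P_tol = P_amb + GF * (M(P_amb) - P_amb)
# and GF is interpolated linearly from GF_low at the first stop to
# GF_high at the surface.

def m_value(p_amb, a, b):
    """Buhlmann tolerated tissue tension at ambient pressure p_amb (bar)."""
    return p_amb / b + a

def gf_at_depth(depth, first_stop, gf_low, gf_high):
    """Linear interpolation: gf_low at the first stop, gf_high at 0 m."""
    if first_stop <= 0:
        return gf_high
    frac = max(0.0, min(1.0, depth / first_stop))
    return gf_high + (gf_low - gf_high) * frac

def tolerated_tension(depth, first_stop, gf_low, gf_high, a, b):
    p_amb = 1.0 + depth / 10.0  # ambient pressure in bar, seawater approximation
    gf = gf_at_depth(depth, first_stop, gf_low, gf_high)
    return p_amb + gf * (m_value(p_amb, a, b) - p_amb)

# Compare 10/100 against 100/100 for one mid-speed compartment.
# a and b below are illustrative ZH-L16-style coefficients (an assumption).
a, b = 0.6667, 0.8126
for depth in (21.0, 12.0, 6.0, 0.0):
    p10 = tolerated_tension(depth, 21.0, 0.10, 1.00, a, b)
    p100 = tolerated_tension(depth, 21.0, 1.00, 1.00, a, b)
    print(f"{depth:4.1f} m: tolerated 10/100 = {p10:.2f} bar, 100/100 = {p100:.2f} bar")
```

At 21 m the 10/100 setting tolerates almost no supersaturation, which is what forces the deep stops, while both settings converge at the surface; the slow tissues keep on-gassing during those deep stops, which is why the shallow stops and TTS grow.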
 
I show this in this document (in French, sorry).

View attachment 838531
Very interesting, thank you for doing this analysis.

The horizontal axis is depth in metres, yes?
What is the ScubaPro algorithm doing? Why does it change with depth?

It appears similar to our typical 'recreational' GF choice of 65/80 on Shearwater.

On a dive to a max depth of ~55 metres on air (21/00) with the ScubaPro G2, my deeper stops were shorter than my buddy's on a Shearwater. Perhaps he was on a default GF of ~40/85.
 
Which paper did you read? Try the one called 'Clearing Up The Confusion About "Deep Stops"'
Yep, read it. I agree shallow time goes up, as does TTS. None of that invalidates the points I previously made, in my view. So again, which of those specific things do you believe to be false?
 
Are you referring to this?
"The Thalmann model, on the other hand, treats uptake and washout kinetics as dissimilar: While gas uptake is still represented as exponential, the washout is modeled by a function with a linear component. This means that gas washout is treated as a slower process than gas uptake."

Has this been compared to any experimentally measured rates of off-gassing from an aqueous medium, or from the human body? Couldn't they put people in a chamber (or a special rebreather) that accurately measures the gases actually coming out of them? Could you use nitrogen or helium isotopes?

Regarding the linked NEDU model-fitting study: if I understand correctly, they back-fit to a pre-existing database that is relatively sparse for the extra-deep dives under consideration. Also, few actual DCS incidents occurred in any of the training data (Table 8)?

I am curious what kinds of standard GF values they would have predicted if they had just trained a model specifically on that, alongside this Thalmann model. Why train a model that is "too computationally expensive to be implemented in a diver-worn decompression computer" when you could train for GFs that already work in our computers? Or was that also done? How do you know your new model is better at prediction than other models you could also have fit?

Given the data and training constraints, it seems very uncertain what the actual DCS rate (and prediction error) would be for a cohort of new dives using the newly fitted models. When just a couple of divers out of 20+ experience some DCS, it seems very hard to be sure that the only explanation is small differences in deep stops. High uncertainty, and other factors not adequately washed out by small n.
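To put a number on that last point, here's a quick sketch of an exact (Clopper-Pearson) binomial confidence interval. The counts are hypothetical, chosen only to show how wide the interval gets at small n:

```python
# Exact (Clopper-Pearson) 95% confidence interval for a DCS rate
# estimated from a small trial. Counts below are hypothetical, chosen
# only to show how wide the interval is at small n.
from scipy.stats import beta

def clopper_pearson(hits, n, alpha=0.05):
    lo = 0.0 if hits == 0 else beta.ppf(alpha / 2, hits, n - hits + 1)
    hi = 1.0 if hits == n else beta.ppf(1 - alpha / 2, hits + 1, n - hits)
    return lo, hi

for hits, n in [(2, 25), (4, 200)]:          # hypothetical counts
    lo, hi = clopper_pearson(hits, n)
    print(f"{hits}/{n} DCS: point estimate {hits/n:.1%}, 95% CI [{lo:.1%}, {hi:.1%}]")
```

For 2 hits in 25 dives the interval runs from roughly 1% to 26%, so the point estimate by itself tells you very little.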
 
Yep, read it. I agree shallow time goes up, as does TTS. None of that invalidates the points I previously made, in my view. So again, which of those specific things do you believe to be false?

This:

If we ... compare 10/100 to 100/100, I believe the probability of DCS is higher in the 10/100 case. They both have all tissues below their respective M values. However, the slower tissues are closer to their limits -- and remain that way for longer -- in the 10/100 case.

The probability is encoded in M values. As long as the tissue is below its M-value, the probability holds. Ergo, 10/100 is no "less safe" than 100/100.
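To make that concrete, the "below its M-value" condition is a single number per tissue: the current fraction of the M-value gradient (what Shearwater displays as GF99). A minimal sketch, with made-up tensions:

```python
# Current gradient factor of one tissue (what Shearwater displays as GF99):
# 0% means tension equals ambient, 100% means it sits on the M-value line.
def current_gf(p_tissue, p_amb, m_value):
    if p_tissue <= p_amb:
        return 0.0                             # not supersaturated at all
    return 100.0 * (p_tissue - p_amb) / (m_value - p_amb)

# Both profiles in the argument keep every tissue at or below 100%:
print(current_gf(p_tissue=1.9, p_amb=1.0, m_value=2.0))   # 90.0, near the line
print(current_gf(p_tissue=1.4, p_amb=1.0, m_value=2.0))   # 40.0, well clear of it
```

Whether 90% held for a long time carries the same risk as 40% is exactly what's being disputed in the replies.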
 
"The Thalmann model, on the other hand, treats uptake and washout kinetics as dissimilar: While gas uptake is still represented as exponential, the washout is modeled by a function with a linear component. This means that gas washout is treated as a slower process than gas uptake."

Has this been compared to any experimentally measured rates of off-gassing from an aqueous medium, or from the human body? Couldn't they put people in a chamber (or a special rebreather) that accurately measures the gases actually coming out of them? Could you use nitrogen or helium isotopes?

Regarding the linked NEDU model-fitting study: if I understand correctly, they back-fit to a pre-existing database that is relatively sparse for the extra-deep dives under consideration. Also, few actual DCS incidents occurred in any of the training data (Table 8)?

I am curious what kinds of standard GF values they would have predicted if they had just trained a model specifically on that, alongside this Thalmann model. Why train a model that is "too computationally expensive to be implemented in a diver-worn decompression computer" when you could train for GFs that already work in our computers? Or was that also done? How do you know your new model is better at prediction than other models you could also have fit?

Given the data and training constraints, it seems very uncertain what the actual DCS rate (and prediction error) would be for a cohort of new dives using the newly fitted models. When just a couple of divers out of 20+ experience some DCS, it seems very hard to be sure that the only explanation is small differences in deep stops. High uncertainty, and other factors not adequately washed out by small n.
I wonder if the either/or options (deep vs. shallow) aren't sufficient to adequately explain the complexities of decompression models, and whether this is an attempt to combine both, with some sort of algorithm to weight the deeper dives, as opposed to using one model to cover all depths. It seems overly simplistic to use a single model to try and cover all ranges of depth and time in one package.
 
... It seems overly simplistic to use a single model to try and cover all ranges of depth and time in one package.

Yes, but the goal is to get the diver out of the water not bent. If an overly simplistic model achieves that just as well (all things being equal) as a clever, sophisticated one, the former wins on the KISS principle.
 
Yes, but the goal is to get the diver out of the water not bent. If an overly simplistic model achieves that just as well (all things being equal) as a clever, sophisticated one, the former wins on the KISS principle.
I think we're at a stage where computers can handle a clever, sophisticated one. If there's a better model, why wouldn't you use it? The linked article above explains the situation in its first paragraph.



Dr. David Doolette's lecture on Advances in Decompression Theory and Practice was among the most anticipated presentations of last year's Rebreather Forum 4 (RF4). Toward the end of the linked video (starting at about 47:00), Doolette spent a few minutes explaining how deco schedules generated by Bühlmann-based models produce an escalating risk of decompression sickness (DCS) for dives with greater inert gas exposure. That is, a profile for a 100 m/330 ft dive with 30 minutes of bottom time will have a greater risk of DCS than a profile for a 70 m/230 ft dive with 20 minutes of bottom time generated with the same parameter settings (e.g., gradient factors).

Or, as Kevin Gurr put it here in InDEPTH, “[…] Much of the deeper diving we do is simply an extrapolation of the early shallow water research. We now know that this does not always work.”
 
I think we're at a stage where computers can handle a clever, sophisticated one. If there's a better model, why wouldn't you use it? The linked article above explains the situation in its first paragraph.

"All things being equal" and "better model" are the issue here.

The reason dive computers use the KISS one is that we have a few decades of software engineering, UI design, reliability engineering, you name it, all showing things work better that way.
 
The probability is encoded in M values. As long as the tissue is below its M-value, the probability holds. Ergo, 10/100 is no "less safe" than 100/100.
No, a tissue right up against the M-value has a higher probability of DCS than one further away from the line. There are 15 other tissues in each case that are not at the line, but the deep stops and additional time from 10/100 push the slower tissues closer to the line, and therefore to higher risk.
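For what it's worth, probabilistic decompression models formalise exactly that: risk accumulates with both the degree and the duration of supersaturation. A toy sketch of the idea, where the hazard form and constants are my own assumptions, not any published model's parameters:

```python
import math

# Toy hazard-integral view of DCS risk:
#     P(DCS) = 1 - exp(-integral of r(t) dt),
# with r(t) proportional to normalized supersaturation. The hazard form
# and the scale constant are assumptions for illustration only, not the
# parameters of any published probabilistic model.

def p_dcs(supersat_series, dt=1.0, scale=0.001):
    hazard = sum(max(0.0, s) * scale * dt for s in supersat_series)
    return 1.0 - math.exp(-hazard)

# Tissue held at 40% of its M-value gradient for 30 min,
# versus one held at 90% for 90 min:
print(f"moderate, short:  P(DCS) ~ {p_dcs([0.40] * 30):.2%}")
print(f"near-line, long:  P(DCS) ~ {p_dcs([0.90] * 90):.2%}")
```

In this framing, two profiles can both stay below every M-value and still carry different risk, because one holds the slow tissues near the line for longer.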
 