Yes, but if the expected incidence "is usually assumed to be one hit in a few thousand dives"(*) at GF100, and lowering it to GF70 results in one hit in a few thousand and one dives, then why bother? It might matter if it became one hit in a few tens of thousands of dives, depending on one's personal take on risk, but we have no data to show it's one or the other, or anything in between.
*) The Theoretical Diver – Theorizing about scuba diving
Actually, you are right. I was trying to get your result of 27 min for the 20/85 using Subsurface and fiddled the numbers till it matched (which it didn't initially), then used those numbers with 50/75. I get 32 minutes of total deco for 50/75.
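For anyone following along, the 20/85 vs 50/75 numbers work roughly like this: the planner linearly interpolates the allowed fraction of the Bühlmann M-value from GF low at the first stop up to GF high at the surface. A minimal sketch of that interpolation (the function name and the example depths are mine for illustration, not Subsurface's actual code):

```python
def allowed_gradient(depth_m, first_stop_m, gf_low=0.20, gf_high=0.85):
    """Allowed fraction of the Buhlmann M-value at a given depth,
    interpolated linearly from gf_low (first stop) to gf_high (surface)."""
    if first_stop_m <= 0:
        return gf_high  # no stop required: use the surfacing gradient
    depth_m = max(0.0, min(depth_m, first_stop_m))
    return gf_high + (gf_low - gf_high) * depth_m / first_stop_m

# With GF 20/85 and a hypothetical first stop at 21 m, the allowed
# fraction grows from 0.20 at 21 m to 0.85 at the surface.
```

This is why raising GF low (20 → 50) shortens the deep stops while lowering GF high (85 → 75) pads the shallow ones.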
Is the difference between a GF high of 85 and 75 significant? What is the probability of being bent at 85 vs 75?
Let me see if I understand your logical process:
- If we don't know the value of X
- Therefore it is illogical to behave as if X is a value consistent with what we do know, rather than my made-up number
I don't think that data exists, which is why I'm not saying it is "significant." To echo your question: Is the difference between a GF low of 20 and a GF low of 50 significant? What is the probability of being bent at 50 vs 20?
The answer to both questions, according to the most recent significant studies by NEDU, is that lowering the GF high reduces DCS risk more than raising the GF low does (at least until GF low is above roughly 50). But the exact details require more research.
... You can use GF to modify Bühlmann to make it close to what the study did,

A Bühlmann schedule is never going to match the schedule used in the study. Unlike Bühlmann models, the schedules in the study did not take into account the extra on-gassing that occurs during the deeper stops, because the study was looking at the "efficiency" of the decompression algorithms.
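To spell out what "use GF to modify Bühlmann" means mechanically: in the usual Erik Baker formulation, the gradient factor scales each compartment's M-value line, which yields a tolerated ambient pressure per tissue compartment. A rough sketch; the a/b coefficients in the example are illustrative only, not a real ZH-L16 pair:

```python
def gf_ceiling_bar(p_tissue, a, b, gf):
    """Tolerated ambient pressure (bar) for one tissue compartment under a
    gradient-factor-scaled Buhlmann M-value line; gf = 1.0 is plain Buhlmann."""
    return (p_tissue - a * gf) / (gf / b + 1.0 - gf)

# At gf = 1.0 this reduces to the familiar (p_tissue - a) * b; any gf < 1.0
# raises the tolerated ambient pressure, i.e. forces a deeper ceiling,
# which is exactly the conservatism GF layers on top of Buhlmann.
```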
Close but no cigar. Better luck next time.

I knew I was close! (And who needs cigars? They're bad for your RMV.)
We don't know the value of X at Y=100, nor how X scales as we change Y. Therefore we assume reducing Y by N points results in a meaningful reduction of X.
That isn't what the NEDU study said, though. It used VVal-18 and BVM(3) to build the schedule, and "gradient factors" were not a part of the study ...

I liked most of your post, but this line of thought is bothersome. Just because a study does not test a particular algorithm does not mean that it does not provide meaningful data for evaluating it. It is not in any way conclusive evidence, but it does give rational people enough information to draw broad conclusions. It provides evidence validating certain theories and invalidating others. It suggests which parameters have greater impact and which have less. Etc.
We have a rough approximation of X at Y = 100, and we know that X = 0 at Y = 0. We have lots of evidence and theory that X increases monotonically as Y increases.
Are you really that ignorant? Or just pretending?

Everything else in this theory operates on log curves, so it's just as likely that this one's a log curve too. That would mean the first halving of the M-value results in, what, a 4% reduction of risk? I'm sure it's worth it, better safer than safe, right?