Info Deeply Safe Labs: A website for dive computer testing

These methods imply a longer decompression than the one you would calculate using only the inert gas load in the leading tissue of the second dive.

Hi Éric,

Either I don't understand you, or I have to disagree, at least for the MN90 tables, for which I have redone some computations to check that my understanding of how the tables were computed was correct. Obviously I have checked only a small sample of the tabular data, and I don't intend to do the same work for any other table.

The table follows a Haldanian model without introducing longer decompression, except insofar as imposed by a pre-computed tabular format, which necessarily has to group things together to be usable, just as profiles are grouped into "square profiles". Moreover, while for the profiles the grouping is conservative according to the model (that is, a computer tracking all the compartments for the precise profile will always come up with at most the same stops), for successive dives the grouping does not always seem to be conservative (that is, I'm fairly sure a computer tracking all the compartments can, in some cases, end up with more load in some compartments than the table is using; I have not checked whether the load limits used would, in such cases, actually give longer stops).

Yours,
 
Up until "pure" ZH-L16 C computers, all dive computers have been computing repetitive stops to roughly match the increase that happens with dive tables. ZH-L16 C computers manufacturers, using only the inert gas load for repetitive dives, are stepping away from these "old" procedures. One can argue about the cautiousness of these old procedures, but "pure" ZH-L16 C is the first and only algorithm that ignores the state of the art, which leads to some unreasonable results.

For example, for a 30 m (100 ft) dive of 16 minutes, the leading compartment is the 12.5-minute one. With such a calculation, after a 75-minute surface interval (six half-times), you can repeat this dive with no decompression stops again, and again, and again... indefinitely. Our tests confirm this: whatever the GF settings, the NDL (and/or decompression schedule) is identical for all dives in the "repetitive dives" protocol.
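To make the arithmetic behind this claim concrete, here is a minimal sketch of a single Haldanian compartment, assuming air (79% N2), 10 m of water per bar, instantaneous descent and ascent, and ignoring water vapour; the 12.5-minute half-time and the profile are taken from the example above, everything else is a simplification.

```python
import math

def tissue_pressure(p_start, p_inspired, half_time_min, minutes):
    """Haldane exponential on-/off-gassing of a single compartment."""
    k = math.log(2) / half_time_min
    return p_inspired + (p_start - p_inspired) * math.exp(-k * minutes)

F_N2 = 0.79                    # nitrogen fraction in air
p_surface = F_N2 * 1.0         # ~0.79 bar inspired N2 at the surface
p_bottom = F_N2 * 4.0          # ~3.16 bar inspired N2 at 30 m (4 bar ambient)

after_dive = tissue_pressure(p_surface, p_bottom, 12.5, 16)   # 16 min at 30 m
after_si = tissue_pressure(after_dive, p_surface, 12.5, 75)   # 75 min at the surface

print(f"12.5 min compartment after the dive:        {after_dive:.2f} bar")
print(f"12.5 min compartment after 75 min interval: {after_si:.2f} bar")
# Roughly 2.18 bar after the dive and 0.81 bar after the interval: 75 min is
# six half-times, so only ~1/64 of the excess load remains and a pure
# dissolved-gas calculation treats each repetition as almost a first dive.
```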
This is an interesting observation about Bühlmann theory, and I have verified it to be correct (at least for four repetitive dives with 75 min surface intervals) using Subsurface. As an aside, using a public algorithm is great because what it does can be tested and used for planning without actually doing the dive or using a pressure pot. While slower tissues do on- and off-gas under Bühlmann, for this example they aren't controlling.
By contrast, the DSAT/PADI tables allow 20 min on the first 30 m dive, dropping to pressure group C after a 75 min surface interval, then a 12 min NDL for each repetitive dive thereafter (with 75 min surface intervals in between). A 40% reduction in NDL is significant. The relative difference would be smaller when comparing longer dives, say a ~1 hr NDL at 16-18 m followed by a 1 hr surface interval.
In my opinion, the interesting question is not so much whether the algorithms are different (we know they are), but whether there should be a substantial penalty for a short (16 min) bottom time followed by a comparatively long (75 min, nearly 5x bottom time) surface interval. I don't know the answer to that. I doubt a navy is going to run a trial to test it, aiming to bend enough divers to be statistically significant (as the US Navy did to test deep stops). But maybe there are data from recompression chambers on whether decompression sickness is more frequent after single or repetitive dives, and what the ratio of surface interval to bottom time was.
 
Greetings divers,

Over the past half year or so, we have been conducting tests on dive computers using a miniature hyperbaric chamber. Some of the results we have found regarding repetitive diving could, we believe, be cause for concern. This was identified with our first test protocol: two square dives of 30 minutes at 30 meters (100 ft), separated by a 90-minute surface interval. The results show that some dive computers, specifically those that implement ZH-L16 C, compute significantly lower decompression times than other computers for the second dive, while being similar on the first dive, some leeway considered. This aroused suspicions that these implementations do not account for repetitive dives in any way other than simple off-gassing during the surface interval. We first confirmed this with theoretical calculations, then contacted the relevant manufacturers to bring it to their attention. Some of them simply confirmed the absence of additional procedures, without offering any argument for why they do not take into account aggravating factors such as the right-to-left pulmonary shunt.

Following this discovery, we have continued testing computers and have decided to publish a website showcasing extended results as well as some additional computer models. More recently, we added another test protocol, designed by Professor A. A. Bühlmann at the 1994 UHMS workshop "The Effectiveness of Dive Computers in Repetitive Diving", consisting of six dives of 16 minutes at 30 meters, separated by 75-minute surface intervals. We found these results to be even more cause for concern.

We have detailed why we find these results concerning on the website, under the sections "Forewords", "Test conditions" and "Guide to interpreting results". I will therefore not go through the reasons in this initial post, as it would be far too long; we encourage you to read those sections instead.

The website is available at the following link: Deeply Safe Labs (works best on a computer screen).

Although I am a former technical manager, specialized in computers, at a long-established diving equipment manufacturer, I am not a professor, a doctor, or any other kind of expert. I have done this work solely in collaboration with experienced divers who have a strong interest in decompression theory. We are eager to read what any of you think of our analysis, and will gladly answer any questions you may have about our work.

Best regards,
Eric,
Deeply Safe Labs.
interesting
 
Hi @JMarc ,

It's possible we don't understand each other, and I can make mistakes. Could you share one of these computations you have done?

Best regards,
Eric Frasquet,
Deeply Safe Labs.
 

I'll try to remember to check whether I've saved the spreadsheet the next time I have access to the computer I used yesterday. I worked backward from the tables:

1/ check that, from the gas load and depth, the time to add to the duration of the successive dive was indeed the time needed to load the 120' compartment at that depth to that gas load value
2/ check that going from the GPS letter to the gas load after an interval was just a pure exponential unloading of the 120' compartment; that allowed me to find which initial gas load each GPS letter corresponds to
3/ check, for a few dive entries (with and without stops), that the GPS letter from the table indeed corresponded to the gas load of the 120' compartment after the dive, using the mapping determined in step 2.

That's just a Haldanian model simplified to consider only the 120' compartment for the interval, guessing the load of all the other compartments at the start of the second dive from the 120' one and the depth of the second dive, while a dive computer will track all the compartments. It seems to me that the table method is not always more conservative than the computer one, although I haven't looked further to determine under which conditions that is the case (my guess, and it's just a guess, is a short interval, a short deep first dive and a shallow second dive; in such circumstances, I wouldn't be surprised if you could start the second dive with more load in a short-period compartment than is even reachable at the depth of the second dive). At least two factors may limit the effect: the consecutive-dive rule, which merges the two dives for very short intervals, and the fact that the load limits are higher for short-period compartments.
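For readers who want to follow the backward check described above, here is a minimal sketch of the single-compartment arithmetic, assuming air, 10 m per bar, the 120-minute reference compartment, and an entirely hypothetical first dive and interval; the printed supplement is illustrative only and is not a value read from the MN90 tables.

```python
import math

HALF_TIME = 120.0                         # MN90 reference compartment (min)
K = math.log(2) / HALF_TIME
F_N2 = 0.79
P_SURFACE = F_N2 * 1.0

def exchange(p_start, p_inspired, minutes):
    """Pure exponential (Haldanian) gas exchange of the 120' compartment."""
    return p_inspired + (p_start - p_inspired) * math.exp(-K * minutes)

def time_supplement(p_residual, depth2_m):
    """Step 1 inverted: time at the second dive's depth needed to load a
    fresh 120' compartment up to the residual value carried over the interval."""
    p_bottom2 = F_N2 * (1.0 + depth2_m / 10.0)
    frac = (p_residual - P_SURFACE) / (p_bottom2 - P_SURFACE)
    # frac can reach 1 or more for a shallow second dive after a heavy first
    # dive, which is the corner case discussed in the post above.
    return -math.log(1.0 - frac) / K

# Hypothetical example: 30 m for 30 min, then a 90 min surface interval.
p_after_dive = exchange(P_SURFACE, F_N2 * 4.0, 30)   # step 3: load after the dive
p_after_si = exchange(p_after_dive, P_SURFACE, 90)   # step 2: off-gassing during the interval
print(f"Supplement for a second dive to 20 m: {time_supplement(p_after_si, 20):.0f} min")
# ~26 min with these toy numbers; the point is the mechanism, not the value.
```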
 
I've read this whole thread and I'm still not sure what it is you're trying to tell us, @Deeply Safe Labs.

Best as I can understand it, you don't think the ZH-L16 C algorithm is sufficient. You tested a bunch of computers and confirmed they follow ZH-L16 C.

I'm unclear what it is you think needs to be considered in addition to the current algorithm.
 
They seem to think that microbubbles from previous dives interfere with gas exchange at the alveolar level enough to at least partially invalidate the dissolved gas models, but as far as I can see have referenced no actual research to support that assertion. Dr. Mitchell, OTOH, did summarize a study in response #50 which appears to indicate that this is not an issue.
 
Thank you JMarcB.

A Haldanian model is not, in itself, adequate for repetitive dives. This was pointed out by J. S. Haldane himself in his original 1908 article, "The prevention of compressed-air illness", in which he recommended: "to add together the two periods of exposure, and adopt the corresponding rate of decompression shown in the tables" (so two 20-minute exposures to the same depth would be decompressed as a single 40-minute one).

This "addition" method has been used by the US Navy for close to 50 years, even though it was deemed too conservatory. In 1956, they had computers powerful enough to compute repetitive tables. The way they designed them is described in "U.S. Navy Experimental Diving Unit, Research report 1-57, Calculation of repetitive diving decompression tables", by J. W. Dwyer. For these tables, they did track all compartments, but they added one additional step: "At start of surface interval, each tissue saturated to maximum tissue pressure allowed by surfacing ratio".
This clearly states their belief that a "pure" calculation was not satisfactory.
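As a reading aid only, the quoted step could be sketched as below; the half-times and surfacing ratios are placeholders, not the values from the 1957 report, and the point is simply that the surface interval starts from the worst case allowed on surfacing rather than from the computed load.

```python
import math

# Placeholder compartments {half-time min: surfacing ratio}; NOT the 1957 report's values.
COMPARTMENTS = {5: 3.8, 10: 3.4, 20: 2.8, 40: 2.3, 80: 2.0, 120: 1.9}
P_SURFACE_N2 = 0.79   # bar, inspired N2 at the surface

def navy_style_interval(surface_interval_min):
    """Start every compartment at the maximum tissue pressure its surfacing
    ratio allows (at 1 bar ambient), then off-gas for the surface interval."""
    loads = {}
    for half_time, ratio in COMPARTMENTS.items():
        p_start = ratio * 1.0                       # worst case permitted on surfacing
        k = math.log(2) / half_time
        loads[half_time] = P_SURFACE_N2 + (p_start - P_SURFACE_N2) * math.exp(-k * surface_interval_min)
    return loads

print(navy_style_interval(90))   # loads carried into a second dive after a 90 min interval
```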

More recently, dive tables (including the French MN90) have used a single compartment to determine a "time supplement" to be added to the second dive. A "pure" calculation computes the inert gas load at the end of the surface interval and stops there. Dive tables compute that off-gassing on a single compartment and, from it, determine a supplementary time to be added to the second dive. This is equivalent to starting the dive with all compartments pre-saturated, even where a "pure" calculation would have the faster ones fully desaturated.
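A minimal sketch of that equivalence, reusing the toy numbers from the earlier sketches (air, 10 m per bar, an illustrative subset of half-times, a hypothetical 30 m/30 min first dive, 90 min interval, second dive to 20 m with a ~26-minute supplement); none of this reproduces any published table, it only contrasts the effective starting loads of the second dive under the two approaches.

```python
import math

F_N2 = 0.79
HALF_TIMES = [5.0, 12.5, 27.0, 54.3, 120.0]   # illustrative subset of compartments

def exchange(p_start, p_inspired, half_time, minutes):
    k = math.log(2) / half_time
    return p_inspired + (p_start - p_inspired) * math.exp(-k * minutes)

def pure_start_of_second_dive(depth1_m, bottom_min, si_min):
    """'Pure' calculation: track every compartment through the first dive and
    the surface interval; this is the state the second dive starts from."""
    p_srf, p_bot = F_N2, F_N2 * (1 + depth1_m / 10)
    return [exchange(exchange(p_srf, p_bot, ht, bottom_min), p_srf, ht, si_min)
            for ht in HALF_TIMES]

def table_equivalent_start(depth2_m, supplement_min):
    """Table method: adding the supplement to the second dive's duration is the
    same as starting it with every compartment pre-loaded for that long at depth."""
    p_srf, p_bot2 = F_N2, F_N2 * (1 + depth2_m / 10)
    return [exchange(p_srf, p_bot2, ht, supplement_min) for ht in HALF_TIMES]

pure = pure_start_of_second_dive(30, 30, 90)
table = table_equivalent_start(20, 26)
for ht, a, b in zip(HALF_TIMES, pure, table):
    print(f"{ht:5.1f} min  pure: {a:.2f} bar   table-equivalent: {b:.2f} bar")
# The fast compartments are essentially clean under the pure calculation but
# effectively pre-saturated at the second dive's depth under the table method.
```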

You are mistaken in thinking that the tables are just a simplified Haldanian model. They are, by design, always more conservative for repetitive dives than a "pure" calculation.

Repetitive tables were designed following experiments and statistical analysis, and they would not be as conservative as they are if that were not necessary.

Best regards,
Eric Frasquet,
Deeply Safe Labs.
 
Still waiting for that Deeply Safe Labs computer, whenever this webpage, the paper behind it, or the fact that Shearwater has ignored Deeply Safe Labs' litany of letters gets peddled on the internet :shakehead:.

Reading between the lines: using ZH-L16 for repetitive air dives is supposedly unsafe, as it produces shorter stops at GF 90/90 compared to other, proprietary algorithms.

I still don't understand why that is a problem.

Based on the study, the UK south coast should have the highest incidence of DCI in the world. Every weekend - weather permitting :tired: - there are several charters with predominantly older, portly, out-of-shape divers who run a 30-45 meter profile with perhaps 20 minutes of (accelerated) deco, followed by a 50-70 minute surface interval and a 20 meter drift dive.

Based on purely empirical evidence, very few people get bent, despite the majority of them diving some form of Bühlmann and despite possibly having their slower tissues more saturated than on the 30 meter/30 minute air dive from the study. If people were getting bent, all the charters would be out of business.

I do understand that the authors come from a CMAS/FFESSM sport-diving perspective, which I believe favours multiple repeated shorter dives on air to depths where other divers would use either nitrox or a separate decompression gas, possibly with trimix, with longer bottom times and longer surface intervals (a bit ad hominem, I know).

What would be interesting to see is the theory translated into repeated technical dives - for example, two dives with 90-100 minute runtimes separated by a 3-4 hour surface interval. Intuitively, the difference in stop times between computers should be even larger, and Bühlmann should be even more insufficient than it is on the short air dives.
 
You are mistaken in thinking that the tables are just a simplified Haldanian model. They are, by design, always more conservative for repetitive dives than a "pure" calculation.

- I'm convinced that I understand how the MN90 tables are computed. I fear that nothing short of several entries for which my computation would be significantly different from the published tables would make me reconsider that opinion, and I don't intend to automate the process which would make looking for such entries convenient.

- I'm not convinced that the way repetitive dives are computed is always safe within the model. I'm not convinced otherwise either. It would need a more careful analysis than I'm ready to do to convince me one way or the other.

- I'm convinced that the way repetitive dives are computed with the MN90 tables is intended to stay close to the Haldanian model while still keeping the tables usable in practice (and the previous point shows how close it can be). Making a table usable in practice means bundling different dives under the same entry, and thus getting inherently safer behaviour for some of those dives if you do the bundling in a safe way. You seem to argue that there is another, additional source of conservatism. To convince me that this is the case, you would need either to convince me that I don't understand how the MN90 tables are computed and that I am missing a relevant point, or to show that there is a way to bundle dives into usable tables that introduces less conservatism and that was avoided by the MN90 designers precisely because of this lesser conservatism.

On the question of whether using a Haldanian model to handle repetitive dives is safe enough, looking at those who have already commented, I am totally unqualified to add anything.
 
