
fsardone

Hello everyone,

Rather than asking a question or providing an answer to the subject topic, this post is a pause for reflection on the issue and, perhaps, a seed for some useful discussion.

We often hear "plan your dive and dive your plan", and in technical diving we plan by relying on decompression software. Another of the no-nos of (technical) diving is the "trust me" dive: a dive beyond our level of training, knowledge or confidence. Notwithstanding this, we entrust our deco profiles to something we do not fully understand, "trusting" that those who wrote the software do understand it and follow best practices.

So are we OK trusting such software? If yes, why (is there no better alternative)? If not, why do we do it anyway?

From my point of view there are 3 classes of software:

Software developed commercially by a company whose line of business is deco (dive computer manufacturers). These are closed architectures, take-it-as-it-is, and in some cases limited in terms of gases, platforms, depth ranges or available algorithms. They are backed by the company's philosophy on deco, but with limited external scrutiny. Here, rightly or not, the trust is in the company's name.

Software developed commercially by a small software house for which it is the primary product: closed architecture, with a lot of features, cross-platform, with no external scrutiny. Here, in my view, the trust is in the software house demonstrating the capability to ensure software correctness.

Open source software that, while developed by a small number of people, is available in source form for the scrutiny of a large audience of programmers and divers. Examples are Subsurface (a dive log and planning tool) and the OSTC (Open Source Tauchcomputer), a dive computer which lets you load your own firmware and whose stock firmware is available in source form.

To my knowledge, none of the above do research and experimentation in the field of decompression. That is the realm of navies (probably only one navy has the budget to do it properly) and scholars in the field. Some of it is covered by patents.

Recent evolution of mainstream thinking about decompression, and the ensuing discussions, made me wonder what would be needed to "trust" diving software (and, implicitly, the people who write and test said software). Do we trust the software implementation of the algorithm? Do we trust that the line depicting the maximum allowable supersaturation in ZHL-16C/GF is correctly interpolated from GF low to GF high? Do we trust the correct implementation of floating-point math in a microcomputer for VPM's computationally heavy calculations? Do we trust that whoever is implementing the algorithm is using recognized best practices, both in software development and in decompression research?
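To make the GF question concrete, here is a minimal sketch (my own illustration in Python, not taken from any product's actual code) of how the allowed supersaturation is usually interpolated from GF low at the first stop to GF high at the surface, and how it relaxes the plain Bühlmann tolerance. The a/b values are the commonly quoted ZHL-16C coefficients for the 5-minute nitrogen compartment; everything else is assumed for illustration only.

[CODE]
# Sketch only: GF line interpolation and GF-relaxed Buehlmann tolerance.

def gf_at_depth(depth_m, first_stop_m, gf_low, gf_high):
    """Linear interpolation of the allowed gradient factor with depth:
    gf_high at the surface, gf_low at the depth of the first stop."""
    if first_stop_m <= 0:
        return gf_high
    depth_m = max(0.0, min(depth_m, first_stop_m))
    return gf_high + (gf_low - gf_high) * (depth_m / first_stop_m)

def tolerated_ambient_pressure(p_tissue_bar, a, b, gf):
    """Minimum ambient pressure tolerated by a compartment with inert gas
    loading p_tissue_bar. With gf = 1.0 this reduces to plain Buehlmann,
    (p_tissue - a) * b; smaller gf keeps us further from the M-value line."""
    return (p_tissue_bar - a * gf) / (gf / b + 1.0 - gf)

# Example: 5-minute N2 compartment, tissue loading of 2.5 bar.
A_5MIN, B_5MIN = 1.1696, 0.5578
print(tolerated_ambient_pressure(2.5, A_5MIN, B_5MIN, gf=1.0))        # plain ZHL-16C
print(tolerated_ambient_pressure(2.5, A_5MIN, B_5MIN,
                                 gf=gf_at_depth(21, 21, 0.30, 0.80)))  # GF 30 at the first stop
[/CODE]

Even in a toy version like this there are choices to make (how the first-stop depth is determined, whether interpolation is done in depth or pressure), which is exactly where implementations can quietly diverge.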

I am not here to single out any of the available software or to recommend one over another, but what should we be asking of those manufacturers? We ask scientists to be peer reviewed, and the scientific method requires a thesis, supporting arguments and validating experiments. We need to be able to predict reality from our thesis. The scientific method can only disprove a thesis; it can never prove it 100%. Theories that were thought correct for decades, if not centuries, have been proven wrong by improvements in our capability to assess reality. I could discuss Newtonian mechanics versus special relativity versus general relativity, but I would end up off topic, and people have written books about it.

So how would we want a software manufacturer to reassure us that they are using the best available methodology to calculate a safe ascent? How would we want changes in knowledge to propagate into best practices and into products? Is the software we are using inherently safe?

The floor is yours
 
The issue is trust.

In order to trust my life to a model, I have to have

(a) faith in the model
(b) faith in the implementation

About models

If we talk about models, my thinking has changed over the years as the results of research have come in. In the late '90s and early 2000s a lot of divers were on the deep-stop bandwagon. RGBM became popular, and several ascent strategies based on slower, deeper ascents sounded appealing. The logic of controlling bubbles seemed sound, and the promise of RGBM to redistribute time from shallow to deep, thereby getting the diver out of the water sooner, was attractive.

At the same time, the WKPP were bending divers on a regular basis using these kinds of ascent strategies (in their case with ratio deco), and I started taking a more conservative approach. I started putting in the deep stops suggested by VPM (I was using VPM, not RGBM, but its internal workings are based on similar principles) as well as the shallow stops suggested by Buhlmann, which is the model my computer was using. It seemed like a reasonable thing to do at the time... a sort of fence-sitting.

Then Mark Ellyatt just about killed himself using RGBM in 2004 (I think) and my thinking changed again. I spoke to Mark at length about that dive and his thoughts about the current deco models and came away from that convinced that RGBM could only be used on fairly benign technical dives. Mark also told me that at the time ALL of the deep divers had gone back to using Buhlmann. He also said to me that even Buhlmann needed significant "padding" for really deep dives and that more research was needed to calibrate it for use at depths deeper than 100m.

My conclusion was that there are no "perfect" models, but of the imperfect ones Buhlmann seemed like the wiser choice. So after 2004 I went back to using Buhlmann but slowed my ascent from 21m to my first required stop to 3m/min. That seemed like a reasonable thing to do, and I've been doing it ever since, so far (knock on wood) with good results. At the same time, I NOW think this procedure needs reassessment, but I haven't yet taken the time to talk to my diving partners at length about it.

Then in ... 2007? NEDU started doing very interesting research, and by 2011 it was clear that the bubble models were essentially "broken". I felt vindicated in my faith in Buhlmann, as some of my friends were still using RGBM or gradient factors that made Buhlmann behave like RGBM. Since 2011 I have become convinced that the bubble models are unsuitable for technical diving, and my faith is now squarely in Buhlmann again. Back to square one. At this point I was still doing my last stop at 6m, with (knock on wood) good results. Given the most recent discussions and the advent of heat maps for visualizing dives, I now believe that this procedure needs reassessment too.

So basically, when it comes to trust, I "trust" Buhlmann more than any other model, even though I know it isn't perfect, and I try to base my procedures on not pushing the boundaries of that model.

About Implementation

If we talk about implementation there are two main factors for me.

1) can I trust that the model was programmed professionally and correctly?
2) can I trust that the company takes innovation seriously?

On #1, I tend to want to know that the programmers working on the project are good. In that sense, "open source" implementations appeal to me because there are no secrets and there are some very good programmers working on these projects. I know this isn't always feasible, so a company with a strong history of problem-free implementations is a good second bet. Everyone can name the main players in the market right now, so I don't need to do that here, but what I'm saying is that I would choose one of those before I chose a computer made by a Chinese toy store and coded in a "code factory" in India.

On #2 what I find important is to know that the programmers and/or companies are up to speed on the most recent research and have the best interests of divers in mind as their programs evolve. In a recent discussion, I said that I am no longer using Vplanner or Multideco and it is for this reason. I'm reasonably certain that the programs are correctly implemented but I am also sure that the people involved have no interest in further development of those products beyond the paradigm that was popular in 2000. In fact, in recent discussions it has become clear that they not only fail to innovate but zealously resist innovation. That's a red-flag to me.

I would therefore choose an implementation that evolves as our understanding of deco theory evolves. For example, Suunto, despite the horrible decision they made to embrace RGBM, is a company that takes innovation seriously and stays abreast of current deco research. I'm sure that most of the big players among computer manufacturers do the same. Seeing that a company does this is good for my confidence. Good for trust.

R..
 
This is indeed a complicated subject, and what makes it much more difficult is that the outcome of decompression depends on so many factors (including individual physiology and circumstances) that it is effectively probabilistic: you will never be able to say "this profile is safe" and "that one will get you bent". The best possible knowledge would be of the type "this profile will get you bent on fewer than 1 in 10,000 of the dives on which you follow it". And that makes empirical evaluation much, much harder, which is one of the reasons there is so little meaningful research (in controlled environments).
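As a rough, back-of-envelope illustration of why that probabilistic framing makes trials so expensive (my own arithmetic, not from any study), here is an estimate of how many dives per profile you would need just to distinguish a 1-in-1,000 DCS rate from a 3-in-1,000 rate:

[CODE]
# Sketch only: approximate sample size for a two-proportion z-test,
# two-sided alpha = 0.05 and power = 0.80 (normal approximation).
from math import sqrt

def dives_per_profile(p1, p2, z_alpha=1.96, z_beta=0.8416):
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return numerator / (p1 - p2) ** 2

# Telling a 1-in-1000 profile from a 3-in-1000 profile:
print(round(dives_per_profile(0.001, 0.003)))  # on the order of 8000 dives per profile
[/CODE]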

The good news, on the other hand, is that there are not so many different dives after all (the two main parameters being depth and bottom time), and people seem to have some sort of consensus on how to do those without terrible results. So pretty much anything (reasonable and on the market, that is) is probably fine.

I would not overestimate the value of open source here (and I say this as an author of open source decompression software): It's a nice idea in theory but honestly, guess how many people have actually read and understood the relevant parts of the source code in say Subsurface or OSTC? For Subsurface my guess would be about a handful. I would hope that you would at least get that level of scrutiny for the closed implementations in commercial dive computers. But hey, why don't you, the reader, look into this and start contributing? For my part, I am not a professional software developer; I only do this in my spare time. I try to follow some good patterns, but I am sure that in the professional world this could be done with much stricter methodology.

But on the plus side (following the "there are not too many different dives" idea), there is another thing one can do which reduces the influence of model and implementation: you can compare the deco schedules produced by different programs. Here it does not matter if some stops differ by a few minutes (that is noise that gets lost in the inherent fuzziness of the subject), but you want results in the same ballpark. Then you can be sufficiently sure that others have dived similar plans before and that you are not doing something horribly wrong (and you can always follow the more conservative plan in the spectrum). For Subsurface, that is what we did: we compared plans for a number of dives we considered typical against a number of different programs, and all seem to at least roughly agree (at least when they claim to implement the same or similar models). So, once more, you are probably fine for dives that are not much deeper than 100m, with total runtimes not exceeding a few hours, breathing reasonable gases.
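As a sketch of that cross-check idea (the names, numbers and tolerances here are my own illustration, not Subsurface's actual procedure), you can reduce each plan to its stop schedule and flag anything that differs by more than the expected noise:

[CODE]
# Sketch only: compare stop schedules from two planners and flag large deviations.

def compare_schedules(plan_a, plan_b, per_stop_tolerance_min=3, total_tolerance_min=10):
    """Each plan is a dict mapping stop depth (m) to stop time (min)."""
    depths = sorted(set(plan_a) | set(plan_b), reverse=True)
    discrepancies = []
    for d in depths:
        diff = abs(plan_a.get(d, 0) - plan_b.get(d, 0))
        if diff > per_stop_tolerance_min:
            discrepancies.append(f"{d} m: stops differ by {diff} min")
    total_diff = abs(sum(plan_a.values()) - sum(plan_b.values()))
    if total_diff > total_tolerance_min:
        discrepancies.append(f"total deco differs by {total_diff} min")
    return discrepancies or ["schedules are in the same ballpark"]

# Hypothetical schedules for the same dive from two different programs:
program_1 = {21: 2, 18: 2, 15: 3, 12: 4, 9: 6, 6: 10, 3: 16}
program_2 = {18: 3, 15: 3, 12: 5, 9: 7, 6: 11, 3: 18}
print("\n".join(compare_schedules(program_1, program_2)))
[/CODE]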

Note well that the Mark Ellyatt dive does not fall into that mainstream category. Even if one model produced that schedule, we are so far outside the usual there that other models (in particular Bühlmann, possibly with gradient factors) consider it far too aggressive. There you are outside the consensus regime, and you had better know what you are doing.

Yes, it would be great to have more empirical research. But not only is it hard to do in a way that gets reasonable numbers out, that is still very, very far from knowing how to modify your models. See the years of discussion after the NEDU "deep stops might not be as good as you thought" study. It was very carefully designed and the outcome is statistically well established; still, it is unclear what the precise message is beyond "you probably want to de-emphasise the role of the deeper stops".

The good thing, on the other hand, is that this whole business is effectively an inequality: you can almost always deco longer and be safer; it only gets harder if you try to be out of the water as fast as possible (and note that this was one of the assumptions in the aforementioned NEDU study: for good reasons, they wanted to compare dives with the same total runtime).

So, yes, the empirical basis and the model/implementation verification are far from those of an ideal world. But my guess is that it is still good enough, at least as long as you stay somewhere in the mainstream of (technical) diving and don't go into record-breaking territory.
 
I think it is best to be rather sceptical with regard to the output of planning software and the behaviour of dive computers.

For a medical device there are laws which mean developers end up with a proper design and review process, testing and so forth. I do not get the sense that similar rigour is applied here.

On the other hand, the output is very vague; it is not like X-ray dosing or cutting metal panels. Plus or minus a bit is mostly OK.

The issues I see are not about the deco calculations so much as ancillary stuff. The two most obvious I have noticed are carrying plans over between metric and imperial settings while leaving the numbers as they were (30 minutes at 40m: how can that be a no-stop dive? Oops, it has reverted to imperial! Duh!), and a planner which misleads you as to which SAC rate will be used for bailout calculations.
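A hypothetical illustration of that metric/imperial trap (not taken from any real planner): the same raw number describes very different dives depending on the unit setting, so depths should carry an explicit unit rather than being silently reinterpreted.

[CODE]
# Sketch only: a depth entry should refuse to guess its unit.
FT_PER_M = 3.28084

def depth_to_metres(value, unit):
    """Convert a depth entry to metres, failing loudly on an unknown unit."""
    if unit == "m":
        return value
    if unit == "ft":
        return value / FT_PER_M
    raise ValueError(f"unknown depth unit: {unit!r}")

# 40 m is a serious dive; 40 ft (about 12 m) is a long no-stop dive.
print(depth_to_metres(40, "m"))   # 40.0
print(depth_to_metres(40, "ft"))  # ~12.2 -> the "oops, it reverted to imperial" case
[/CODE]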

So users have to have a clue about what they are doing, otherwise the software can deceive them. This has implications for training: for example, skipping all the adding up required for gas planning might not be so smart if it leaves a diver unable to check the planning software. Having a rough idea of how deco time relates to depth and bottom time is good too.
 
I would not overestimate the value of open source here (and I say this as an author of open source decompression software): It's a nice idea in theory but honestly, guess how many people have actually read and understood the relevant parts of the source code in say Subsurface or OSTC? For Subsurface my guess would be about a handful.

I think that's the case for most open source software, and that's OK. A handful of people looking at it is much better than no code review at all (which is unfortunately not uncommon in small companies).
 
The OSTC developers wrote obscure code on purpose. And AFAIK, they're not open any more, by the way.

It's also very easy to skim through the code to tell whether you can trust a developer or not (for someone who has knowledge of this, obviously).
 
For a medical device there are laws which mean developers end up with a proper design and review process, testing and so forth. I do not get the sense that similar rigour is applied here.

I can actually guarantee this. I work in the computer industry, and when "safety critical" software is developed -- for example for flight systems on an airplane, or even for software that ensures that the stop lights at a crossing can't be green in both directions at the same time -- it goes through an especially intensive, interdisciplinary "certification" process involving multiple third-party technical and safety audits that goes much, MUCH deeper than most software engineers would even think possible. What you are saying about medical instruments is probably also true but I've only been witness to developing prototypes and those projects were not certified.

My impression is that almost all decompression software that runs on your PC is written by computer programmers who may be excellent programmers (or not) but who have never been involved in a certification process of the type I'm describing. I'm actually not sure what a company like Suunto does to test its software, but I would be willing to bet that it's more stringent than what happens with a similar "free to download" planning package for your PC.

That said, I remember that when Mark Ellyatt was bent, Bruce Wienke blamed Abyss for faulty software; Abyss insisted its software was not at fault but the model itself was, and the truth about who was really to blame for the faulty schedule never came out. A certification process would have made it absolutely clear whether the software, in any case, was built to spec, so that kind of lifts the skirt a little bit on how computer makers test their software. Around the same time, software issues in the Uwatec computer got people bent too, and in that case the bug was so glaring that even a "scratch and sniff" type of certification would have caught it. All this is to say that this kind of software needs a "body of evidence" that it has been written correctly, and occasionally that body of evidence appears to come from seeing whether divers get bent or not.

Obviously, your first and most important line of defense in evaluating the results is to use common sense. If you're planning a dive on your PC and your software is calculating stops that seem illogical, then assume that it is wrong and check it against another program. For example, I plan a dive using two different algorithms, and if one of them is wildly different from the other then I am strongly inclined to check it against tables.

That said, minor differences are normal because the algorithms are not the same, and we know that some algorithms calculate deeper ceilings than others. My personal approach to this is to "measure with a micrometer and cut with an axe". Once I'm in the water, my computer is the main instrument I use, and I've done a LOT of dives with it that have given me good faith in the information it provides. My buddy uses a different computer (this is a happy coincidence), and if something seems illogical to one of us then we can always refer to the other diver's computer and/or the tables we cut as a last resort for the dive.

In the absence of "certified" software, having multiple references really is necessary.

R..
 
I agree: certification would add to the trust in an implementation. But would you be willing to pay a medical-device price for a dive computer? Plus, "is implemented according to specification" is already non-trivial for something as seemingly simple as "implement the Bühlmann model", as I tried to explain here: Why is Bühlmann not like Bühlmann

For the time being, cross-checks between different computers or programs already get you quite far (just make sure they are not the same thing in different boxes).
 
What you are saying about medical instruments is probably also true but I've only been witness to developing prototypes and those projects were not certified.

By the way, I know this is a major tangent to the thread, so go ahead and ignore it... but I recently attended a technical seminar given by the architect working on my current project. He has been working, together with some universities and hospitals in western Europe, on a device that can detect lung cancer in a matter of seconds from a breath exhaled into a tube.

I don't remember the hit-rate exactly but they are now over 90% (with some false-positives). The whole device (at least the prototype) costs a couple of thousand Euros to build so in the future it will be feasible for your family doctor to have one in their office.

Obviously it's not there yet because they're only prototyping it, but the first indications are that this could be big. The whole thing (for the computer nerds) was built using a BeagleBone, a 3D printer and some chemically sensitive sensors.

So while the technology may cost a couple of thousand Euros, the "certified" version of this device is bound to cost 5 times that (if we're lucky). This is what "atdotde" meant when he mentioned the costs. We *could* certify deco software but who would be able to pay for it?

R..
 
For medical devices it is not about an after-the-fact certification. It is about having a design process and following it. That process has to be appropriate to the risks, and medical devices containing software are automatically regarded as risky. Part of the design process will be acceptance testing: some of it to confirm conformance with existing standards, some to confirm that the device actually does its job.

It is assumed that having a proper process avoids stupid mistakes, and the process will be audited. If you say you will do code reviews, you do none, and someone breaks a critical feature just before release, then there will be consequences.

Testing starts at the start.
 