Uwatec Aladin Air X Nitrox court cases


The case may be old, but I wasn't diving then, and neither were many others, and the case was not known to me. So thanks for bringing it up.

Discussing that specific case is no longer relevant.

I have one general question, though. And I think it is an important one.

Would it be typical of a U.S. company to cover up defects for financial benefit and/or legal immunity?

I mean... all the court cases... Would anyone in their right minds publicly announce that their product may have a lethal flaw?

Should I buy European computers instead? In many other countries, suing anyone at any time for anything seems less common... Maybe corporations in other countries might be more willing to admit their mistakes and do recalls?

In the US it was common for car companies to do a cost-benefit analysis of the problem: if the repair would cost more than the projected costs of the lawsuits, then the companies would just pay the settlements. Ford did this with the Pinto, and GM did this with the Corvair and their pickup trucks. This was well before the US became such a litigious society.

I would argue that you are better off with an American dive computer, because if the projected litigation and settlement costs are high (as in the US), companies are better off disclosing and fixing the problem. For example, there was a recent case of a company selling defective truck tires imported from China; when pressed, they publicly admitted the problem and recommended the tires be taken out of service (then promptly filed for bankruptcy).

---------- Post added October 19th, 2013 at 03:30 PM ----------

The good news is that the lying ******* who caused the harm is out of the company. The current owners seem innocent.

Are you sure he was the culprit or just the scapegoat?

---------- Post added October 19th, 2013 at 03:43 PM ----------

I believe (naively) that most employees care about the customers' well-being.

It is also human to make mistakes. If I were an employer, I would not employ anyone who claims not to make any mistakes. Those liars...

I guess that many would want to announce the mistakes that were made, and to issue fixes/replacements, but then the legal department interferes. I would not blame the individual; I would blame the environment. But this is becoming a political issue then...

If my LIFE depends on an algorithm, then I DO prefer a company that admits its mistakes. I believe that people (and companies) can be divided into two categories: those that admit their mistakes, and those that lie.

People are not perfect and mistakes happen, but with products you would expect that any defects would be uncovered during the testing process.

JO is a publicly-traded company. Their main responsibility is to their shareholders, NOT their customers, so you should expect them to lie to protect the shareholders, especially when a large percentage of managers' wealth is tied to the company's stock price. Besides American companies, Japanese companies do the same thing; Toyota and Toshiba come to mind.
 
The owners of Uwatec, wanting to sell, kept the flaw a secret.

It was a small set of known serial numbers. The flaw was fixed as soon as it was found, and the later serial numbers were okay.

The owners of Uwatec fired the engineers that wanted to do a recall.

JO bought the firm. As far as they were "concerned", everything produced under their watch was flawless.

Finally, one of the engineers got the flaw and the need for a recall to the right people. By then, some divers doing *aggressive* dives had been bent.

You will often find people on this board almost hostile to others who "dive their computers". The above story left a bad taste in a lot of people's mouths. Bottom line: learn the tables, make sure your dive plan makes sense, and follow the computer as long as IT makes sense within the known parameters.

The one common detail in the lawsuits was that the divers were blindly following their computers.
 
The one common detail in the lawsuits was that the divers were blindly following their computers.

Dive computers are supposed to be a reliable replacement for tables. As such, ~99.9% of the people who use a computer rely on it that way, on recreational dives with no planned deco obligation. Those who got bent were doing the exact same thing.

But just in case let me ask this question - do you plan each and every one of your dives on paper first?
 
I would agree about the OSTC. It's open source, which means in theory anybody can code software for it, which could lead to unproven software being used?
The advantage of open source is that anyone can look at the program source, not only the developers. The chance of finding bugs early is greater this way than with closed-source software.
It does not mean that the manufacturer has to include third-party contributions.
 
This is a sobering reminder of how dependent on technology we can be as divers. Yes, I will admit that I "dive my computer", as that is the advantage of a device that accounts not only for nitrox but also for multi-level diving profiles. I do expect my computer to be a replacement for table diving calculated on square profiles; otherwise, why have one?

The bottom line is that nothing can protect you from a hidden algorithm fault (or, in this case, a faulty assumption-- diver breathes nitrox during SI-- inside of a valid algorithm).

I guess the best way to check any computer when you first get it is to use tables to plan and make a series of repetitive square profile dives, and see how closely the computer tracks the plan during and after the actual dives including time-to-fly. If the computer is close to the tables, then it should be functioning properly.
 
One of the things to note is that Raimo had heard about the software problem and bent himself on purpose (to get compensated); at least this is what I heard many years ago.
 
The advantage of open source is that anyone can look at the program source, not only the developers. The chance of finding bugs early is greater this way than with closed-source software.
It does not mean that the manufacturer has to include third-party contributions.

I'm someone who manages development of embedded software for spacecraft, which is the same sort of "safety critical" application space (except here, if your dive computer crashes, you do have a manual workaround: abort the dive and go into your preplanned ascent profile). The concerns I would have would be for subtle logic errors or improper handling of off-nominal conditions that are unlikely to have occurred in field use or in testing without tests specifically planned for them. Open source is no better or worse than closed source in my experience; a lot depends on the specific people involved in the development, and their motivations and software development background.

Open source might not be any better for finding subtle errors, because the challenge is that for any non-trivial software, just having the source probably isn't enough to understand the logic and function behind it. You might find small logic errors (e.g. not handling some error conditions) or typos. Someone recently found a typo in software that I wrote where I had "a1 = a0 - 32768; b1 = b0-32760;". The small error in the calculation for b1 (converting a 0-65535 value from an ADC to a -32768:32767 range) did not cause any noticeable change in the overall performance (because fixed offsets get filtered out downstream in the processing). It is just that kind of typo that someone who is reading the code might find. (And, one might argue, that my style was bad: it should have been coded "a1 = a0 - BASEOFFSET; b1 = b0 - BASEOFFSET;", using the same symbolic constant for both calculations, for just this kind of typo reason.)

But, beyond this kind of bug, you're probably not going to be able to figure out whether the translation from the theoretical underpinnings (e.g. all the differential equations used to model the gas behavior) to the implementation properly covered all the edge cases and boundary conditions. Almost certainly, the "main line" normal function will be correct. It's the off-nominal where the problems arise, and that requires an understanding of the architecture and structure of the software.

For many open source programs (not saying that's the case here) that were created by a very few people who had a clear picture of the architecture in mind (or drawn on the white board, etc.), there aren't sufficient comments or supporting documentation to gain the needed high-level conceptual understanding. There was never any need to "explain" the software to someone else, so such documentation didn't get created (or even thought about). A lot of open source software also doesn't come with all the unit and integration tests: you get the source, and a makefile, and after that you're on your own.

There's also a tendency to "trust the older software" that you've incorporated. (Heartbleed/SSL issues, for instance).
With respect to the OSTC software (for which it's not too hard to find the source code):
You have to be pretty motivated to dig through 70 or so files in assembler and figure out how they are arranged. Yes, there's a "readme" that tells you how to assemble/compile/link the software, but there's not a whole lot of other documentation in the repository, and not a whole lot of description of how the algorithms from Buhlmann are arranged in the code. There ARE comments in the C and ASM code which describe what each section is doing, so maybe with a couple of days' work you could figure out how it all fits together. There may well be descriptions of how it all fits together in other places (e.g. an online forum, or something).

I'm going to guess that there's probably a half dozen people in the entire world who actually understand this code well enough to make meaningful analysis for logic and implementation errors.

Like much open source code, the repository has only the source code, no test cases, no unit test harnesses to verify the function of individual routines, etc. main.c has a few "end to end" test cases, but doesn't provide a lot of information on what you should see as output (e.g. it calls "print_stops()" and presumably one checks that against some other reference).

One might even argue that a closed source development by a company where there is the threat of significant consequences for a defect might produce a higher quality product. There may well be enough resources in such a development to pay people to do code inspections, independent test, hire outside consultants to review the implementation, etc.. An open source provider could do the same thing, of course, but perhaps the financial and other resources would only be available because of the potential return from a proprietary competitive advantage. Aircraft engine control software is held to a very high performance and reliability standard, and is most decidedly not open-source, since that's one of the big differentiators between engines from company A or company B.

---------- Post added October 25th, 2014 at 09:33 AM ----------

The bottom line is that nothing can protect you from a hidden algorithm fault (or, in this case, a faulty assumption-- diver breathes nitrox during SI-- inside of a valid algorithm).

I guess the best way to check any computer when you first get it is to use tables to plan and make a series of repetitive square profile dives, and see how closely the computer tracks the plan during and after the actual dives including time-to-fly. If the computer is close to the tables, then it should be functioning properly.

Actually, a better way would be to calculate what you think the computer should say, by an independent means (be it tables, or analytical means, or even another program), for an "unusual profile" (multiple bounces or something). Put the computer in a pressure pot, run the profile, and see if it comes up the same as the independent calculation.

Square profiles are the easy case: they are the ones most likely to have been tested by the software developer. You want crazy profiles: "down to 30m in 1 minute (carrying a big weight), stay 3 minutes, up to 20m and back down to 30m in 30 seconds (oops, got distracted, overinflated BC), spend some time, slow ascent to 10m (finally remembering to dive the plan), then wait 5 minutes, then drop like a stone to 35m and pop back up to 30m (e.g. an overshoot going back down to get something)." Those are the kinds of profiles that will really exercise the algorithms and any "mode switching" logic that is supposed to recognize ascents and descents.
 