The advantage of open source is that anyone can look at the program source, not only the developers. The chance of finding bugs early is greater this way than with closed source software.
That does not mean the manufacturer has to include third party contributions.
I'm someone who manages development of embedded software for spacecraft, which is the same sort of "safety critical" application space (except here, if your dive computer crashes, you do have a manual workaround: abort the dive and go into your preplanned ascent profile). The concerns I would have are subtle logic errors, or improper handling of off-nominal conditions that are unlikely to have occurred in field use or in testing without specifically planning tests for them. Open source is no better or worse than closed source in my experience; a lot depends on the specific people involved in the development, and on their motivations and software development background.
Open source might not be any better for finding subtle errors, because for any non-trivial software, just having the source probably isn't enough to understand the logic and function behind it. You might find small logic errors (e.g. not handling some error conditions) or typos. Someone recently found a typo in software that I wrote where I had "a1 = a0 - 32768; b1 = b0 - 32760;". The small error in the calculation for b1 (converting a 0-65535 reading from an ADC to a -32768:32767 range) did not cause any noticeable change in the overall performance, because fixed offsets get filtered out downstream in the processing. It is just that kind of typo that someone who is reading the code might find. (And, one might argue, my style was bad: it should have been coded "a1 = a0 - BASEOFFSET; b1 = b0 - BASEOFFSET;", using the same symbolic constant for both calculations, for just this kind of typo reason.)
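To make that concrete, here's a generic sketch in C (not the actual code in question) of the before and after. With bare literals, a reviewer has to notice that 32760 != 32768; with a shared symbolic constant, the two conversions can't silently drift apart:

#include <stdint.h>

/* Illustration only: convert unsigned 16-bit ADC readings to signed values. */
#define ADC_BASE_OFFSET 32768

/* Typo-prone version: the reviewer has to spot that 32760 != 32768. */
int16_t convert_a_literal(uint16_t a0) { return (int16_t)(a0 - 32768); }
int16_t convert_b_literal(uint16_t b0) { return (int16_t)(b0 - 32760); }  /* the typo */

/* Shared constant: both calculations stay in agreement by construction. */
int16_t convert_a(uint16_t a0) { return (int16_t)(a0 - ADC_BASE_OFFSET); }
int16_t convert_b(uint16_t b0) { return (int16_t)(b0 - ADC_BASE_OFFSET); }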
But beyond this kind of bug, you're probably not going to be able to figure out whether the translation from the theoretical underpinnings (e.g. all the differential equations used to model the gas behavior) to the implementation properly covered all the edge cases and boundary conditions. Almost certainly, the "main line" normal function will be correct. It's in the off-nominal cases where the problems arise, and that requires an understanding of the architecture and structure of the software.
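As a purely illustrative (hypothetical) example of what that looks like in code, consider something as small as an ascent-rate calculation. The normal case works, every "normal dive" test passes, and the off-nominal case of two samples with the same timestamp is never exercised:

/* Hypothetical: instantaneous ascent rate from two depth samples.
 * Fine for evenly spaced samples; a sensor hiccup or clock rollover that
 * produces dt == 0 divides by zero, and no square-profile test will hit it. */
double ascent_rate_m_per_min(double depth_prev_m, double depth_now_m,
                             double t_prev_min, double t_now_min)
{
    double dt = t_now_min - t_prev_min;
    return (depth_prev_m - depth_now_m) / dt;   /* missing guard for dt <= 0 */
}

Spotting that from the source alone is possible; knowing whether the surrounding architecture ever lets dt be zero is the part that needs the high-level understanding.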
For many open source programs (not saying that's the case here), where a very few people who had a clear picture of the architecture in mind (or drawn on the whiteboard, etc.) created the original program, there aren't sufficient comments or supporting documentation to gain the needed high-level conceptual understanding. There was never any need to "explain" the software to someone else, so such documentation didn't get created (or even thought about). A lot of open source software also doesn't come with all the unit and integration tests: you get the source and a makefile, and after that you're on your own.
There's also a tendency to "trust the older software" that you've incorporated (the Heartbleed/OpenSSL issue, for instance).
With respect to the OSTC software (for which it's not too hard to find the source code):
You have to be pretty motivated to dig through 70 or so files in assembler and figure out how they are arranged. Yes, there's a "readme" that tells you how to assemble/compile/link the software, but there's not a whole lot of other documentation in the repository, and not a whole lot of description of how the Bühlmann algorithms are arranged in the code, etc. There ARE comments in the C and ASM code which describe what each section is doing, so maybe with a couple of days' work you could figure out how it all fits together. There may well be descriptions of how it all fits together in other places (e.g. an online forum, or something).
I'm going to guess that there are probably a half-dozen people in the entire world who actually understand this code well enough to do a meaningful analysis for logic and implementation errors.
Like much open source code, the repository has only the source code: no test cases, no unit test harnesses to verify the function of individual routines, etc. main.c has a few "end to end" test cases, but doesn't provide a lot of information about what you should see as output (e.g. it calls "print_stops()", and presumably one checks that against some other reference).
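For contrast, the kind of unit-test harness that's usually missing isn't complicated. Here's a hypothetical sketch (the function name, signature, and expected values are invented for illustration; they are not from the OSTC repository): declare the routine under test, feed it inputs, and compare against independently derived reference values.

#include <stdio.h>

/* Hypothetical routine under test: no-decompression limit (minutes) at a depth
 * in metres.  In a real harness this would be the project's own function. */
extern int ndl_minutes(double depth_m);

/* Reference values would come from an independent source (tables, another
 * implementation, hand calculation); these numbers are placeholders. */
static const struct { double depth_m; int expected_ndl; } cases[] = {
    { 18.0, 56 },
    { 30.0, 20 },
    { 40.0,  9 },
};

int main(void)
{
    int failures = 0;
    for (unsigned i = 0; i < sizeof cases / sizeof cases[0]; i++) {
        int got = ndl_minutes(cases[i].depth_m);
        if (got != cases[i].expected_ndl) {
            printf("FAIL: %.1f m: expected %d min, got %d min\n",
                   cases[i].depth_m, cases[i].expected_ndl, got);
            failures++;
        }
    }
    printf("%s (%d failure(s))\n", failures ? "FAIL" : "PASS", failures);
    return failures != 0;
}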
One might even argue that a closed source development by a company, where there is the threat of significant consequences for a defect, might produce a higher quality product. There may well be enough resources in such a development to pay people to do code inspections and independent test, to hire outside consultants to review the implementation, and so on. An open source project could do the same things, of course, but perhaps the financial and other resources are only available because of the potential return from a proprietary competitive advantage. Aircraft engine control software is held to a very high performance and reliability standard, and is most decidedly not open source, since that software is one of the big differentiators between engines from company A and company B.
---------- Post added October 25th, 2014 at 09:33 AM ----------
The bottom line is that nothing can protect you from a hidden algorithm fault (or, in this case, a faulty assumption inside an otherwise valid algorithm: that the diver breathes nitrox during the surface interval).
I guess the best way to check any computer when you first get it is to use tables to plan and make a series of repetitive square-profile dives, and see how closely the computer tracks the plan during and after the actual dives, including time-to-fly. If the computer is close to the tables, then it should be functioning properly.
Actually, a better way would be to calculate what you think the computer should say, by an independent means (be it tables, analytical means, or even another program), for an "unusual profile" (multiple bounces or something). Put the computer in a pressure pot, run the profile, and see if it comes up with the same answer as the independent calculation (a sketch of what that independent calculation might look like follows below).
Square profiles are the easy case: they are the profiles most likely to have been tested by the software developer. You want something crazy: "down to 30m in 1 minute (carrying a big weight), stay 3 minutes, up to 20m and back down to 30m in 30 seconds (oops, got distracted, overinflated the BC), spend some time, slow ascent to 10m (finally remembering to dive the plan), then wait 5 minutes, then drop like a stone to 35m and pop back up to 30m (e.g. an overshoot going back down to get something)." Those are the kinds of profiles that will really exercise the algorithms and any "mode switching" logic that is supposed to recognize ascents and descents.
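To sketch the "independent means" part: the classic Haldane tissue-loading equation can be coded in a few dozen lines and fed the kind of profile described above. This is a minimal, hypothetical sketch in C (the half-times, gas fraction, and segment durations are example values; it tracks inert-gas loading only and is NOT a full Bühlmann implementation with M-values and ascent ramps), just to show that a cross-check doesn't require the manufacturer's code:

#include <math.h>
#include <stdio.h>

#define N_TISSUES 3
static const double half_time_min[N_TISSUES] = { 5.0, 20.0, 60.0 };  /* example compartments */
static const double fn2 = 0.79;          /* breathing air */
static const double surface_bar = 1.0;   /* sea-level ambient pressure */

/* Inspired inert-gas pressure at depth (water vapour ignored for simplicity). */
static double p_insp(double depth_m) { return (surface_bar + depth_m / 10.0) * fn2; }

/* Haldane equation for a constant-depth segment:
 * p(t) = p_insp + (p0 - p_insp) * exp(-k*t), with k = ln(2) / half-time. */
static double load(double p0, double pi, double t_min, double ht_min)
{
    return pi + (p0 - pi) * exp(-(log(2.0) / ht_min) * t_min);
}

int main(void)
{
    /* Rough encoding of the "crazy" profile above as constant-depth segments
     * (depth in metres, time in minutes); durations are illustrative. */
    static const struct { double depth_m, time_min; } seg[] = {
        { 30.0, 1.0 }, { 30.0, 3.0 }, { 20.0, 0.5 }, { 30.0, 5.0 },
        { 10.0, 5.0 }, { 35.0, 0.5 }, { 30.0, 2.0 },
    };
    double p[N_TISSUES];
    for (int t = 0; t < N_TISSUES; t++)
        p[t] = p_insp(0.0);              /* start equilibrated at the surface */

    for (unsigned s = 0; s < sizeof seg / sizeof seg[0]; s++)
        for (int t = 0; t < N_TISSUES; t++)
            p[t] = load(p[t], p_insp(seg[s].depth_m), seg[s].time_min, half_time_min[t]);

    for (int t = 0; t < N_TISSUES; t++)
        printf("tissue %d (%2.0f min): %.3f bar inert gas\n", t, half_time_min[t], p[t]);
    return 0;
}

Run the same profile in the pressure pot, look at what the computer reports, and compare the trends against the independent numbers.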