I do not want to hijack this thread further away from the very clear and accurate post on the limitations of the NEDU deep stops trial (thank you Gozo Diver), and I certainly do not want to re-hash all that was discussed in the several other threads, including a 1000+ post thread on this forum. But since Ross is essentially accusing my co-authors and me of scientific fraud, I just want to correct the several falsehoods on which he bases that argument.
Ross is trying on two different, mutually exclusive narratives about the midpoint analysis that resulted in the trial ending. The first is that we conducted the midpoint analysis during the trial to “salvage” it because the A1 schedule was nearing the sequential trial stop-low criterion and the trial was at risk of terminating (posts 17 and 54); this would have had to happen in 2005–2006, while the diving was underway. The second is that we added the midpoint analysis between the 2008 UHMS workshop proceeding, which he misrepresents as the "original technical version", and the 2011 NEDU Technical Report, which he misrepresents as the "re-written for public consumption" version (posts 41 and 51).
To address the last of these fabrications first: the definitive report is the 2011 NEDU Technical Report (Doolette, Gerth, Gault. Redistribution of decompression stop time from shallow to deep stops increases the incidence of decompression sickness. Panama City (FL): Navy Experimental Diving Unit; 2011 Jul. NEDU TR 11-06). Readers who are familiar with science will know that a conference proceeding is often a preliminary report, as was the case when we were invited to present our findings at the 2008 UHMS workshop (Gerth, Doolette, Gault. Deep stops and their efficacy in decompression: U.S. Navy research. In: Bennett, Wienke, Mitchell, eds. UHMS; 2009. p. 165-85) and the 2008 DAN conference (Gerth, Doolette, Gault. Deep stops and their efficacy in decompression. In: Vann, Mitchell, Denoble, Anthony, eds. Technical diving conference proceedings. DAN; 2009. p. 138-56). Ross may not know how science works, but the rest of his narratives are willful misrepresentation.
Having claimed to have read the UHMS workshop proceedings, he must know that they present the methods and findings of the NEDU study very briefly but do mention the midpoint analysis (on page 178 of the UHMS workshop proceedings, and on page 152 of the DAN conference proceedings). So Ross's claim that the midpoint analysis was added after these preliminary reports for "public consumption" is false.
Ross's other narrative is that we added the midpoint analysis to "salvage" the trial because the A1 schedule was nearing the sequential trial stop-low criterion and the trial was at risk of terminating. This is incorrect for two reasons, both of which have been explained to Ross in several forum threads. First, as has been pointed out in several posts on this thread, one schedule reaching a stop-low criterion would not have terminated the trial. Second, as I have pointed out to Ross in other threads, he is misinterpreting what is admittedly a poor illustration of the sequential trial envelope in the figure he shows (posts 17 and 54). In that figure, the envelope is drawn such that the schedule has to CROSS the envelope to hit a stop criterion (in the NEDU Technical Report the envelope is redrawn such that the trial has to TOUCH the envelope, because that is more intuitive). At the midpoint analysis, the A1 schedule had 3 DCS in 192 man-dives (3/192). The stop-low criterion in this vicinity, based on 95% confidence that the risk of DCS was less than 3%, was 3 DCS in 256 man-dives (3/256). In the figure Ross shows, the A1 schedule could have touched the envelope line in just a few more DCS-free dives, but the envelope is horizontal at that point, and the stop-low would not occur until the trial CROSSED the envelope by emerging at the other end of that horizontal segment with 3/256. In other words, there needed to be another 64 DCS-free dives before we reached a stop-low for the A1 schedule.
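[Aside for readers who want to check the 3/192 vs. 3/256 arithmetic themselves: the sketch below is my own illustration, not a reproduction of the NEDU sequential design. It assumes the stop-low boundary corresponds to a one-sided 95% exact binomial upper confidence limit on DCS risk falling to 3%.]

```python
from math import comb

def binom_tail(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p): chance of k or fewer DCS in n man-dives."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(k + 1))

def upper_cl(k, n, alpha=0.05):
    """One-sided (1 - alpha) exact upper confidence limit on DCS risk,
    found by bisection on the p where P(X <= k) equals alpha."""
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if binom_tail(k, n, mid) > alpha:
            lo = mid  # lower tail still too heavy: true risk could be larger
        else:
            hi = mid
    return (lo + hi) / 2

# At the midpoint, A1 stood at 3 DCS in 192 man-dives: upper limit still above 3%.
print(upper_cl(3, 192))
# Only after 64 more DCS-free dives (3 DCS in 256 man-dives) does it reach ~3%.
print(upper_cl(3, 256))
```

Under that assumption, the 95% upper limit for 3/256 comes out at about 3.0%, while for 3/192 it is still roughly 4%, consistent with the point above that another 64 DCS-free dives were needed before A1 could hit a stop-low.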
The midpoint analysis was part of the trial design; it was conducted at the midpoint of the trial (in 2006); the trial was nowhere near any other termination criterion; and the midpoint analysis was reported in the preliminary conference proceedings in 2008 and 2009, and in the final report in 2011.
David Doolette
Hello David,
I do not deny the midpoint analysis was planned or occurred... It's shown in both reports and the presentations. So part of your post is responding to a non-existent argument. The point I would like to see explained is when the stopping condition was added (see below).
Your post does not deny that the A1 was scraping the bottom reject line. It merely points out exactly where it was going to fall through and cancel the test. That is kinda splitting hairs.
***********
But the question still stands: When did the 2011 extra stopping conditions get added to the test?
In the 2008 reports and presentations, the ONLY published or spoken preset/planned criteria for stopping are the 3 to 7% reject lines. At the midpoint analysis, by all indications, an intelligent decision was made to stop. But was that to satisfy a preset rule? Or because it was the smart thing to do based on the observed results and the potential results to come?
In the 2011 version of the report, this extra preset stopping condition is added:
"The trial was also to be concluded if a midpoint analysis after completion of approximately 188 man-dives on each dive profile found a significantly greater incidence of DCS (Fisher Exact test, one-sided α = 0.05) for the deep stops dive profile than for the shallow stops dive profile"
Surprise, surprise: it's a perfect match to the events and the decision made to stop midway. Notice also that it's a one-sided condition. I think if it had existed before the test, the planning board and reviewers would have corrected that mistake.
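[For anyone following along who hasn't met the quoted rule before: a one-sided Fisher exact test asks how likely it is, given the combined DCS count, that the deep stops profile would accumulate at least its observed share of cases by chance. A minimal pure-Python sketch follows; the midpoint count for the deep stops profile is not given in this thread, so the 10/198 in the demo is a made-up number, with only A1's 3/192 taken from the discussion above.]

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact p-value for the 2x2 table
        [[a, b],   # deep stops:    DCS, no DCS
         [c, d]]   # shallow stops: DCS, no DCS
    testing whether the deep stops row has the GREATER DCS incidence.
    Sums hypergeometric probabilities for a or more DCS in the deep row."""
    N, K, n = a + b + c + d, a + c, a + b
    denom = comb(N, n)
    return sum(comb(K, x) * comb(N - K, n - x)
               for x in range(a, min(K, n) + 1)) / denom

# Hypothetical midpoint table: 10 DCS in 198 deep stops dives (invented figure)
# versus A1's 3 DCS in 192 shallow stops dives quoted in this thread.
p = fisher_one_sided(10, 188, 3, 189)
print(f"one-sided p = {p:.4f}")
```

The one-sidedness is visible in the code: only tables at least as extreme in the deep-stops direction are summed, which is exactly what the quoted condition specifies.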
David, please show to us the extra rule above existed in 2005, before the test started.
*************
You did not address why the VVAL18 (A1) schedule was so far off from its predicted/calibrated/baseline/control value of ~5% pDCS. The A1 was the control, and it should have been able to reproduce the intended ~5% pDCS rate. But it did not. Can you explain why it was so far off? Why is the test allowed to continue when the control baseline is proving invalid?
Or, conversely, did the VVAL18 model get some risk re-calibration after these 188 data points were added?
Thank you