Discussion of the statistics of the NEDU study on the redistribution of decompression stop time from shallow to deep stops


I do not want to hijack this thread further away from the very clear and accurate post on the limitations of the NEDU deep stops trial (thank you Gozo Diver), and certainly do not want to rehash all that was discussed in the several other threads, including a 1000+ post thread on this forum, but since Ross is essentially accusing my co-authors and me of scientific fraud, I just want to correct the several falsehoods on which Ross is basing that argument.

Ross is trying out two different, mutually exclusive narratives about the midpoint analysis that resulted in the trial ending. The first of his narratives is that we conducted the midpoint analysis during the trial to "salvage" the trial because the A1 schedule was nearing the sequential trial stop-low criteria and the trial was at risk of terminating (posts 17 and 54) – this would have had to have been in 2005–2006 while the diving was underway. The second of his narratives is that we added the midpoint analysis between the 2008 UHMS workshop proceedings, which he misrepresents as the "original technical version", and the 2011 NEDU Technical Report, which he misrepresents as the "re-written for public consumption version" (posts 41 and 51).

To address the last bit of this fabrication first: the definitive report is the 2011 NEDU Technical Report (Doolette, Gerth, Gault. Redistribution of decompression stop time from shallow to deep stops increases the incidence of decompression sickness. Panama City (FL): Navy Experimental Diving Unit; 2011 Jul. NEDU TR 11-06). Readers who are familiar with science will know that a conference proceeding is often a preliminary report, as was the case when we were invited to present our findings at the 2008 UHMS workshop (Gerth, Doolette, Gault. Deep stops and their efficacy in decompression: U.S. Navy research. In: Bennett, Wienke, Mitchell, eds. UHMS; 2009. p. 165–85) and the 2008 DAN conference (Gerth, Doolette, Gault. Deep stops and their efficacy in decompression. In: Vann, Mitchell, Denoble, Anthony, eds. Technical diving conference proceedings. DAN; 2009. p. 138–56). Ross may not know how science works, but the rest of his narratives are willful misrepresentation.

Having claimed to have read the UHMS workshop proceedings, he must know that they present the methods and findings of the NEDU study very briefly but do mention the midpoint analysis (page 178 of the UHMS workshop proceedings; also page 152 of the DAN conference proceedings). So Ross's claim that the midpoint analysis was added after these preliminary reports for "public consumption" is false.

Ross's other narrative is that we added the midpoint analysis to "salvage" the trial because the A1 schedule was nearing the sequential trial stop-low criteria and the trial was at risk of terminating. This is incorrect for two reasons, both of which have been explained to Ross in several forum threads. First, as has been pointed out in several posts on this thread, one schedule reaching a stop-low criterion would not have terminated the trial. Second, as I have pointed out to Ross in other threads, he is misinterpreting what is admittedly a poor illustration of the sequential trial envelope in the figure he shows (posts 17 and 54). In that figure, the envelope is drawn such that the schedule has to CROSS the envelope to hit a stop criterion (in the NEDU Technical Report the envelope is redrawn such that the trial has to TOUCH the envelope, because that is more intuitive). At the midpoint analysis, the A1 schedule had 3 DCS in 192 man-dives (3/192). The stop-low criterion in this vicinity, based on 95% confidence that the risk of DCS was less than 3%, was 3 DCS in 256 man-dives (3/256). In the figure Ross shows, the A1 schedule could have touched the envelope line in just a few more DCS-free dives, but the envelope is horizontal at that point, and the stop-low would not occur until the trial CROSSED the envelope by emerging at the other end of that horizontal segment with 3/256. In other words, there needed to be another 64 DCS-free dives before we reached a stop-low for the A1 schedule.
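[Editor's note: for readers who want to check the arithmetic behind a criterion of the form "95% confidence that the risk of DCS is less than 3%", here is a rough Python sketch of a one-sided exact (Clopper-Pearson) upper confidence bound on an incidence rate. This illustrates the general concept only; the NEDU sequential trial envelope was constructed with a different design, so this calculation should not be expected to reproduce the exact 3/256 boundary.]

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def upper_bound(k, n, alpha=0.05):
    """One-sided exact (Clopper-Pearson) upper confidence bound on the
    true incidence given k events in n trials: the largest rate that is
    still consistent (at level alpha) with seeing k or fewer events."""
    lo, hi = 0.0, 1.0
    for _ in range(60):             # bisection on the binomial CDF,
        mid = (lo + hi) / 2         # which decreases as p increases
        if binom_cdf(k, n, mid) > alpha:
            lo = mid                # this rate is still plausible; go higher
        else:
            hi = mid
    return (lo + hi) / 2

# With 3 DCS in 192 man-dives, the 95% upper bound is still well above 3%,
# so a 3%-based stop-low cannot yet have been reached; many more DCS-free
# dives are needed before the bound drops below 3%.
print(upper_bound(3, 192))   # above 0.03
print(upper_bound(3, 320))   # below 0.03
```

The point of the sketch is only that the upper bound falls as DCS-free dives accumulate, which is why a horizontal segment of the envelope must be traversed before a stop-low is reached.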

The midpoint analysis was part of the trial design, it was conducted at the midpoint of the trial (in 2006), the trial was nowhere near any other termination criterion, and the midpoint analysis was reported in the preliminary conference proceedings in 2008 and 2009, and in the final report in 2011.

David Doolette


Hello David,


I do not deny the midpoint analysis was planned or occurred.... It's shown in both reports, and the presentations. So part of your post is responding to a non-existent argument. The point I would like to see explained is when the stopping condition was added (see below).

Your post does not deny that the A1 was scraping the bottom reject line. The post merely points out exactly where it was going to fall through and cancel the test. That is kinda splitting hairs here.


***********

But the question still stands: When did the 2011 extra stopping conditions get added to the test?


In the 2008 reports and presentations, the ONLY published or spoken preset/planned criteria for stopping are the 3% to 7% reject lines. At the midpoint analysis, per all indications, it seems an intelligent decision was made to stop. But was that to satisfy a preset rule? Or because it was the smart thing to do based on observed results and potential results to come?

In the 2011 version of the report, this extra preset stopping criterion is added:

"The trial was also to be concluded if a midpoint analysis after completion of
approximately 188 man-dives on each dive profile found a significantly greater
incidence of DCS (Fisher Exact test, one-sided α = 0.05) for the deep stops dive profile
than for the shallow stops dive profile"



Surprise, surprise, it's a perfect match to the events and decision made to stop mid way. Notice also it's a one sided condition. I think if it existed before the test, the planning board and reviewers would have corrected that mistake.

David, please show to us the extra rule above existed in 2005, before the test started.
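[Editor's note: the test named in the quoted rule is a standard one, and it is easy to state concretely. Here is a minimal pure-Python sketch of a one-sided Fisher exact test on a 2×2 table (DCS vs. no-DCS for two dive profiles). The counts in the usage example are hypothetical, chosen only to show the mechanics; they are not the trial's data.]

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact p-value for the 2x2 table
        group 1:  a events,  b non-events
        group 2:  c events,  d non-events
    i.e. the probability, with all margins fixed, that group 1 shows
    a or more events (the hypergeometric upper tail)."""
    n = a + b + c + d
    row1 = a + b          # group 1 total
    col1 = a + c          # total events across both groups
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        # math.comb returns 0 when the lower index exceeds the upper,
        # which silently skips impossible tables
        p += comb(row1, k) * comb(n - row1, col1 - k) / comb(n, col1)
    return p

# Hypothetical counts: 12 DCS in 198 dives on one profile versus
# 2 DCS in 192 on the other. A split this lopsided is significant
# at the one-sided 0.05 level.
p = fisher_one_sided(12, 186, 2, 190)
print(p < 0.05)
```

A one-sided test is the natural choice when the protocol specifies stopping only if the deep stops profile shows a *greater* DCS incidence; a two-sided test would also react to the deep stops profile doing unexpectedly well.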


*************

You did not address why the VVAL18 (A1) was so far off from its predicted / calibrated / baseline / control value of ~5% pDCS . The A1 was the control, and it should have been able to reproduce the intended ~5% pDCS rate. But it did not. Can you explain why it was so far off? Why is the test allowed to continue when the control baseline is proving invalid?

Or conversely, did the VVAL18 model get some risk re-calibration done after these 188 data points were added?


Thank you
.
 
Hello Dan,

Well, perhaps this thread, which is about statistical interpretation, is not the place to pursue your questions.

There are good answers to all of them, which you can find elsewhere online. If you can't find them, either start a new thread or PM me if you want to discuss it. The problem with doing it here (or starting a new thread, for that matter) is that it will inevitably attract Ross's attention, and he clearly sees no problem with attempting to mislead readers by rehashing his uninformed, incorrect interpretations even though they have been debunked over and over and over. This will force me into having to do it all again, as I have had to here, and I have no appetite or time for it, and neither does Prof Doolette.

Simon M

Hi Simon,

Fair game.
Thanks for the invitation to pm, I will be happy to follow up there and leave this venue for a discussion on statistics.
 
I implore you not to sink to their level and respond to the trolls. The original post was interesting and useful until Ross and Dan realised that this was yet another opportunity to bash you and David. I don't think that their nefarious questions or accusations deserve a response; responding would only lend them a legitimacy they do not have.

Homo sum: humani nihil a me alienum puto. ("I am human: nothing human is alien to me.")

I'll be taking my questions elsewhere, but I'll have it said that for my part, I find your post above to be unnecessarily derogatory.

I have serious questions and am looking for serious answers. I maintain an orderly tone and discuss the matter at hand – and I'm neither looking to bash anyone nor to present nefarious accusations.


Dan
 
I do not deny the mid point analysis was planned or occurred... The point I would like to see explained is when the stopping condition was added... David, please show to us the extra rule above existed in 2005, before the test started.

So you are accusing me of scientific fraud Ross. The appropriate forum for that is to take it up with my institution.
 
Indeed this thread has ended up veering off from the intended aim of the article to which I had originally posted a link. The original motivation, as I believe should be clear enough, was a summary of the statistical underpinnings of the NEDU trial. This was intended to be an informative piece for the diving community by way of explaining in simple terms the (mathematical) tools that those of us in the scientific community regularly use to ascertain statistical significance (or otherwise) of a given result.

I thought it would represent a good opportunity (a) for those who sought a better understanding of the NEDU study itself, including its limitations, and (b) to serve as an educational resource for those interested in acquiring an understanding of (a small subset of) statistical analysis and scientific methodology. The main stimulus was the numerous discussions and chats I had heard between divers (over the years) about the topic. Throughout, it had become apparent to me that certain errors were being repeated and promulgated without consideration. Oftentimes, this was due to a lack of understanding of the mathematics involved. However, despite this unfortunate aspect, what remained clear to me was that there was a genuine interest to better understand what the numbers meant. And I deemed that to be a good thing.

Coming to this thread, I would therefore kindly ask, just as others have done before, to keep any further discussion (should there be any) focussed on this topic. Not only do other discussions derail from this original motivation, but they also end up needlessly alienating readers. Moreover, numerous other aspects have already been discussed elsewhere, so there is simply no need to rehash them here.

I hope and trust that those who read the original article found something of value. I take the opportunity to also thank those who wrote to express their appreciation of the effort. At around 21 pages, it is not a short read; the reason is that it was meant to be a balanced, fair and objective look at the statistics of this study. To achieve that whilst remaining understandable to a general but interested audience, it necessarily became a lengthy piece.

I also hope that those who choose to read this thread first, will quickly come to realise that the topic of discussion strayed off course a number of times, and that the best place to start off is with the (a) NEDU trial report and (b) article themselves.

I remain hopeful that any further discussion will be kept objective and civilised.

Happy & safe diving to all,

Joseph
 
So you are accusing me of scientific fraud Ross. The appropriate forum for that is to take it up with my institution.

David,

No, I'm not accusing you of anything. I'm simply asking questions and pointing out discrepancies that we can see from the public side of this. You seem to be taking it all very personally. If you don't know the answers, or don't want to comment, then please just say that.


This thread is about the experiment design and the math behind it. My questions relate to that.

Thanks for stopping by.
 
Coming to this thread, I would therefore kindly ask, just as others have done before, to keep any further discussion (should there be any) focussed on this topic.

A Friendly Reminder...

As the OP (Original Poster), you kind of own this thread. Please feel free to "report" any post that you feel is off topic, unfriendly or otherwise problematic. You won't get an 'automatic', but being the OP, your opinion counts more than most.

For everyone else in this thread, don't blame others if you and/or your posts are singled out as being trollish. Yeah, there are only two of you, and your actions are obviously "POV Warrior" material. Pointing out such behaviors is not nearly as 'derogatory' as committing them. You aren't being called idiots or the like directly, so the no harm/no foul rule applies. It might be a shock, but users disagreeing with you is not seen as a personal attack by staff. If you want people to stop pointing these behaviors out, then simply stop committing them. Try a different drum for a bit. Stop hijacking the discussions to serve your POV. No matter how important you believe they are to everyone else, we're tired of them. No, it's not that we don't understand them, but we simply find your logic and conclusions fallacious on several different levels. We're especially not going to tolerate you accusing research scientists of fraud and malfeasance any longer. Hopefully, people will report you rather than try to engage you. Yes, that's a sincere request to stop feeding the POV Warriors. Let the mods handle them!

Definition: A POV Warrior is a user possessing a narrow or controversial "point of view" and uses every opportunity to turn, hijack or derail a thread to push their personal or agency agenda (POV). Quite often they have no idea that they are doing this and they should learn to listen when others complain about their behavior. While we expect conversations to wander a bit, the OP (Original Post or Poster) sets the subject for the thread and we kindly ask you to honor their intentions. If/when the OP complains, the mods are sure to take a close look.

PLEASE DON'T ENGAGE POV WARRIORS IN THE THREAD! Instead, use the report button at the bottom of each and every post to simply report them and let the mods deal with it behind the scenes. You will continue to hijack the thread by discussing their boorish behavior within it. That includes this thread. You may discuss this topic more here: What is a POV Warrior?
 
This thread is about the experiment design and the math behind it. My questions relate to that.

I think David Doolette has very adequately responded to your previous remarks about the midpoint analysis. Given the nature of some of those remarks, his contribution was a very appropriate one at that point.

It should be clarified that since you raised doubt over the design of the midpoint analysis, specifically over whether it had been decided beforehand or after the fact, it was entirely appropriate for him to reply on that point.
 
:reminder: As the OP (Original Poster), you kind of own this thread. Please feel free to "report" any post that you feel is off topic, unfriendly or otherwise problematic. You won't get an 'automatic', but being the OP, your opinion counts more than most.

Thanks for the heads-up about this.
 
Thanks for this analysis Joseph. It's well done and easily understandable. Good job!
 
