Discussion of the statistics of the NEDU study on the redistribution of decompression stop time from


Thank you, Joseph, for this explanation of the statistics.

I find it quite disturbing that the medical officers were not blinded to the study. The DMO should not need to know which ascent profile was used as they both have the same depth and time. The treatment options are the same for both.

It's the DMO's opinion that ultimately determines the outcome of a test. Given how the A2 profile was such a marked departure from the normal dive style of the USN, it's not surprising that a DMO might become extra cautious. If any place needed to be fully protected from bias, it's the assessment phase.


The study was approved for twice as many dives, but stopped halfway. The A2 results at the halfway point were well below the approved test's high-reject limit. The A2 result was in the middle of the permitted testing allowable range. In fact, the A2 was the only profile on the desired course.

Conversely, the A1 (shallowest) profile was about to trigger an automatic reject-low failure. Had they run two more days of testing on the A1 profile without incurring any further injury, it would have crossed the automatic reject-low limit. The test could have been forced to cancel and the result perhaps invalidated. It was the failure of the baseline A1 VVAL18 profile to live up to established model predictions that cut the test short.

It's my opinion the test stopped early to salvage what they could from an expensive test procedure that was about to be scrapped. This story line we hear today about excess injury cancelling the test is just not true.
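The reject-low idea above can be sketched as a simple binomial tail calculation. This is only an illustration of the general concept; the 3% floor, dive count, and DCS count below are hypothetical, not the NEDU protocol's actual sequential bounds:

```python
# Illustration only: a low-side stopping check as a binomial tail
# probability. The 3% floor, dive count, and DCS count below are
# hypothetical, not the NEDU protocol's actual sequential bounds.
from scipy.stats import binom

p_floor = 0.03        # assumed lower edge of the accepted pDCS band
n_dives = 192         # hypothetical number of man-dives completed
observed_dcs = 3      # hypothetical number of DCS cases observed

# Probability of seeing this few cases (or fewer) if the true
# incidence sat right at the 3% floor. A very small value would
# suggest the profile is running below the band the test targets.
p_low = binom.cdf(observed_dcs, n_dives, p_floor)
print(f"P(X <= {observed_dcs} | n={n_dives}, p={p_floor}) = {p_low:.3f}")
```

A real sequential protocol would pre-compute accept/reject counts for each stage rather than evaluate a single tail probability, but the arithmetic is of this kind.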


This story line we hear today about excess injury cancelling the test is just not true.

This is unmitigated nonsense and can easily be demonstrated as such. The protocol clearly stated:

The trial was also to be concluded if a midpoint analysis after completion of approximately 188 man-dives on each dive profile found a significantly greater incidence of DCS (Fisher Exact test, one-sided α = 0.05) for the deep stops dive profile than for the shallow stops dive profile.


That is exactly what happened. To meet the obligations of the IRB approval, the trial had to be stopped at the midpoint analysis because there was a greater incidence of DCS (or "excess injury", as you put it) in the deep stops profile that met the pre-defined statistical end point. Which bit of this do you not understand?

Conversely, the A1 (shallowest) profile was about to trigger an automatic reject-low failure. Had they run two more days of testing on the A1 profile without incurring any further injury, it would have crossed the automatic reject-low limit. The test could have been forced to cancel and the result perhaps invalidated. It was the failure of the baseline A1 VVAL18 profile to live up to established model predictions that cut the test short.
It's my opinion the test stopped early to salvage what they could from an expensive test procedure that was about to be scrapped.

You tried this line in the RBW deep stops threads and were told this by the principal author of the study:

"We had stopping rules if both schedules had unexpectedly high or low risk, which were likely to result in severe DCS or an inconclusive result, respectively. We never came close to these (the figure presented in an earlier post is misinterpreted)".

The misinterpretation Dr Doolette is referring to was by you, of course. Yet here you are in another forum 5 years later, quite happy to peddle previously corrected misinformation in an attempt to prejudice divers' views of one of the most important diving medicine studies in recent history.

I find it quite disturbing that the medical officers were not blinded to the study. The DMO should not need to know which ascent profile was used as they both have the same depth and time. The treatment options are the same for both.

It's the DMO's opinion that ultimately determines the outcome of a test. Given how the A2 profile was such a marked departure from the normal dive style of the USN, it's not surprising that a DMO might become extra cautious. If any place needed to be fully protected from bias, it's the assessment phase.

For DMO bias to have affected the results in the direction that they fell, two things would have had to happen:

1. The DMOs would have had to ignore the widespread perception that deep stop approaches were likely to be superior.

2. The DMOs would have had to become unfocussed on their fundamental duty to the diver/patient in front of them, and with whom they would continue to serve in the future, in formulating the most appropriate management for the presenting clinical problem.

Unlikely.

Simon M
 
Just two very quick points:

- Midpoint analysis: the protocol that was adopted for this study is described in the report itself.
- Potential bias: I can't really add much to this, as it would mostly be a hypothetical exercise.

Joseph
 
It's my opinion the test stopped early to salvage what they could from an expensive test procedure that was about to be scrapped.

With respect, our opinions are meaningless unless we are able to back them up with proof or facts.

This story line we hear today about excess injury cancelling the test is just not true.

Again can you support this claim with some verifiable facts or is this just your opinion?
 
This is unmitigated nonsense and can easily be demonstrated as such. The protocol clearly stated:

The trial was also to be concluded if a midpoint analysis after completion of approximately 188 man-dives on each dive profile found a significantly greater incidence of DCS (Fisher Exact test, one-sided α = 0.05) for the deep stops dive profile than for the shallow stops dive profile.


That is exactly what happened. To meet the obligations of the IRB approval, the trial had to be stopped at the midpoint analysis because there was a greater incidence of DCS (or "excess injury", as you put it) in the deep stops profile that met the pre-defined statistical end point. Which bit of this do you not understand?



You tried this line in the RBW deep stops threads and were told this by the principal author of the study:

"We had stopping rules if both schedules had unexpectedly high or low risk, which were likely to result in severe DCS or an inconclusive result, respectively. We never came close to these (the figure presented in an earlier post is misinterpreted)".

The misinterpretation Dr Doolette is referring to was by you, of course. Yet here you are in another forum 5 years later, quite happy to peddle previously corrected misinformation in an attempt to prejudice divers' views of one of the most important diving medicine studies in recent history.



For DMO bias to have affected the results in the direction that they fell, two things would have had to happen:

1. The DMOs would have had to ignore the widespread perception that deep stop approaches were likely to be superior.

2. The DMOs would have had to become unfocussed on their fundamental duty to the diver/patient in front of them, and with whom they would continue to serve in the future, in formulating the most appropriate management for the presenting clinical problem.

Unlikely.

Simon M

I can see you are upset, but my statements hold true, and I will demonstrate that now below.

[Attached image: nedu_design1.png]




The allowable limits of this test were 3% to 7% pDCS.



[Attached image: nedu_result.png]

[Attached image: nedu_result_predicted.png]




The predicted rates of injury were 3.7% to 6.2% pDCS.


We can see the A2 profile was right down the middle (5%). It was performing as predicted and was a well-behaved test sample.

There were no excessive injury rates present.


Also shown, the A1 profile had a very low return and was in danger of falling out the bottom of the limits. It was way outside its predicted range and headed towards rejection.


As I said before, I am inclined to think that the midpoint analysis was more of an excuse to stop a test that had gone off the rails than anything else. Obviously it makes sense to stop the test halfway and salvage what you can, and that is what I think really happened.


******

The interesting part of this is that the A1 profile was based on the new VVAL18 (deterministic) model plan, but the predicted pDCS was way off the mark. This model was already in use in some RB tables, and was destined to be the new USN OC table set model.

Conversely, the A2 profile, a probabilistic plan, was spot on the predicted pDCS, and yet it gets criticized for behaving properly.
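One way to sanity-check whether an observed incidence sits inside an allowed 3%-7% band is an exact (Clopper-Pearson) binomial confidence interval. A minimal sketch, with hypothetical counts rather than the study's reported data:

```python
# Sketch: does an observed DCS incidence sit inside a 3%-7% pDCS band?
# Uses an exact (Clopper-Pearson) binomial confidence interval.
# The counts below are hypothetical illustrations, not the study's data.
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact two-sided (1 - alpha) confidence interval for a proportion."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# Hypothetical: 10 DCS cases in 198 man-dives (~5.1% point estimate)
lo, hi = clopper_pearson(10, 198)
print(f"95% CI for incidence: {lo:.3f} - {hi:.3f}")
```

With counts of this size the interval is wide, which is exactly why a few hundred man-dives per profile were needed to resolve differences of a few percent.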


With respect our opinions are meaningless unless we are able to back up our opinions with some proof or facts.

Again can you support this claim with some verifiable facts or is this just your opinion?

Done :D

 
It's my opinion the test stopped early to salvage what they could from a expensive test procedure that was about to be scrapped.
Apologies for my ignorance, but can you show me how your post proves that the "test stopped early to salvage what they could from an expensive test procedure that was about to be scrapped"?

 
I am not a scientist, so I would like an explanation. As @rossh pointed out, the A1 profile was outside its predicted range and in danger of breaching the test limits. In my layman's mind, if this profile yields a lower percentage of DCS injuries than predicted, that should be GOOD, right?
 
In my layman's mind, if this profile yields lower percentage of DCS injuries than predicted, that should be GOOD,right?

What it means is that something's wrong with the math. If A1 and A2 both claim to have 5% pDCS, but in reality A1 is 1% pDCS, you are no longer comparing apples to apples. The test is invalid at that point.

In theory it sounds great that A1 greatly reduced the DCS probability over A2, but the deco times were unrealistic. When you look at the distribution of stops, if I double all stops to meet an arbitrary deco time, A2 obviously doesn't succeed, because its stops are skewed deeper. The problem is that nobody does stops for that amount of time at those depths, specifically on a 30-minute dive to 170 ft.

As far as blinding of the study goes, I wonder if the divers also somehow knew which profile they were diving. That and chit-chat could lead to a higher probability of what I like to call “Web MD hypochondria”.
 
Folks, just to avoid confusion, interim midpoint analysis (before completion of data acquisition) is established practice in clinical trials (i.e. not unique to this study) for ethical reasons. For this specific study, one of the protocol conditions for termination was the following (quoting directly):

"The trial was also to be concluded if a midpoint analysis after completion of approximately 188 man-dives on each dive profile found a significantly greater incidence of DCS (Fisher Exact test, one-sided α = 0.05) for the deep stops dive profile than for the shallow stops dive profile."

The one-sided Fisher test (α = 0.05) at this midpoint yielded p=0.0489, which is less than 0.05. On this basis, the trial was stopped at its midpoint, as dictated by the protocol. (Further explanation is available in the article itself.)
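The midpoint comparison described here can be reproduced in a few lines with a one-sided Fisher exact test. The 2x2 counts below are hypothetical placeholders; substitute the man-dive and DCS counts from the report itself:

```python
# Reproducing the kind of midpoint comparison described above with a
# one-sided Fisher exact test. The 2x2 counts here are hypothetical
# placeholders; substitute the counts from the report itself.
from scipy.stats import fisher_exact

#                 DCS  no DCS
deep_stops    = [ 10,  188]   # hypothetical: 10 of 198 man-dives
shallow_stops = [  3,  189]   # hypothetical:  3 of 192 man-dives

# One-sided test: is DCS incidence greater on the deep stops profile?
_, p_one_sided = fisher_exact([deep_stops, shallow_stops],
                              alternative="greater")
print(f"one-sided Fisher exact p = {p_one_sided:.4f}")
```

If the resulting p is below the pre-specified α of 0.05, the protocol's midpoint stopping condition is met, which is what the report describes.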

Joseph
 
