The fact that one or both profiles did not behave according to pre-trial predictions is a legitimate outcome of the study: it does not invalidate it in any way.
Simon M
According to this list, "10 Elements of Good Experimental Design":
* Includes a control for comparison
* Can be reproduced by other scientists to give similar results
The VVAL18 model profile (A1) was that control. But in this test, the control proved to be neither a reliable nor a repeatable baseline.
VVAL18 was a contemporary and well-used model within NEDU testing. It has the backing and calibration of the NEDU's extensive man-tested database, and was an extension of the previous (Thalmann) model and work. It was used to make the new Mk rebreather tables.
The A1 profile is an ordinary, simple ascent that was reportedly dived exactly as the model intended. Its depth/time profile sits within the norms of previous NEDU testing and development: the table test limits, calibrations, and saved data points.
I point out that a condition of this test was that both profiles have the same run time and the same risk (isorisk). To achieve that, they manipulated the A2 profile to fit it into the same envelope as the A1 profile.
So why did the injury rate on the VVAL18 model profile stray so far off course, beyond its tested/predicted pDCS range? Was the baseline defective in some way, or was the prediction off for some reason? This absence of a reliable baseline calibration in the test needs to be explained.
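Whether an observed control-arm incidence really sits "beyond" a model's predicted pDCS is a question one can sanity-check with an exact binomial tail probability: if the model's prediction p were correct, how likely is it to see that many DCS hits (or more) in that many man-dives? The sketch below uses made-up illustrative numbers, not the trial's actual data, and a hand-rolled tail function rather than any NEDU tooling:

```python
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing k or
    more DCS hits in n dives if the predicted risk p is correct."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers for illustration only (NOT the study's data):
# suppose the model predicted pDCS = 2% and the control arm logged
# 10 hits in 200 man-dives.
p_pred, n_dives, hits = 0.02, 200, 10
tail = binom_tail(n_dives, hits, p_pred)
print(f"P(>= {hits} hits | p = {p_pred}, n = {n_dives}) = {tail:.4f}")
```

A small tail probability would mean the observed rate is hard to reconcile with the prediction; a moderate one would mean the "excess" could plausibly be sampling noise. Either way, this is exactly the kind of calibration check the baseline question above is asking for.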
The question remains: if the NEDU cannot reproduce its baseline control data points (as shown in this test), then the experiment was not the iso-risk comparison they say it is.