The 126-page 1994 DSAT RDP document has its goal clearly stated. Scubalab's article also has its goal clearly stated (wrt NDLs and taking the computers together on a dive).
The DSAT RDP document states very clearly the problem with the existing algorithm, the issues driving the development of the new one, the rationale behind their testing schedule, and the discussion and interpretation of the results.
And you don't see any difference between all that and what ScubaLab has? -- ScubaLab: "How We Test Dive Computers", under "Objective test protocol" (sic)
(The best rationale I have for those particular profiles is that Craig said they're representative of recreational dives.) To gauge the performance of the computers’ algorithms, they were subjected together to a series of four dive simulations in the USC Catalina Hyperbaric Chamber.
Meant to simulate a day of diving, the dive profiles (shown in the four charts) were:
...
At least in 2016 they added "even with the caveat that the data applies only for profiles and conditions like those in our test."
Scubalab's data shows that for their Dive 1 in 2014, 2016, and 2017, your statement is wrong.
Since they weren't testing the same computers, I am not sure what it is that you actually mean. I didn't see the same reference computer showing the same numbers every year. Without a baseline reference, you might as well be comparing random numbers.
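To make the point concrete, here's a toy sketch of what a baseline buys you (all model names and NDL numbers below are invented for illustration, not ScubaLab data): if the same reference unit is run in the chamber every year, each computer's reading can be expressed as a delta against it, which cancels out year-to-year differences in how the profile was actually run.

```python
# Hypothetical example: NDL readings (minutes) at the same checkpoint,
# across different test years. All values and model names are made up.
readings = {
    2014: {"Reference-X": 12, "Model-A": 10, "Model-B": 15},
    2016: {"Reference-X": 14, "Model-A": 12, "Model-C": 18},
    2017: {"Reference-X": 11, "Model-B": 13, "Model-D": 9},
}

for year, ndl in sorted(readings.items()):
    ref = ndl["Reference-X"]  # same physical unit run every year
    # Expressing each computer relative to the reference removes the
    # year-to-year variation in how the chamber profile was executed,
    # so the deltas are comparable across years; raw minutes are not.
    deltas = {model: minutes - ref
              for model, minutes in ndl.items() if model != "Reference-X"}
    print(year, deltas)
```

Without that common reference column, the 2014 and 2017 numbers differ for reasons you can't separate from the computers themselves.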
Admittedly, my views on experiment design are heavily influenced by a couple of decades of running computers for scientists. I'm sure I'm being too harsh on the ScubaLab guys; they're just regular hard-working journalists, salt of the earth...