As mentioned, the leading researchers in decompression theory have essentially said that the theory behind bubble models is not in line with the current state of the art in decompression. The Spisni study was DIR divers thinking their brains were smarter than computers; they were proven wrong. The researchers have come out saying that even the Shearwater default of 30/70 for gradient factors has a GF-Lo that is far too low for optimal decompression. If you're doing NDL diving it really doesn't matter, but if you're doing dives where the bubble models have you prioritizing deep stops, well, there is a reason no real technical diving is done with Suunto computers.
I agree with everything you said. Your statement that "bubble models is not in line with the current state of the art in decompression" is perhaps a softer way of saying that bubble models have not been proven safe, or, if your glass is half full, that bubble models haven't been proven unsafe. Regardless, I believe that Buhlmann with GFs, assuming those GFs are higher than originally used, is more efficient for decompression. But efficiency doesn't necessarily equate with safety. Obviously, if you ascend immediately to the surface from a 170 ft, 30 minute dive you're going to be very efficient; your pressure gradient will be greater than with any deco stops, but you'll be bubbling worse than a shaken warm beer on a 100 degree day. (Did you know what S.C.U.B.A. really stands for? Some Come Up Bubbling Alot.)
But there's another reason why technical divers prefer Buhlmann with GFs over bubble models: it's much harder to visualize what VPM-B +2 looks like than it is to read a tissue compartment supersaturation/depth chart. What does +2 even mean? Even if you know what the critical bubble size is, how does that affect the graph of supersaturation versus depth relative to the critical supersaturation line?
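Gradient factors, by contrast, are simple enough to sketch in a few lines of code. A GF is just a fraction of the Buhlmann-allowed supersaturation, interpolated linearly from GF-Lo at the first stop to GF-Hi at the surface. Here's a minimal sketch (the a/b pair and the depths are placeholders I picked for illustration; real ZH-L16 tables assign an a/b pair to each compartment):

[code]
# A minimal sketch of how gradient factors scale the Buhlmann M-value line.
# The a/b pair below is a placeholder, not a specific ZH-L16 compartment.
# Depths in metres, pressures in bar.

def m_value(p_amb, a=0.65, b=0.81):
    """Raw Buhlmann tolerated inert-gas pressure at ambient pressure p_amb."""
    return p_amb / b + a

def gf_at_depth(depth, first_stop, gf_lo=0.30, gf_hi=0.70):
    """Linear interpolation: GF-Lo at the first stop, GF-Hi at the surface."""
    frac = max(0.0, min(1.0, depth / first_stop))
    return gf_hi + (gf_lo - gf_hi) * frac

def gf_limit(p_amb, gf):
    """Tolerated pressure when only a fraction gf of the allowed
    supersaturation (M-value minus ambient) is permitted."""
    return p_amb + gf * (m_value(p_amb) - p_amb)

for depth in (21, 15, 9, 3, 0):     # metres
    p_amb = 1.0 + depth / 10.0      # rough: +1 bar per 10 m of seawater
    gf = gf_at_depth(depth, first_stop=21)
    print(f"{depth:>2} m: GF = {gf:.2f}, tolerated = {gf_limit(p_amb, gf):.2f} bar")
[/code]

Plot the tolerated line against the raw M-value line and you can see exactly how conservative a given GF pair is. There's no equivalent one-liner for what "+2" does to a bubble model.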
[--- open can of worms here]
Some (all?) of the researchers behind the "beloved" NEDU study reached a conclusion that went beyond what they were actually studying: the effect of supersaturation under deep stops versus shallow stops. With regard to the safety of shallow stops over deep ones they proved their point, but there was no mention of bubble size and its effects on decompression. You can read the study yourself (most of you already have). Here is what I wrote on the NEDU thread:
Here are the numbers from the NEDU study (depth = 170 ft, BT = 30 min.):
Shallow-stops schedule (depth/time): 40/9, 30/20, 20/52, 10/93. DT = 174 min (neglecting ascent time; DT = decompression time).
Deep-stops schedule (depth/time): 70/12, 60/17, 50/15, 40/18, 30/23, 20/17, 10/72. DT = 174 min.
Running the dive plan on my Perdix with VPM-B +2 gave this profile for the same depth and BT:
Deco stops (depth/time): 100/1, 90/2, 80/3, 70/4, 60/5, 50/5, 40/7, 30/14, 20/20, 10/34. DT = 95 min.
Running the dive plan on my Perdix with GFs of 35/82 gave this profile for the same depth and BT:
Deco stops (depth/time): 80/1, 70/2, 60/3, 50/5, 40/6, 30/11, 20/21, 10/43. DT = 95 min.
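If you want to check the arithmetic, the stop totals are easy to sum (schedules copied from above; note that a plain sum of the GF 35/82 stops gives 92 min, so the quoted DT = 95 presumably includes ascent time between stops, which a simple sum won't capture):

[code]
# Sum the stop times quoted above. The NEDU pair was deliberately
# time-matched; the Perdix plans were both quoted at DT = 95 min, which
# for the GF plan appears to include ascent time a plain sum omits.
schedules = {
    "NEDU shallow": {40: 9, 30: 20, 20: 52, 10: 93},
    "NEDU deep":    {70: 12, 60: 17, 50: 15, 40: 18, 30: 23, 20: 17, 10: 72},
    "VPM-B +2":     {100: 1, 90: 2, 80: 3, 70: 4, 60: 5, 50: 5,
                     40: 7, 30: 14, 20: 20, 10: 34},
    "GF 35/82":     {80: 1, 70: 2, 60: 3, 50: 5, 40: 6, 30: 11, 20: 21, 10: 43},
}
for name, stops in schedules.items():
    print(f"{name}: {sum(stops.values())} min of stops")
# -> 174, 174, 95, 92
[/code]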
As you can see, the deepest stops for VPM-B are very short compared to the Navy's deep-stop schedule (1, 2, 3, 4 min vs. 12, 17, 15, 18 min). This is in keeping with the idea of limiting slow-tissue on-gassing, and the resulting supersaturation of those tissues that led to the DCS hits reported in the Navy study, while still limiting bubble growth beyond the critical size.
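To put a rough number on it: with the standard Haldane exponential, a compartment keeps on-gassing at any stop where the inspired inert-gas pressure exceeds its tissue tension. Here's a back-of-the-envelope sketch, with assumptions that are mine and not the study's (air as the gas, a 77-minute half-time standing in for a "slow" compartment, a square bottom profile with instantaneous depth changes):

[code]
import math

FN2 = 0.79                  # inert fraction of air (assumption: air dive)
HALF_TIME = 77.0            # minutes; an arbitrary "slow" compartment
K = math.log(2) / HALF_TIME

def p_amb(depth_ft):
    return depth_ft / 33.0 + 1.0   # atm, rough seawater conversion

def tension(tissue, depth_ft, minutes):
    """Haldane exponential approach toward the inspired inert-gas pressure."""
    inspired = FN2 * p_amb(depth_ft)
    return inspired + (tissue - inspired) * math.exp(-K * minutes)

# Square 170 ft / 30 min bottom phase, starting from surface saturation.
tissue = tension(FN2 * p_amb(0), 170, 30)
print(f"after bottom phase:     {tissue:.2f} atm")                   # ~1.75

# First stop of the Navy deep schedule vs. first stop of VPM-B +2.
print(f"after 12 min at 70 ft:  {tension(tissue, 70, 12):.2f} atm")  # ~1.83
print(f"after  1 min at 100 ft: {tension(tissue, 100, 1):.2f} atm")  # ~1.77
[/code]

Twelve minutes at 70 ft keeps loading the slow compartment; one minute at 100 ft barely moves it, which is exactly the difference the stop tables above show.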
My question is: why would you be so quick to agree that bubble models in general, and VPM in particular, are inferior to dissolved-gas models in general, and Buhlmann GFs in particular, when the NEDU study tested something quite different from what [non-Navy] bubble models are doing?
[--- close can of worms here]