If you don't mind me asking, what exactly do you mean by:
"Most models are based on RGBM, Buhlmann, DSAT, the US Navy, or one of a few others."
(Sorry if this should be known, but that's kind of why I posted the thread, to learn more about this stuff.)
So, because it is all theoretical, there are different theories on which model is correct and which is the best algorithm - in essence, which halftimes and M-values get you bent and which ones keep you safe. (Halftimes describe how fast a modeled tissue compartment takes up and releases inert gas; M-values are the maximum supersaturation a compartment is allowed before the model calls for a slower ascent or a stop. There's a rough sketch of the math below.)
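To make the halftime/M-value idea concrete, here's a rough Python sketch of the classic Haldane-style math that most of these dissolved-gas models share under the hood. The specific numbers (a 5-minute halftime, an M0 of 2.96 bar) are made up for illustration - they're in the same ballpark as published Buhlmann values but not taken from any real table, and a real model tracks 8-16 compartments, not one:

```python
import math

def tissue_pressure(p0, p_alv, halftime_min, t_min):
    """Haldane equation: inert-gas pressure in one tissue compartment
    after t_min minutes at a constant inspired pressure p_alv.
    All pressures in bar; halftime_min is the compartment halftime."""
    k = math.log(2) / halftime_min          # rate constant from the halftime
    return p_alv + (p0 - p_alv) * math.exp(-k * t_min)

# 25 minutes at 30 m (ambient ~4.0 bar) breathing air (~79% N2).
# Starting tissue N2 is the surface value, 0.79 bar; water-vapor and
# alveolar corrections are ignored to keep the sketch simple.
p_n2 = tissue_pressure(p0=0.79, p_alv=0.79 * 4.0, halftime_min=5.0, t_min=25.0)

# Surfacing M-value: the most N2 this compartment may hold at 1 bar
# ambient. 2.96 bar is an illustrative number, not from a real table.
m0 = 2.96
status = "over" if p_n2 > m0 else "within"
print(f"tissue N2 after the dive: {p_n2:.2f} bar ({status} M0 = {m0} bar)")
```

Run it and the 5-minute compartment comes out just over its M-value - the model's way of saying "don't go straight to the surface." The differences between algorithms largely boil down to how those halftimes and M-values (or bubble parameters, in RGBM's case) are chosen.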
In the beginning, there was the US Navy - essentially the only organization interested in diving with enough resources to work out what works and what doesn't. Their work was mostly done the good old-fashioned way, using humans as guinea pigs: if a diver got bent on a profile, it got marked as bad; if they surfaced fine, not bad.

Then NOAA and other organizations used the Navy tables as a basis and worked from there. Things like nitrogen off-gassing rates were some of the more important variables between the different tables - one of the reasons the PADI tables allow more bottom time on repetitive dives than the Navy tables, which were designed around decompression diving.

Then came a guy named Wienke, who developed the Reduced Gradient Bubble Model (RGBM). He did a lot of work with Doppler ultrasound, listening for microbubbles in divers after ascent, and his tables were designed to prevent that microbubble formation.
Because of this, models based on RGBM tend to be much more conservative than the earlier dissolved-gas models. That said, no significant differences in DCS rates between one algorithm and another have been demonstrated, at least to my knowledge...