The goal is to produce the safest profile using machine learning. The idea of this method is to run most of the popular planning software products with the same parameters (GF, Last Stop, Mix Type, etc.) and pick the safest resulting profile. By analyzing and comparing data from different software products in order to select the safest plan, we minimize the probability of an error in any one program. During this analysis DiveProMe detects errors in profiles (zero ascent time, zero stop time, zero time to surface, value conversion errors, zero deco stop time, etc.) and discards those profiles.
There is still a chance that all the software products contain the same error in the same plan, which would make all the data wrong, but the probability of such an outcome is greatly reduced by comparing data from similar plans. If DiveProMe detects a sudden "jump" of errors, it falls back to calculations from similar plans and averages them using the most conservative method.
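As a rough sketch of that filter-and-select step (the Profile/Stop structures, field names, and the "longest total deco time is safest" rule are simplifying assumptions for illustration, not the exact implementation):

```
from dataclasses import dataclass
from typing import List

@dataclass
class Stop:
    depth_ft: int    # stop depth
    minutes: float   # stop duration

@dataclass
class Profile:
    source: str        # which planning software produced it
    ascent_min: float  # total ascent time
    stops: List[Stop]

def is_valid(p: Profile) -> bool:
    """Discard obviously broken profiles (zero ascent time, zero-length stops, etc.)."""
    if p.ascent_min <= 0:
        return False
    return all(s.minutes > 0 and s.depth_ft > 0 for s in p.stops)

def pick_most_conservative(profiles: List[Profile]) -> Profile:
    """Assumed rule for illustration: among the valid profiles, treat the one
    with the longest total runtime (ascent plus stops) as the safest."""
    valid = [p for p in profiles if is_valid(p)]
    if not valid:
        raise ValueError("no usable profiles; fall back to similar plans")
    return max(valid, key=lambda p: p.ascent_min + sum(s.minutes for s in p.stops))
```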
The client part of DiveProMe contains about 2,500 lines of code. That is not much, and it only displays data calculated in advance by the machine-learning process, which makes the probability of an error in the code, and therefore in your plan, even lower.
Of course, the plan DiveProMe gives you won't be the most efficient one, but it will definitely be several hundred times safer than a plan calculated by a single piece of software.
Okay, this time without being flippant....
How does the machine learning process decide what profile is safest?
I don't see how this can be done without comparing a large number of actual dive results (i.e. did the diver get bent?) against the planned profiles. E.g. one source gives 2 minutes at 50' and 1 minute at 40'. Another source gives 1 minute at 50' and 2 minutes at 40'. How does the machine learning determine which one is "safer"?
And when the software yields the "safest" profile, is it just picking one of the profiles generated by the specific sources it consulted? Or is it merging the profiles produced by the various sources into one "safest" profile? E.g. one source has 2 minutes at 50' and 1 minute at 40'. Another source has 1 minute at 50' and 2 minutes at 40'. So the machine learning outputs a profile that has 2 minutes at 50' and 2 minutes at 40'?
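If it's the merging interpretation, I'd assume it boils down to taking, at each stop depth, the longest stop time any source produced; something like this (a hypothetical sketch of that interpretation, not anything I've seen documented for DiveProMe):

```
def merge_conservative(profiles):
    """For each stop depth, keep the longest stop time any source produced.
    profiles: list of dicts mapping stop depth (ft) -> stop time (min)."""
    merged = {}
    for profile in profiles:
        for depth, minutes in profile.items():
            merged[depth] = max(merged.get(depth, 0), minutes)
    return dict(sorted(merged.items(), reverse=True))  # deepest stop first

# The example from above:
source_a = {50: 2, 40: 1}
source_b = {50: 1, 40: 2}
print(merge_conservative([source_a, source_b]))  # {50: 2, 40: 2}
```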
If the output profile is simply one of the profiles produced verbatim by one of the source applications, then it seems that the result is EXACTLY AS SAFE as a plan calculated by "a single piece of software".