Hi Lorenzoid,
We have debated this point several times, but I stand by my analysis. You posited a theory, perhaps logically plausible, for the possibility of increased failure in computer mode due to AI. Under the scientific method, that theory must be tested against observable, repeatable real-world evidence. There is an enormous amount of observed, real-world data on AI computers, and none of it supports AI software or circuitry as a failure mode for the computer's dive-mode calculations. In fact, the absence of any such failure over an enormous data set is positive evidence that the theory is not a valid predictor of events. Thus, it should not drive decisions.
To use your example, the compass and Bluetooth hardware and software have caused nary an eye blink for Shearwater. Even more important, what about all that rebreather hardware, software, and even physical-connectivity failure risk? AI risk is a pittance compared to this (since it is non-existent as far as the data shows), yet even the most dedicated OC tech divers still embrace Shearwater.
There is no need for assurances or guarantees that the AI software or code is "walled off" from the computer's core functions so as to be independent and isolated. All AI computers are either coded or operate this way, and it seems to be working as intended. Another question: the Shearwater runs two algorithms, Bühlmann and VPM. Are you satisfied that these are "walled off" and that the computer will not cross-link them so as to mess up your deco? I don't think you or anyone else has this concern. AI has been solved by many manufacturers, as have dual algorithms, all without interfering with the computer's functioning.
I do not say this lightly. If these were the first days of AI, with only logic or theory to go by and little to no real-world observation, then perhaps a concern might be justified. Theoretical concerns are good reason not to migrate brand-new technology into tech diving when there is no real-world data to evaluate it. That time has long passed, however, with regard to AI.
Given the current state of affairs and known data, there is no basis to conclude that the theoretical risk is in actuality a real one.
What I do find interesting, though, is that the formal PADI tech standards apparently state that if the AI fails, you risk losing not only the gas info but the dive data as well, thus requiring you to go to your back-up dive plan (either a second computer or a written plan). Interestingly, PADI tech does not prohibit AI computers; it only notes this as a possible risk to account for. This was quoted by Andy (Devon Diver) in the thread "AI computers for tech diving" in the Tech Diving forum.
No AI computer I have ever seen actually works like this. How does PADI come up with it? They certainly have an "in" with the diving industry and more access to information than I could ever have. Do they actually know something, or have real data, or are they going by the theoretical concern? I would like someone involved in PADI's development of these standards to weigh in, because, as you know, this topic is of serious interest to a lot of people.
Best Regards,