No air integration in high-end and tech DCs. Why?


How are the answers you've received incomplete?

I perceive the side questions that appeared as part of it. And even if they are not, I still see them as just as interesting as the OQ. The OQ is fully answered; the side questions have been answered to some extent already. To sum up, they are:
- is a minimal modification of the SPG with a non-intrusive digital HP gauge, possibly wireless, feasible without loss?
- is a common platform based on a chip that will be used on both tec and rec DCs likely in the future, given the players and visible evolutions?
- is it a good idea for me to get an A100? I like the looks, plus it is cheap and a fairly recent model.

---------- Post added June 6th, 2014 at 03:50 PM ----------

I guess anyone being accused of communicating like a politician would not take it as a compliment :wink:

I'm going to guess you're Dutch by your writing and say that the only thing worse than communicating like a politician is communicating like a Dutch politician....LOL. They first say nothing and then spend hours getting total agreement about it :) .... with one exception, of course, who we can't mention on the internet :)
Wim Kok? I liked that one. He did decent work and did not cling to his seat when the buggers would hound him over something he had no responsibility for. I ain't Dutch, but I enjoyed the years I lived there and loved the straightforwardness of the ordinary citizens. The unusual ones being, in my mind, those who made a profession of spinning and fleecing, not those pointed out by the one whose name cannot be said.

You lost me but I'll agree with the bit I posted.

Just have a read of Richard's post again, have a look at Moore's law and micro-electronics drivers (like the ITRS roadmap summaries). No doubt the penny will drop if it is explained by someone less messy.

---------- Post added June 6th, 2014 at 04:06 PM ----------

I'm going to guess you're Dutch by your writing and say that the only thing worse than communicating like a politician is communicating like a Dutch politician
Let us not rush to the Godwin point quite that fast.
Is there anything left of HMS Cressy on the bottom these days? I still have one of those cute phosphorus sticks from the torpedo heads on my desk.
 
I agree with Kate that this is indeed "a solution looking for a problem." Engineers love to do exactly this kind of inventive dreaming. I think the OP is that kind of engineer. It seems to me that's the reason engineers aren't usually the ones who come up with the initial idea for a product; rather, it's someone who has extensive experience in the field of use of the product--in this case, someone who has a broad (and deep) view of the needs of both tech and rec divers. The idea person then gets the R&D team on board, and the engineering process begins. That's not to say that ordinary engineers never come up with good product ideas, but it's not the norm.

I still disagree that the "common platform" concept is going to gain traction across the spectrum of dive computer manufacturers for the reasons I previously stated--mainly, that those who aim their product at recreational divers don't WANT the same platform as their competitors.

I also still disagree that adding more features, with the idea that some manufacturers will disable some features on some models, is a good idea. Parts left off a device do not cause trouble, whether we're talking malfunction, degradation due to aging or wear, or whatever. I would argue that a dive computer manufacturer would do well to MINIMIZE the number of parts and complexity of the computer. What would attract me as a consumer is the simplest, easiest-to-use/read, least expensive computer that will perform the necessary tasks without fail through many years of hard use.
 
To answer the OP's question:

Technical divers do not have a need for AI. This thread has covered all the explanations of how a tech diver looks at their dive planning and how AI adds no value for the additional cost. Once you add in the cost of the multiple transmitters that are needed to fully use AI, it really makes no sense. Given this, it's clear why the tech dive computer manufacturers don't add it.

For rec diving, AI is nice for lazy man's dive logging. It's also a feature that gives additional safety alarms and a visual of an estimated ATR (air time remaining) snapshot as well as the pressure reading.

I find some of the comments funny that suggest that when AI temporarily loses Wi-Fi or BT connectivity in between samples, the dive computer is somehow compromised from that point forward. The computer's dive algorithm program and functions are independent of the AI and ATR algorithm calculations. You can dive with AI turned on or off and the computer functions the same.
 
I also still disagree that adding more features, with the idea that some manufacturers will disable some features on some models, is a good idea. Parts left off a device do not cause trouble, whether we're talking malfunction, degradation due to aging or wear, or whatever. I would argue that a dive computer manufacturer would do well to MINIMIZE the number of parts and complexity of the computer. What would attract me as a consumer is the simplest, easiest-to-use/read, least expensive computer that will perform the necessary tasks without fail through many years of hard use.

I found your previous answer, regarding why players from the mass market are likely to reject a common platform, insightful.

Do you have any examples showing that good products do not come from engineers? I believe that they often do, but are attributed to the "sellers". It seems history has favored the loud, not the builders.

Granted, a simpler system has fewer parts, but you can understand that adding a feature is not the same as adding a part. That may be the way it works if you build gas distribution systems, or Java Beans accounting applications, but it is not the case for microelectronics. IT systems crumble when you add features because they have poor integration checks and tests, apart from some "live" checks on the finished product. Mechanical systems become brittle when you add parts because each part may fail at some point and take the whole system down. The failure-versus-integration (complexity) trade-off on chips does not work like that. By adding more features you reach a point where you improve reliability and reduce price at the same time.
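The "fewer discrete parts can mean higher reliability" point can be sketched with a back-of-envelope model (my own illustration with made-up numbers, not anything from an actual DC design): for independent parts in series, system reliability is the product of the part reliabilities, so folding many discrete parts (and their solder joints and connectors) into one ASIC can raise overall reliability even though the chip is internally far more complex.

```python
# Toy series-reliability model. Assumptions (mine, for illustration only):
# parts fail independently, and the system works only if every part works.

def series_reliability(part_reliabilities):
    """Probability the whole system works: product of part reliabilities."""
    r = 1.0
    for p in part_reliabilities:
        r *= p
    return r

# Hypothetical discrete board: 10 parts, each 99.9% reliable over some period.
discrete = series_reliability([0.999] * 10)

# Hypothetical single ASIC replacing all of them, 99.95% reliable.
integrated = series_reliability([0.9995])

print(f"discrete board: {discrete:.4f}, single chip: {integrated:.4f}")
# The single chip comes out ahead despite its internal complexity.
```

The numbers are arbitrary; the point is only the multiplicative structure, which is what makes every added discrete part a new way for the whole system to fail.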

I would even say that a chip with billions of transistors is simpler than a clusterf* of FPGAs and application processors that totals a lower active transistor count.

It is not that obvious how to reach simplicity; it is only obvious in the end result. (If you saw an X-ray of a DC with a single chip compared to another, you would see right away which one is the most simple and clean.)

Adding enough features to an electronic product means that you are going to ASIC types of designs and that you are replacing multiple chips with just one. Plus, you end up having enough development budget to make all the features truly custom, instead of relying on outside, not-well-controlled modules with fluff.

Anyhow, in the end, it is the profit estimates made by DC makers' shareholders that will make it happen or not. Let us happily wait until we can vote with our wallets...
 
. . .
I find some of the comments funny that suggest that when AI temporarily loses Wi-Fi or BT connectivity in between samples, the dive computer is somehow compromised from that point forward. The computer's dive algorithm program and functions are independent of the AI and ATR algorithm calculations. You can dive with AI turned on or off and the computer functions the same.

There's nothing funny about the possibility that a failing sensor or some other part of the interface between the gas measurement system and the deco computer system can fail in a way that was not tested by the manufacturer and cause some problem that the engineers did not foresee. You THINK the AI system and deco computation system are isolated, and no doubt so do the engineers who designed the computer. It's clearly a REMOTE possibility that AI failure could affect something else, given that AI computers have been around for a long time now, and as far as anyone has been able to detect, this issue has never actually occurred. But it theoretically could occur, especially with a computer that has not been as thoroughly tested as others. I doubt this kind of failure mode would present as great a safety hazard as others, but I don't see why the possibility should be completely dismissed.
 
There's nothing funny about the possibility that a failing sensor or some other part of the interface between the gas measurement system and the deco computer system can fail in a way that was not tested by the manufacturer and cause some problem that the engineers did not foresee. You THINK the AI system and deco computation system are isolated, and no doubt so do the engineers who designed the computer. It's clearly a REMOTE possibility that AI failure could affect something else, given that AI computers have been around for a long time now, and as far as anyone has been able to detect, this issue has never actually occurred. But it theoretically could occur, especially with a computer that has not been as thoroughly tested as others. I doubt this kind of failure mode would present as great a safety hazard as others, but I don't see why the possibility should be completely dismissed.

There are some software static analysis techniques, and more importantly some development/project methods, that can guarantee that.
It is a slow and expensive process, even by aerospace standards, so I highly doubt that eCCR makers use it, even less the mass-market rec DC makers.
 
There's nothing funny about the possibility that a failing sensor or some other part of the interface between the gas measurement system and the deco computer system can fail in a way that was not tested by the manufacturer and cause some problem that the engineers did not foresee. You THINK the AI system and deco computation system are isolated, and no doubt so do the engineers who designed the computer. It's clearly a REMOTE possibility that AI failure could affect something else, given that AI computers have been around for a long time now, and as far as anyone has been able to detect, this issue has never actually occurred. But it theoretically could occur, especially with a computer that has not been as thoroughly tested as others. I doubt this kind of failure mode would present as great a safety hazard as others, but I don't see why the possibility should be completely dismissed.

I find it funny, because it's taking the (here, let me use CAPS and add spaces) R E M O T E possibility argument, that AI losing connectivity will affect the functions of a computer that was designed and tested to have these features work mutually independently and still function, to attempt to validate the point that AI is '[insert why one doesn't like it]'.

There is a remote possibility that your dive computer, with or without AI, is not calculating correctly the minute it hits the water, even though everything appears correct (which is why many dive a plan and use two computers to monitor the plan). Those pesky engineers, always with their designing and testing of what could go wrong before killing people. I suggest you ditch the computer and go back to dive tables. But wait, there is a remote chance that the printer that printed your dive tables was set up wrong and your calculations would be off. Those pesky typesetters and their poor proofreading skills.

What I find even more funny is the remote chance argument being taken to the absurd and used to attempt to validate the point that AI isn't a valid choose as an option available on a dive computer.

While this argument has little to do with the OP's original question and the simple answer that "the tech market doesn't see AI as a value and thus won't pay for it", it still warrants my "I find it funny" comment.

Given a long enough time line one can always find an example of one's point, but to use this as the basis for reasons to not use AI is funny to me.

AI has a place, and until it is proven to be 100% accurate and priced to the point that it makes financial sense, I don't see it being an option that gets widely adopted by technical divers.
 


Wim Kok? I liked that one. He did decent work and did not cling to his seat when the buggers would hound him over something he had no responsibility for. I ain't Dutch, but I enjoyed the years I lived there and loved the straightforwardness of the ordinary citizens. The unusual ones being, in my mind, those who made a profession of spinning and fleecing, not those pointed out by the one whose name cannot be said.

I liked Wim Kok too. He was one of the old school guys who got to be the PM because he earned it and deserved it, not because he was the "least of all evils". I'm really a fan of Neelie Kroes, though. She should stop wasting her time in Europe and run for PM this time around so the Dutch get a real leader in "het torentje" at least once in a generation.


Just have a read of Richard's post again, have a look at Moore's law and micro-electronics drivers (like the ITRS roadmap summaries). No doubt the penny will drop if it is explained by someone less messy.

I have some idea of this. Around 2005 there was a movement afoot within the ITRS to scale and diversify the hardware architecture in order to meet changing market demands and make it easier for chip manufacturers to keep up with manufacturers of smart integrated devices: smartphones, but also microwaves that automatically detect how much power to pump into a given item in order to cook it optimally, perhaps even taking this information directly from the cloud.

I think what they foresaw happening has started to happen and will continue for some time, but the issue that the ITRS almost immediately encountered was that with this kind of ... what they called "More than Moore" approach, the roadmapping process itself became exponentially complex, let alone the manufacture of fault-free devices.

I don't know if you've ever designed or built a CPU, but I have, and I can guarantee you that just because it's on a chip there is no guarantee that the logic is solid. It's still human logic. Binary logic is somewhat easier to test (and to automate testing for) against a finite state machine than a computer program is, but it's still subject to the variable of human fallibility. It's not because it's on a slice of silicon that it's bound to work. It would be nice if the world worked like that, but it doesn't.
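The "easier to test" point can be sketched with a toy example (mine, not taken from any real DC or CPU design): for a small piece of combinational binary logic the input space is finite, so the implementation can be checked exhaustively against its specification, something that is rarely possible for general software.

```python
# Illustrative sketch: exhaustively verifying a gate-level 1-bit full adder
# against its arithmetic specification. The function names and structure
# are my own; the point is only that finite logic admits exhaustive checks.

def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    """Gate-level 1-bit full adder: returns (sum_bit, carry_out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def verify_full_adder() -> bool:
    # Specification: s + 2*cout must equal a + b + cin for every input.
    for a in (0, 1):
        for b in (0, 1):
            for cin in (0, 1):
                s, cout = full_adder(a, b, cin)
                if s + 2 * cout != a + b + cin:
                    return False
    return True

print(verify_full_adder())  # True: all 8 input cases match the spec
```

Of course, exhaustive enumeration only scales to small blocks; for anything the size of a CPU, verification itself becomes a hard engineering problem, which is the poster's point about human fallibility.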

The point being that my previous point about KISS applies every bit as much to building ICs as it does to building software. Putting more and more functionality into an IC will make the process of development and testing exponentially more difficult (as the ITRS discovered in its roadmapping efforts), and reliability will not be improved in the process. The *last* step, printing it on silicon, doesn't change in any significant way, but that's not the step where the risk is, and that's not the step where chip manufacturers make major logic/design decisions.

They've done this before and came back to RISC after the process became unwieldy. I know they're still pursuing it to some extent, but I don't think things like building dormant wireless capability into a chip designed for a microwave are really the same order of functionality you envision for AI in dive computers. Can it be done? Of course it can.

What I *do* foresee is integration of smartphones or computers with dive computers (via the cloud) so that important dive information like profiles and deco schedules can be up/downloaded seamlessly, without cables and all that "nerdy" work we have to do today. Would it be worth building (latent) Wi-Fi capability into new ICs for dive computers to enable manufacturers to develop integration strategies? Sure. But even the ITRS roadmap won't include building pointless functionality into a device just because it's possible. Somehow there still needs to be a plausible need.

That said, recreational divers are far more likely to want AI than technical divers. If you imagine (which I think you do) a single diversified processor that forms the foundation of all dive computers in the future then, sure, building the ability for it to talk to a transmitter is definitely something you would want it to be able to do. I'm not sure that's where this thread started but this conclusion (however we got to it) would be logical.

So, yes. I would agree with you that *IF* we were to build one architecture for all dive computers that it should be able to talk to a transmitter.

Where we differ is that I believe that building one architecture to cover all the bases isn't usually the most productive direction. I can see it from a chip manufacturer's perspective: they don't want to have to manufacture 600,000 different chips. From the perspective of a dive computer manufacturer or a diver, however, it's really an esoteric discussion that doesn't make any sense.

R..
 
Whoa, 0002s--you're reading into what I said.

I find it funny, because it's taking the (here, let me use CAPS and add spaces) R E M O T E possibility argument, that AI losing connectivity will affect the functions of a computer that was designed and tested to have these features work mutually independently and still function, to attempt to validate the point that AI is '[insert why one doesn't like it]'.

I am not taking the remote possibility that I pointed out and using it "to attempt to validate" anything. I only meant that this possibility exists and therefore it doesn't strike me as "funny" or odd that people are pointing that out. You may disagree.

There is a remote possibility that your dive computer, with or without AI, is not calculating correctly the minute it hits the water, even though everything appears correct (which is why many dive a plan and use two computers to monitor the plan). Those pesky engineers, always with their designing and testing of what could go wrong before killing people. I suggest you ditch the computer and go back to dive tables. But wait, there is a remote chance that the printer that printed your dive tables was set up wrong and your calculations would be off. Those pesky typesetters and their poor proofreading skills.

I agree and understand what you're saying. That's why I added my last sentence: "I doubt this kind of failure mode would present as great a safety hazard as others, but I don't see why the possibility should be completely dismissed." All kinds of things can theoretically go wrong, and I agree that loss of AI is not one to especially worry about.

What I find even more funny is the remote chance argument being taken to the absurd and used to attempt to validate the point that AI isn't a valid choose as an option available on a dive computer.

This is where you misunderstood. I was not using the "remote chance argument ... to attempt to validate the point that AI isn't a valid choose [sic] as an option available on a dive computer." I think AI is a fine option. Although I don't have an AI computer, my wife loves hers, and it has proven reliable enough for our kind of recreational diving. However, I think there are reasons why a manufacturer of computers aimed at the tech market would not want to include AI. They don't like to open the door to even "remote" chances, and they're not about to adopt aerospace-grade (to borrow from something the OP himself said) design practices to guard against remote chances rearing their ugly heads at the wrong time during a technical dive.

While this argument has little to do with the OP's original question and the simple answer that "the tech market doesn't see AI as a value and thus won't pay for it", it still warrants my "I find it funny" comment.

Given a long enough time line one can always find an example of one's point, but to use this as the basis for reasons to not use AI is funny to me.

AI has a place, and until it is proven to be 100% accurate and priced to the point that it makes financial sense, I don't see it being an option that gets widely adopted by technical divers.

It sounds like we are not in disagreement over anything except what strikes us each as "funny." To each his own, I guess.
 
https://www.shearwater.com/products/swift/
