No air integration in high-end and tech DCs. Why?


I'm really a fan of Neelie Kroes, though. She should stop wasting her time in Europe and run for PM this time around so the Dutch get a real leader in "het torentje" at least once in a generation.
If she manages to save the semiconductor industry and gets some capable Latin IC houses working more closely with Leuven and Eindhoven, she will definitely have earned her creds for a PM position. Is that even possible with the current European governance, glued to inter-nation bickering about the ants and the crickets and the ex-commies, with no direct democracy or means of action? ... Let me get back to my technical daydreaming about dive computers. Less depressing.
The point being that my previous point about KISS applies every bit as much to building ICs as it does to building software. Putting more and more functionality into an IC makes development and testing exponentially more difficult (as the ITRS discovered in its roadmapping efforts), and reliability is not improved in the process. The *last* step, printing it on silicon, doesn't change in any significant way, but that's not the step where the risk is, and it's not the step where chip manufacturers make major logic/design decisions.

Yes. The not-so-well-thought-out More than Moore push and the rush to multi-CPU looked like desperate attempts to wiggle away as the horizon of atomic variability and the thermal wall approaches. When it hits the fan, the mess will have to be picked up by software, which has been running on free fuel for more than 40 years.

What I *do* foresee is integration of smartphones or computers with dive computers (via the cloud) so that important dive information like profiles and deco schedules can be up/downloaded seamlessly without cables and all that "nerdy" work we have to do today.

Ah ha. So you do have a new feature for tec in mind: downloading the tables calculated during planning on a PC or phone to the DC. So would the DC act like a backup printed table, or would it still have some reactiveness (like switching to another precalculated table because of a 1 m depth or 1 min time skew, or even entering "realtime" mode when the profile has gone way off)?
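To make the idea concrete, here is a minimal sketch of how such a fallback scheme might work. Everything in it is an assumption for discussion: the table format, the 1 m / 1 min tolerances, and the function names are illustrative, not any manufacturer's API.

```python
# Illustrative sketch only: how a DC might pick among precalculated deco
# tables uploaded from a PC/phone, falling back to realtime computation
# when the actual profile drifts too far from any plan. All names,
# thresholds and the table format are assumptions for discussion.

DEPTH_TOLERANCE_M = 1.0    # assumed depth skew tolerated per table
TIME_TOLERANCE_MIN = 1.0   # assumed time skew tolerated per table

def pick_schedule(planned_tables, actual_depth_m, actual_runtime_min):
    """Return the closest precalculated table, or None to signal that
    the DC should drop into full realtime deco mode."""
    best, best_err = None, float("inf")
    for table in planned_tables:
        depth_err = abs(table["max_depth_m"] - actual_depth_m)
        time_err = abs(table["bottom_time_min"] - actual_runtime_min)
        err = depth_err / DEPTH_TOLERANCE_M + time_err / TIME_TOLERANCE_MIN
        if err < best_err:
            best, best_err = table, err
    # If even the nearest plan is outside tolerance, the profile has
    # "gone way off" and only a realtime model makes sense.
    if best is None or best_err > 2.0:
        return None
    return best

# Example: two contingency tables uploaded during planning
tables = [
    {"max_depth_m": 40, "bottom_time_min": 25, "stops": [(6, 3), (3, 9)]},
    {"max_depth_m": 42, "bottom_time_min": 30, "stops": [(9, 2), (6, 5), (3, 14)]},
]
print(pick_schedule(tables, actual_depth_m=41, actual_runtime_min=26))
```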

So, yes. I would agree with you that *IF* we were to build one architecture for all dive computers, it should be able to talk to a transmitter.

Where we differ is that I believe building one architecture to cover all the bases isn't usually the most productive direction. I can see it from a chip manufacturer's perspective: they don't want to have to manufacture 600,000 different chips. From the perspective of a dive computer manufacturer or a diver, however, it's really an esoteric discussion that doesn't make any sense.

Why is volume a benefit for the wafer pusher only? The development house will dilute its NRE costs over those volumes too, and its test and verification plans need not necessarily explode (grow exponentially). And the diver ends up with cheaper hardware, so his money is available for other priorities. Anyway, we cannot settle this without first putting proper numbers behind it.
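Just to illustrate the NRE-dilution point with made-up numbers (nothing here is real pricing):

```python
# Illustrative arithmetic only: how volume dilutes the one-off NRE
# (development) cost per chip, for the design house as well as the fab.
# All figures are assumptions made up for the sake of the argument.

def unit_cost(nre, volume, recurring_cost):
    """Total cost per shipped die once the one-off NRE is spread over volume."""
    return nre / volume + recurring_cost

nre = 2_000_000      # assumed one-off design/verification/mask cost, USD
die_cost = 1.50      # assumed recurring cost per die, USD

for volume in (50_000, 200_000, 1_000_000):
    print(f"{volume:>9,} units -> ${unit_cost(nre, volume, die_cost):.2f} per die")
```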

Integrating wireless on the logic die is tricky, that is true. But normally it is only the RF that suffers from spurious emissions, not the logic (good news for the deco calculation, bad news for the number of respins).

If you think the discussion is too obscure for most SB readers, we can stop it here. It was a pleasure having it go this far.
 
There are some software static analysis techniques, and more importantly some development and project methods, that can guarantee that.

No there aren't. Testing isn't guaranteed to reveal all flaws, especially with a product based on external sensors, running on batteries, with wireless data transfer.

Even if the computer could be guaranteed perfect, it's still unwanted and it fascinates me that you're pushing so hard for something nobody wants.
 
Ah ha. So you do have a new feature for tec in mind: downloading the tables calculated during planning on a PC or phone to the DC. So would the DC act like a backup printed table, or would it still have some reactiveness (like switching to another precalculated table because of a 1 m depth or 1 min time skew, or even entering "realtime" mode when the profile has gone way off)?
This would be useful functionality.

Why is volume a benefit for the wafer pusher only? The development house will dilute its NRE costs over those volumes too,
Cheaper chips mean cheaper devices. But in the current model, chips are a means to an end. "Wafer pushers" are just an enabling technology; the chip is not the core cost in the business case.

If you think the discussion is too obscure for most SB readers, we can stop it here. It was a pleasure having it go this far.

I suspect that hardly anyone on SB can remotely follow what you're saying.

R..
 
No there aren't. Testing isn't guaranteed to reveal all flaws, especially with a product based on external sensors, running on batteries, with wireless data transfer.

Even if the computer could be guaranteed perfect, it's still unwanted and it fascinates me that you're pushing so hard for something nobody wants.

Communication protocols are beasts that can be guaranteed too. For the physical channel and the analog sensor stuff, the matter is messier (costly to do properly).
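For what I mean by "can be guaranteed": a finite protocol can, in principle, have every reachable state enumerated and a safety property checked in all of them. Here is a toy sketch of that idea; the protocol, states, and property are invented for illustration and have nothing to do with any real DC/transmitter link.

```python
# Toy illustration of why a finite communication protocol *can* be
# exhaustively checked: enumerate every reachable (sender, receiver)
# state of a tiny request/ack exchange and assert a safety property.
# This is a minimal sketch of the idea behind model checking, not a
# real verification tool.

def step(sender, receiver, event):
    """One hypothetical transition of the toy protocol."""
    if event == "send" and sender == "idle":
        return "sent", "sent"          # request reaches the receiver
    if event == "ack" and receiver == "sent":
        return "acked", "acked"        # ack makes it back
    if event == "timeout" and sender == "sent":
        return "idle", receiver        # sender retries from scratch
    return sender, receiver            # event ignored in this state

# Breadth-first exploration of every reachable state.
frontier = {("idle", "idle")}
reachable = set()
while frontier:
    state = frontier.pop()
    if state in reachable:
        continue
    reachable.add(state)
    for event in ("send", "ack", "timeout"):
        frontier.add(step(*state, event))

# Safety property: the receiver never ends up acknowledged while the
# sender believes nothing was ever sent.
assert all(not (s == "idle" and r == "acked") for s, r in reachable)
print(f"{len(reachable)} reachable states, property holds in all of them")
```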

Am I pushing? I just clearly stated that these methods were out of reach for that industry and would probably not happen in our lifetime (without even knowing for sure). I added another nail to that coffin. Is it possible you are misinterpreting what I am after?

Every year we take in radical technology changes without ever discussing them or, for most of us, trying to tell them apart from magic. I do not believe this is inevitable, and I also believe tec divers have a sane approach towards it, even if it is probably not the only one. I just see talking about such things as a sane activity. If that fascinates you, I am at least glad it caught your attention; just don't call the asylum yet.
 
Adding enough features to an electronic product means that you move to ASIC-type designs and replace multiple chips with just one. Plus you end up with enough development budget to make all the features truly custom, instead of relying on outside, poorly controlled modules full of fluff.

I have been in the semiconductor industry all my life, specifically chip design. This also means a larger chip, higher development cost, more bugs, possibly slower, higher power consumption, and more likely to be a jack of all trades, master of none.

This is very typical good-engineer thinking, though: a very flexible design that handles all cases. It is an engineering achievement. It is also what pushes technology forward. But a product like this usually doesn't sell well. It is the so-called "product designed by engineers for engineers", an over-engineered item. A good product design team will take such a piece of engineering, strip out the features 95% of the target users won't bother with, and focus on making what matters most the best it can be.
 
If she manages to save the semiconductor industry and gets some capable Latin IC houses working more closely with Leuven and Eindhoven, she will definitely have earned her creds for a PM position.

If anyone in Europe can rise above the noise, and has enough understanding of the industry to do so, then it has to be Kroes.

R..
 
It sounds like we are not in disagreement over anything except what strikes us each as "funny." To each his own, I guess.

Agreed.

I think it's less of a 'remote chance of failure' and more of no real benefit for the cost. The same 'remote chance' exists with any dive computer in use. Because of this, I find it funny to believe that the remote chance of failure in one piece of underwater electronic equipment is greater than in another just because one has a WiFi or BT connection and the other doesn't. If the product design works and has no inherent issues, then the same remote chance exists for both AI and non-AI.

Unless there are statistics showing that AI-induced computer failure is more common than general computer failure, we have no real measure beyond what each person's definition of 'remote' is, how much they fear that remote chance, or how much they generally dislike one unit or feature over another and need justification beyond "I don't like it".
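To show why the argument stays qualitative without field data, here is a trivial back-of-the-envelope comparison with assumed (made-up) per-dive failure rates and an independence assumption:

```python
# Illustrative numbers only: comparing the per-dive failure probability
# of a DC with and without an AI (pressure transmitter) link, assuming
# the failure modes are independent. The rates below are invented; the
# point is only that the comparison needs real statistics to mean much.

p_base = 0.001       # assumed chance the core DC fails on a given dive
p_ai_extra = 0.0005  # assumed extra chance contributed by the AI link

p_without_ai = p_base
p_with_ai = 1 - (1 - p_base) * (1 - p_ai_extra)

print(f"non-AI computer : {p_without_ai:.4%} per dive")
print(f"AI computer     : {p_with_ai:.4%} per dive")
# Note: if a transmitter dropout only loses the tank-pressure display
# and not the deco calculation, the relevant number is closer to p_base.
```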

We know there are tech and non-tech computers that have actual issues with battery-type discharge rates causing problems with battery metering, and computers that have power issues during dives. Yet tech divers continue to purchase those units and come up with their own workarounds to avoid power issues. These are real and not remote. The manufacturers even tell you about these issues and suggest workarounds.

If the Petrel came with 3 free AI transmitters and a user option to turn AI off, people who want/use the Petrel's hardware and features wouldn't say, "oh, a remote chance of AI failure now exists, so on to the next non-AI tech computer". They'd either use it, turn it off, and/or sell the transmitters at market rate to those who wanted the feature. It's my opinion that if you charged for those 3 transmitters, the tech divers would opt not to purchase them and would leave that feature turned off. They add no value.
 
Communication protocols are beasts that can be guaranteed too. For the physical channel and the analog sensor stuff, the matter is messier (costly to do properly).

No, they cannot, at least not in an absolute, omniscient sense. Consider the following manual language:

WARNING

This computer has bugs. Although we haven’t found them all yet, they are there. It is certain that there are things that this computer does that either we didn’t think about, or planned for it to do something different. Never risk your life on only one source of information. Use a second computer or tables. If you choose to make riskier dives, obtain the proper training and work up to them slowly to gain experience.

This computer will fail. It is not whether it will fail but when it will fail. Do not depend on it. Always have a plan on how to handle failures. Automatic systems are no substitute for knowledge and training.

No technology will keep you alive. Knowledge, skill, and practiced procedures are your best defense (Except for not doing the dive, of course).

 
No, they cannot, at least not in an absolute, omniscient sense. Consider the following manual language:

<snip>

I have rarely seen legalese that looks less like legalese. Even by European standards it looks like useful advice and info.

Do not get me wrong: the digital path in a DC can be verified to a very decent extent, even fully if it were ever done from scratch and with expensive techniques. The communication protocol to the (useless) HP gauges can be verified too. What I am talking about here is a level of certainty like proving Thales was wrong. For the analog parts (sensors, comm channels), covering all of their states is simply impossible. You will have to make do with a residual risk or ditch it altogether. Under any of those hypotheses, this warning for a full DC system is perfectly justified IMO. (Do they differ from one manufacturer to another?)
 
https://www.shearwater.com/products/perdix-ai/
