Vascular Microbubbles Sensor


Yes, people attack particular techniques. If you have the system to play with then they can be developed. For reasonable applications where there is little benefit to be had in spoofing them I am not too worried. The bigger issues are civil liberties and design/implementation bias. There was some fuss recently when it turned out the big new development at King's Cross had been lending out CCTV to the police for facial recognition purposes.

Back to the OP device, I suspect that signal noise would mean that a long loop back to the same point could not be closed. However, as a toy giving a record of the dive it might be fun.
 
The whole machine learning hype is currently in a phase where nobody really knows anything. Maybe it'll mature in a while.

Sort of like the cold fusion thing :p
 
Yes, people attack particular techniques. If you have the system to play with then they can be developed. For reasonable applications where there is little benefit to be had in spoofing them I am not too worried.

This is way OT so I'll shut up on the subject after this: the attacks work because correlations picked up by the neural net may not have anything to do with real properties of the real objects. The difference from the bad old "expert systems" is that those kept logs of their inference chains, which could be examined; neural nets are mostly a black box.
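The auditability point can be shown with a toy contrast (the rules and numbers here are entirely hypothetical, not any real system):

```python
# Toy contrast, with made-up rules: an "expert system" keeps a log of
# the inference chain it followed; the black box just emits an answer.

def expert_system(features):
    chain = []
    if features["bubbles"] > 100:
        chain.append("bubbles > 100 -> high grade")
        grade = "high"
    else:
        chain.append("bubbles <= 100 -> low grade")
        grade = "low"
    return grade, chain          # the chain can be audited afterwards

def black_box(features):
    # stand-in for millions of weights: the answer comes out,
    # the "why" does not
    return "high" if 0.3 * features["bubbles"] - 25.0 > 0 else "low"

grade, chain = expert_system({"bubbles": 120})
verdict = black_box({"bubbles": 120})
```

Both give the same answer here, but only the first can tell you which rule fired.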

Of course the interesting question is how it's different from other models that produce right answers for possibly wrong reasons, like I dunno: decompression models with their "tissue compartments" and "half-times"?
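For what it's worth, those "tissue compartments" with "half-times" are just exponential relaxation toward ambient pressure; a minimal sketch of the Haldanean loading equation, with hypothetical numbers and no pretence of being a real dive planner:

```python
import math

def tissue_pressure(p0, p_amb, half_time_min, t_min):
    """Haldanean compartment loading: inert-gas tension relaxes
    exponentially toward ambient with rate k = ln 2 / half-time."""
    k = math.log(2) / half_time_min
    return p0 + (p_amb - p0) * (1 - math.exp(-k * t_min))

# Hypothetical numbers: a 5-minute compartment starting at 0.79 bar of N2,
# exposed to 2.37 bar of N2 (roughly air at 20 m) for one half-time.
p = tissue_pressure(0.79, 2.37, 5.0, 5.0)
# after exactly one half-time it is halfway to ambient: (0.79 + 2.37) / 2
```

The compartments don't map onto any particular anatomy, which is rather the point of the comparison above.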
 
The whole machine learning hype is currently in a phase where nobody really knows anything. Maybe it'll mature in a while.

Sort of like the cold fusion thing :p
Cold fusion was always going to be a scam; this is an old idea made possible by relatively cheap compute becoming available.

PS and note that your link is to the blog of the vendor looking for customers for their "mere $2K+" video cards. 'Cause us gamerz ain't buying for some strange reason, and miners turned to ASICs half a decade ago.

I don’t know how much an MRI machine costs, but I would guess the compute costs would be minor in comparison. And in the Nvidia example the compute for the training was in AWS; for the evaluation of a given patient an ordinary local CPU would do.

These days all the kids have AI on their CV.
 
the attacks work because correlations picked up by the neural net may not have anything to do with real properties of the real objects.

Not all useful applications are as subject to attack as policing, facial recognition, or telling tee shirts from trousers.
 
I don’t know how much an MRI machine costs but I would guess the compute costs would be minor in comparison.

(I happen to work for NMR people.) The computer is pretty much a regular PC/laptop. The tricky part is capturing the interference patterns from the atoms hit by a magnetic pulse. MRI uses very weak pulses and only records hydrogen. All the computer really does is colour the image by (inferred) hydrogen content, where more of it is presumed to mean more water.

So basically they highlight what's "coloured" like blood but is not in the shape and form of blood vessels. It's handy when you need a diagnosis fast and there's no radiologist present. You still need a full-head-coil MRI machine, but it could be made portable (probably already is) and probably affordable enough to have one in an ambulance.
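The "blood-coloured signal that isn't vessel-shaped" idea can be shown as a toy threshold-and-mask sketch (the arrays, threshold, and mask are all made up, nothing like a real stroke detector):

```python
import numpy as np

# Toy "scan" of intensities; higher means more blood-like signal.
scan = np.array([
    [0.1, 0.2, 0.9, 0.1],
    [0.1, 0.9, 0.9, 0.1],
    [0.1, 0.2, 0.9, 0.8],
    [0.1, 0.1, 0.9, 0.1],
])

vessel_mask = np.zeros_like(scan, dtype=bool)
vessel_mask[:, 2] = True          # pretend column 2 is a blood vessel

blood_like = scan > 0.7           # "coloured" like blood
suspicious = blood_like & ~vessel_mask   # blood-like but off the vessel tree
print(suspicious.sum())           # -> 2 voxels flagged
```

A real system infers the vessel geometry rather than being handed a mask, which is where the black magic lives.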
 
How does that work? Who does the diagnosis?

Exactly: who gets sued when the AI gets it wrong.

(If you're serious: nvidia's $2K GPU diagnoses the stroke. It's who I meant by "they" in #26. And no, it doesn't work for a general case: only stroke.)
 
Exactly: who gets sued when the AI gets it wrong.

(If you're serious: nvidia's $2K GPU diagnoses the stroke. It's who I meant by "they" in #26. And no, it doesn't work for a general case: only stroke.)
You only need the GPU for training. The bit that takes a single patient scan, rather than thousands of patient scans labelled by experts, and guesses/figures out by black magic whether their head is exploding can be done with a typical CPU in a few tens of seconds.

Just as I don’t know what an MRI machine costs, I don’t know what the cost-benefit analysis looks like for employing fewer radiographers vs (maybe) getting it wrong now and again. I guess that depends on the availability of radiographers vs the price placed on a misdiagnosed patient. This is why health services have economists.
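The training-vs-inference split is easy to illustrate: once the weights exist, evaluating one scan is just a few matrix multiplies. A minimal sketch with hypothetical layer sizes and random weights standing in for a trained model:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a scan flattened to 65,536 voxels, one hidden layer,
# a single "stroke / no stroke" output. Random weights stand in for the
# result of the expensive GPU training run.
W1 = (rng.standard_normal((65536, 256)) * 0.01).astype(np.float32)
W2 = (rng.standard_normal((256, 1)) * 0.1).astype(np.float32)

def infer(scan_voxels):
    """One forward pass: a couple of matrix multiplies, fine on a CPU."""
    h = np.maximum(scan_voxels @ W1, 0.0)      # ReLU hidden layer
    logit = (h @ W2).item()
    return 1.0 / (1.0 + math.exp(-logit))      # probability-like score

score = infer(rng.standard_normal(65536).astype(np.float32))
```

The matrices here are tiny by training standards; a real network is bigger, but the per-patient cost is still a fixed, modest number of multiply-adds.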
 
Exactly: who gets sued when the AI gets it wrong.

(If you're serious: nvidia's $2K GPU diagnoses the stroke. It's who I meant by "they" in #26. And no, it doesn't work for a general case: only stroke.)
And this is why AI is such a problem. At this stage it does a single thing only, unlike a person, who does many things and in this case would check for more than just a stroke. There is no way to understand why the conclusion was reached, and there is no second opinion. Just a magic black box which says 0 or 1 and nothing else.
 
https://www.shearwater.com/products/perdix-ai/
