Using ChatGPT to write comments/posts about dive equipment


That's not a meaningful statement. Both "AI" and "intelligent" are too vague to parse. Take a look at my ACM chart.
That chart isn't remotely useful; it doesn't describe intelligence, but instead picks out little bits and pieces that look like intelligence and that software may be able to do. Let me put it this way: does any software (I don't care if it's AI or not) really know and understand what it's "looking" at?

For example, does software know what a human is? Identifying patterns of pixels in the shape of a human isn't remotely close. Neither is plagiarizing sentences and re-stringing them together.

If this conversation goes in the direction of "It depends what you mean by 'know' and 'understand'," we're just going to run around in circles. You have to severely dilute the meaning of intelligence (or "know," "understand," etc.) to even begin to say software has even a hint of intelligence.

Like I said, software mimics intelligence; it is not intelligent itself.
 
Bullseye!
That chart isn't remotely useful,

I agree …if… you have no interest in understanding what "AI" means.

Like I said, software mimics intelligence; it is not intelligent itself.

Just wait. If you like surprises, you're on the right path.
 
I've been using DeepL for years, as well as Linguee, another service from the same German company, DeepL (originally Linguee GmbH). I spent a couple of years at Uni Freiburg, and still do lots of communicating in German, and DeepL is a good resource.
Yes, DeepL works great for general communication. But that's the easy part. The issue with Linguee was that it would take random translations to build its database. CAT tools (computer-aided translation) have an alignment function where you can automatically cut an English text and its German translation into segments and match them up to fill the translation database.
These segments were fed into DeepL to learn from.
The issue is that Linguee not only took technical texts from sources like Siemens technical manuals (which are great) but also a ton of stuff from random websites with horrible translations. DeepL doesn't recognize what's good and what's crap. It also doesn't warn you when it can't find an exact hit, or when it only finds a fishy one; in contrast to a CAT tool, it just gives you something random.
Linguee was good for English to German but horrible for German into English for machine-repair-manual type stuff and other technical documents.
This has been pitched as a form of AI by the people who are selling it.
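
If the alignment step is hard to picture, here is a minimal sketch of the idea in plain Python. The helper names and the naive one-to-one sentence pairing are my own illustration; real CAT tools handle merged or split sentences and let a human review the pairs.

import re

def split_segments(text):
    # Naively split a text into sentence-like segments.
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]

def align(source_text, target_text):
    # Pair source and target segments by position (naive 1:1 alignment).
    # Extra segments on either side are simply dropped here.
    return list(zip(split_segments(source_text), split_segments(target_text)))

# Example: build a tiny translation-memory entry list from a manual and its translation.
english = "Open the valve slowly. Check the pressure gauge."
german = "Öffnen Sie das Ventil langsam. Prüfen Sie das Manometer."
for en, de in align(english, german):
    print(f"{en}  <->  {de}")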

Do you realise that the text about the David Shaw accident basically gets everything wrong?
You seem to think you got a good result. Other than the location, virtually everything is incorrect.
 
My test for "real intelligence": can it produce a truly novel but still rational solution to a truly novel problem? Can it learn a concept (not a heuristic) and apply it to a situation where there are no clues (such as key words, etc.) that the concept applies, other than understanding the concept itself?

The computer science definition of AI is something different. It is "intelligence-like" and very useful; it may even fool us into thinking it is intelligent, but it isn't "real intelligence".
 
My test for "real intelligence": can it produce a truly novel but still rational solution to a truly novel problem? Can it learn a concept (not a heuristic) and apply it to a situation where there are no clues (such as key words, etc.) that the concept applies, other than understanding the concept itself?

The computer science definition of AI is something different. It is "intelligence-like" and very useful; it may even fool us into thinking it is intelligent, but it isn't "real intelligence".
For me "real intelligence" is when a computer program gets a complete and realistic model of the environment, by analysing data streams coming from cameras, microphones, lidars, etc.
And based on the knowledge of obstacles and moving objects and vehicles, take proper actions regarding a vehicle guidance, in such a way to avoid collisions and preserve human lifes.
Only gathering a real undestanding of the environment (situational awareness) and operating proper decisional algorithms these program can safely operate in an unpredictable urban environment.
And as I have seen this happening with my eyes, I can say that true AI exists.
Elon Musk's Tesla has not achieved this level of AI yet, so he is asking to stop everything until they catch the forerunners...
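
To make that pipeline concrete, here is a toy perceive -> decide -> act sketch in Python. Every name and value below is a placeholder of mine, not code from any real driving system.

from dataclasses import dataclass

@dataclass
class WorldModel:
    # A drastically simplified "model of the environment": one obstacle reading.
    obstacle_ahead: bool
    distance_m: float

def perceive(sensor_frame):
    # Stand-in for sensor fusion: turn raw readings into a world model.
    return WorldModel(sensor_frame["obstacle"], sensor_frame["distance"])

def decide(world):
    # Pick an action that avoids a collision.
    if world.obstacle_ahead and world.distance_m < 20.0:
        return "brake"
    return "continue"

def act(command):
    # Stand-in for steering/throttle/brake actuation.
    print("vehicle command:", command)

# One tick of the loop, with a fake sensor frame.
act(decide(perceive({"obstacle": True, "distance": 12.0})))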
 
For me "real intelligence" is when a computer program gets a complete and realistic model of the environment, by analysing data streams coming from cameras, microphones, lidars, etc.
And based on the knowledge of obstacles and moving objects and vehicles, take proper actions regarding a vehicle guidance, in such a way to avoid collisions and preserve human lifes.
Only gathering a real undestanding of the environment (situational awareness) and operating proper decisional algorithms these program can safely operate in an unpredictable urban environment.
And as I have seen this happening with my eyes, I can say that true AI exists.
Elon Musk's Tesla has not achieved this level of AI yet, so he is asking to stop everything until they catch the forerunners...
If it were truly "intelligent" it would not be constrained to the domain that it was developed for. It would continue to acquire and expand concepts as it observed its environment. A "driving AI" has inputs that extend to, and provide information about, areas beyond the roadway accessible by cars. If it were truly "intelligent" it would form and evolve concepts that extended to that area. It would have concepts about its passengers and the commands it received from them. It would "think" about all its sensor inputs, not just those relevant to driving.

If two AI cars do not develop independent concepts, understandings, and behaviors based on their different histories of inputs, and therefore do not respond differently when confronted with the same new situation, they are not intelligent. If they respond the same, then they are merely more complex versions of the preceding autopilot software.
 
For me "real intelligence" is when a computer program gets a complete and realistic model of the environment, by analysing data streams coming from cameras, microphones, lidars, etc.
And based on the knowledge of obstacles and moving objects and vehicles, take proper actions regarding a vehicle guidance, in such a way to avoid collisions and preserve human lifes.
Only gathering a real undestanding of the environment (situational awareness) and operating proper decisional algorithms these program can safely operate in an unpredictable urban environment.
The vast majority of this I would not describe as intelligence, but rather as software capable of a task. The exception might be the last part, about real understanding (situational awareness), which might be a small hint of intelligence.

Video games have been able to do the majority of what you describe for some time, minus the object identification in the real world. The simple concept of "don't hit the object," or even following a road, is perhaps the easy part.

Identifying objects in the real world is challenging, but doesn't require AI or intelligence. For example, I was using 3D point-cloud technology over a decade ago, which takes multiple cameras and generates a 3D environment. The problem is that it takes a lot of processing power to render a single "frame." I've also seen a variety of attempts to create a "depth camera," although I'm unfamiliar with the current state of that technology.

I've actually speculated that the "AI" approach to object identification is the wrong approach. In other words, feeding large numbers of images (or video) into machine-learning algorithms isn't the most reliable and requires a lot of processing power. Instead, I think it would be far more efficient to manually write software which simply attempts to identify where objects are in a video. This can be done with FAR less processing power, in real time, but it requires very smart and creative humans to write that software. I've written similar (but simpler) software myself, though I'm not really working on this problem at the moment and the above is somewhat like coaching from the bleachers.
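
To give a flavour of what "manually written" object localisation can look like, here is a minimal frame-differencing sketch using OpenCV. The video file name is made up, OpenCV is assumed to be installed, and this only finds where something is moving, not what it is; a real system would need far more than this.

import cv2

cap = cv2.VideoCapture("dive_video.mp4")  # hypothetical input file
ok, frame = cap.read()
if not ok:
    raise SystemExit("could not read the video")
prev = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    diff = cv2.absdiff(prev, gray)                      # pixels that changed since the last frame
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)         # close small gaps in the changed regions
    # OpenCV 4.x: findContours returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:                    # ignore tiny specks of noise
            x, y, w, h = cv2.boundingRect(c)
            print(f"moving region at x={x}, y={y}, size={w}x{h}")
    prev = gray
cap.release()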

Elon Musk's Tesla has not achieved this level of AI yet, so he is asking to stop everything until they catch up with the forerunners...
Bingo. Notice who else signed that letter. It was the various other big-tech companies. None of them seriously think Skynet is about to take over. It's 100.0% about them being able to restrict their competitors, so they (big tech) can catch up. IMO, Elon was more interested in restricting Twitter competitors than Tesla competitors.

"Is AI really smart" is one of those discussions where you and your friends grab a beer, and have a great chat. "Pausing AI" (which I call "Pausing Software using Government Force") is more akin to pointing a gun at someone.

Without doxxing what I'm working on, that gun isn't just pointed at software development; I could easily see it being pointed at my project itself, which I'm sure big tech will see as a threat to their market share. I don't even consider the project "AI," but like we discussed earlier, the main difference between software and "AI" is a marketing label. So that's perhaps why I take this topic a little personally, hah.
 
I've actually speculated that the "AI" approach to object identification is the wrong approach. In other words, feeding large numbers of images (or video) into machine-learning algorithms isn't the most reliable and requires a lot of processing power. Instead, I think it would be far more efficient to manually write software which simply attempts to identify where objects are in a video. I've written similar (but simpler) software myself.
My professional life involves object identification using both the machine-learning algorithms and the direct algorithms. Both have advantages and disadvantages. Neither is intelligence. Enough of either could accomplish all of the behaviors @Angelo Farina mentioned (as could true intelligence).

EDIT: Enough of either is HARD. Enough machine learning is easier if you throw processing throughput at it (which can be asymmetric, i.e., train once on super hardware, use on good hardware). Enough direct-algorithm work requires throwing ungodly amounts of engineering time at it. Most modern solutions are a blend, with a wide variety of ratios between the two.

Bingo. Notice who else signed that letter. It was the various other big-tech companies. None of them seriously think Skynet is about to take over. It's 100.0% about them being able to restrict their competitors, so they (big tech) can catch up.

"Is AI really smart" is one of those discussions where you and your friends grab a beer, and have a great chat. "Pausing AI" (which I call "Pausing Software using Government Force") is more akin to pointing a gun at someone.
True!
 
FWIW, unless we are talking about a brain-in-a-box generalized AI, I don't think the term AI is useful in the real world. What is useful is defining your task and then setting parameters for levels of success.

A good example of this is the set of SAE autonomous driving levels:
[Image: SAE J3016 levels of driving automation chart, j3016graphic_2021.png]

This approach lets you get past the marketing puffery. For example, despite the name, Tesla's Full Self-Driving is actually SAE Level 2. The car isn't driving, which means Tesla accepts no responsibility for crashes while it is engaged.

As an aside, the furthest anyone has gotten is SAE Level 4. There are no vehicles capable of Level 5, i.e., just taking to the road under any conditions and going anywhere. It's not even clear that we will ever be able to reach this goal without unforeseen technological advances.
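
For quick reference, here is a rough paraphrase of the levels as a small data structure. The wording is mine, condensed from SAE J3016, so check the standard for the precise definitions.

# Rough paraphrase of the SAE J3016 driving-automation levels (not the official wording).
SAE_LEVELS = {
    0: "No driving automation: the driver does everything; features only warn or momentarily assist",
    1: "Driver assistance: steering OR speed support (e.g. lane centering or adaptive cruise)",
    2: "Partial automation: steering AND speed support, but the human is still driving and must supervise",
    3: "Conditional automation: the system drives in limited conditions; the human must take over when asked",
    4: "High automation: the system drives itself in limited conditions (e.g. a geofenced robotaxi)",
    5: "Full automation: the system drives itself everywhere, under all conditions",
}

print(SAE_LEVELS[2])  # where Tesla's Full Self-Driving sits, per the post above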
 
Someone asked about a deco plan, so here is what ChatGPT gave me:

here's an example decompression schedule for a 50-meter dive using the Bühlmann algorithm with a ZH-L16C tissue model, 18/45 trimix as back gas, and 50% oxygen as deco gas with a gradient factor of 50/80, assuming a bottom time of 20 minutes:

  • Depth: 50 meters
  • Gas: 18/45 trimix
  • Deco gas: 50% oxygen
  • Gradient factor: 50/80
  • Algorithm: Bühlmann with ZH-L16C tissue model

Bottom time: 20 minutes

Decompression schedule:

  • Stop at 21 meters for 3 minutes
  • Stop at 18 meters for 3 minutes
  • Stop at 15 meters for 4 minutes
  • Stop at 12 meters for 4 minutes
  • Stop at 9 meters for 7 minutes
  • Stop at 6 meters for 11 minutes
  • Stop at 3 meters for 15 minutes
Note that the decompression schedule provided above is just an example, and actual decompression planning should take into account various factors such as your individual diving profile, equipment, dive plan, and environmental conditions. You should always consult with a qualified dive professional and use proper dive planning software to create a decompression schedule tailored to your specific needs.

Here is a non-AI generated plan:

Dec to 50m (2) Trimix 18/45 18m/min descent.
Level 50m 17:13 (20) Trimix 18/45 1.07 ppO2, 18m ead, 23m end
Asc to 21m (23) Trimix 18/45 -9m/min ascent.
Stop at 21m 0:47 (24) Nitrox 50 1.54 ppO2, 10m ead
Stop at 18m 2:00 (26) Nitrox 50 1.39 ppO2, 8m ead
Stop at 15m 2:00 (28) Nitrox 50 1.24 ppO2, 6m ead
Stop at 12m 2:00 (30) Nitrox 50 1.10 ppO2, 4m ead
Stop at 9m 2:00 (32) Nitrox 50 0.95 ppO2, 2m ead
Stop at 6m 19:00 (51) Nitrox 50 0.80 ppO2, 0m ead
Surface (52) Nitrox 50 -6m/min ascent.
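
For anyone curious what planning software like the above is actually computing at each stop, here is a minimal sketch of the gradient-factor-modified Bühlmann ceiling for a single tissue compartment (Erik Baker's formulation). The coefficients are illustrative values I've plugged in, not something to dive on; verify everything against published ZH-L16C tables.

# Gradient-factor-modified Buhlmann ceiling for ONE tissue compartment.
# Educational sketch only; never plan a real dive with this.

def ceiling_bar(p_tissue_bar, a, b, gf):
    # Deepest tolerated ambient pressure (bar) for this compartment.
    #   p_tissue_bar: inert-gas pressure in the compartment (bar)
    #   a, b:         Buhlmann coefficients for the compartment
    #   gf:           gradient factor, 0.0-1.0 (e.g. 0.50 at the first stop, 0.80 at the surface)
    return (p_tissue_bar - a * gf) / (gf / b + 1.0 - gf)

# Illustrative coefficients, roughly the fastest N2 compartment of ZH-L16.
a, b = 1.1696, 0.5578
print(ceiling_bar(p_tissue_bar=2.5, a=a, b=b, gf=0.50))  # limit at the first stop (GF low)
print(ceiling_bar(p_tissue_bar=2.5, a=a, b=b, gf=0.80))  # limit for surfacing (GF high)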
 
https://www.shearwater.com/products/swift/
