Using ChatGPT to write comments/posts about dive equipment


I think everyone is aware of the dangers of confusing correlation with causation. But in the case of AI, correlation is causation: correlation is the basis of the model weightings that determine the output. That is in contrast to deductive or inductive reasoning or targeted research, let alone hypothesis formulation and testing.
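To make that concrete, here's a toy sketch (my own illustration, not any real ML library) of a "model" whose weights are literally the feature/label correlations from the training data. A spurious correlation in the sample becomes part of the output, with no reasoning step in between:

```python
import statistics

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def fit(features, labels):
    """Weight each feature column by its correlation with the label."""
    cols = list(zip(*features))
    return [correlation(col, labels) for col in cols]

def predict(weights, row):
    """Score is just a correlation-weighted sum, thresholded at zero."""
    return 1 if sum(w * x for w, x in zip(weights, row)) > 0 else 0

# Feature 0 genuinely tracks the label; feature 1 is a spurious proxy
# that merely happens to correlate with the label in this sample.
X = [(1, 1), (1, 1), (-1, 1), (-1, -1), (1, -1), (-1, -1)]
y = [1, 1, 0, 0, 1, 0]
w = fit(X, y)
print(predict(w, (0, 1)))  # prints 1: decided purely by the spurious feature
```

The model never asks whether the spurious feature *causes* anything; correlation alone sets the weight, which is the point above.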

This leads to a real danger if people make decisions based on the output of putative AI software. Or worse, automate decisions based on the output of putative AI software.

We already have examples of this, e.g. the ProPublica investigation "Machine Bias: There's software used across the country to predict future criminals. And it's biased against blacks."
 
I think people like Sam Altman and the dude from the article have a vested interest in pushing the "skynet is going online soon" narrative. It does get a lot of attention and investment money, that's for sure.

Elon Musk is also highly respected and talks about how the new AI is scary.
Anyone pushing the skynet narrative either has zero clue, or an ulterior motive.

In the case of all these big-tech companies signing on to "pause AI" it's BLATANT ulterior motives. They're not ignorant. They employ hundreds of "AI" or "ML" engineers each. This latest round of fear-mongering around AI is entirely around control and anti-competitive practices.

And while I do like a number of things Elon has done, he's NOT getting a free pass from me on this one.
And even if we had skynet, a Reaper drone is not going to get maintenance or repair from a machine.
All the hi-tech stuff we have is dependent on a bunch of people in the background running and maintaining it. Never mind manufacturing.
Precisely.
Of course it is a matter of definitions.
But here in my University Campus we have a spinoff company which makes REAL AI, used for autonomous self-driving vehicles, which really understand what's happening in the environment around the vehicle, and take actions which have the potential of killing people.
So in comparison I consider everything lesser to not be TRUE AI.
If interested, this spinoff company is called Vislab and it has been acquired by Ambarella.
Here in Parma we have another spinoff called Henesis, doing even more advanced AI. They are dealing with modelling emotions, not just life-threatening decisions...
When it comes to "AI" I have no doubt that companies and people who make AI market it in a way that makes it sound like it's actually intelligent and actually understands concepts. People who make "AI" have been saying this for quite a while, and they will continue saying it well into the future. It's good marketing, it impresses people, it makes the "AI" seem like magic.

A simple question is often revealing, for example, "how does the software know it's supposed to avoid obstacles?" The simple answer is "a human told it to avoid those obstacles." "how does the software know what an obstacle is? or that something is an obstacle?" The answers are always the same.
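A minimal sketch of that answer (hypothetical names and thresholds, not any real autonomy stack): both the avoidance behaviour and the definition of "obstacle" are things a human wrote down, not things the software worked out:

```python
OBSTACLE_DISTANCE_M = 2.0  # a human picked this number

def is_obstacle(lidar_range_m: float) -> bool:
    # The software doesn't "know" what an obstacle is; it applies
    # a human-chosen criterion to a sensor reading.
    return lidar_range_m < OBSTACLE_DISTANCE_M

def control(lidar_range_m: float) -> str:
    # The avoidance behaviour itself is also just a human-written rule.
    return "brake" if is_obstacle(lidar_range_m) else "cruise"

print(control(1.5))   # prints brake
print(control(10.0))  # prints cruise
```

Replace the hand-picked threshold with a threshold learned from human-labeled examples and the answer to "how does it know?" is still "a human told it," one step removed.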

The current real danger is that people make decisions based on the output of putative AI software. Or worse, automate decisions based on the output of putative AI software.
Correct, which means the danger is PEOPLE (and not "AI").
 
Correct, which means the danger is PEOPLE (and not "AI").
Exactly.

AI is a tool and like any tool can be beneficial or harmful depending on how it is used.
 
None of the tools are actually remotely close to AI.
Modern AI is nowhere near smart, it's not intelligent. It's software with a few clever gimmicks…
…here in my University Campus we have a spinoff company which makes REAL AI, used for autonomous self-driving vehicles, which really understand what's happening in the environment…
Or worse, automate decisions based on the output of putative AI software.

There are a lot of overconfident statements here. Too many to address individually, so I'll just say: it's a common mistake to simply pronounce a product/service "not AI." AI is a broad, non-specific term. It's often roughly summarized as having the ability to process and communicate in ways that seem intelligent. The umbrella term "AI" can also cover machine learning, deep learning, neural nets, etc.

AI can mean different things to different people. To simply say that something is, or is not AI, is a bit like saying a 2023 Mercedes is a car, but a 1989 Camry is not. It depends on your criteria for "car."

I could suggest using the term "AGI" (Artificial General Intelligence) instead. That's a much narrower classification, meaning (roughly) able to replicate any intellectual task that humans or other animals can do. Of course, we don't have that — yet.
 
Handy chart from the ACM (Association for Computing Machinery). It's from a year or so ago & predates mainstream ChatGPT, so the examples may not be the most current, but the "Narrow," "Broad," and "General/AGI" categories are useful for thinking about these issues.

[Image: ACM chart, "Toward a Broad AI"]
 
Are you familiar with DeepL? It's a translation tool that uses neural networks to learn too.

I've been using DeepL for years, as well as Linguee, another service from the same German company (DeepL, originally Linguee GmbH). I spent a couple of years at Uni Freiburg, and still do lots of communicating in German, and DeepL is a good resource. It used to be better than Google Translate, but I think Google has closed the gap recently. I use both.
 
It used to be better than Google Translate, but I think Google has closed the gap recently. I use both.

I use both too, but I like DeepL way more for some tricky languages like French (vous vs. tu and other issues that Google simply doesn't handle well)
 
Modern AI is nowhere near smart, it's not intelligent. It's software with a few clever gimmicks, and if you're clever enough you can fool people that the software is doing something "intelligent."
There's a lot of over-confident statements here.
"over confident" is a subjective statement, implying but not outright saying someone is incorrect. I don't know how to work with that.

Am I incorrect?

There's been a trickle of voices, one that continues to grow, supporting the whole "pause AI" movement and showing up in all kinds of strange places, confidently promoted by people who don't really understand what AI is or how it works. They all implicitly promote the idea that something like skynet may be around the corner, ready to kill us all if we don't stop it now. I'd like to ignore it, but given that the people who control all the major social-media sites are behind it, and clearly promoting it, we're probably not many years away from them trying to push some law through Congress.

I have a massive problem with that because 1) it's not true, 2) the logical conclusion of "OMG skynet" is giving governments significant power to control all software development, and 3) it implicitly ensures the big-tech monopolies retain their complete control over the tech market, including their competitors. That's not even getting into 4) the massive First Amendment issues with such a law, if it were ever created.

My point about it being near impossible to define "AI" is really that there's almost no distinguishing feature between modern "AI" and "Software," except a marketing label. Which means, any such restrictions will apply to all software, not just "AI."
 
I like DeepL way more for some tricky languages like French (vous vs tu and similar other issues that Google simply doesn't take into account well)

I use DeepL for French, too- can you describe what you mean there? Familiar vs. formal is of course also an issue in German.
 
Modern AI is nowhere near smart, it's not intelligent.

That's not a meaningful statement. Both "AI" and "intelligent" are too vague to parse. Take a look at my ACM chart.
 
