WTF is that corrugated hose for? And what is AI putting on his ear?
LOL Cmon, man, get with the program.
It's called the willing suspension of disbelief. Or artistic license. You know, like many abstract artists have done over the years...
Hello Dr. Mike. That artistic rendition is pretty cool. Please share how you created that image.
ChatGPT. Uploaded the photo of me, and said to do it in Picasso style!
Also, do not hesitate to ask your ENT if he/she is a diver. A lot of what I know about middle ear issues while diving comes from an ENT friend who is also a diver. Obviously, any good ENT will be able to render an opinion, but an ENT diver will have actually put some of these techniques into practice.
I’m not sure that AI understands the “artistic” stuff. It’s giving you a rendition of reality based on your instruction and data available. I find it fascinating how it interprets that reality to produce such a distorted image.
It knows what "Picasso style" means from the training dataset, so the distortion is based on that prompt. Or better said, it accepts "Picasso style" as a valid input, and produces output based on that input, and the reference image. Sort of like how Microsoft Word "understands" what Times New Roman means...
Whether or not it actually understands anything in the human sense is a semantic discussion. We just observe results and infer. For that matter, I couldn't swear that anyone in this thread is not a bot, right? Or even anyone I know in real life. That's what solipsism is.
Wrong picture. I was referring to the diver with the OPV on his ear…

I’m going to MOMA on Tuesday to get my annual Rothko dose.
Brilliant!

Oh, got it. I made that for a lecture about ENT problems in diving that I give at Beneath the Sea. It started with a ChatGPT prompt, but that one required a lot of Photoshop work. This is a well-known frustration with trying to get images out of a large language model: the text processing portion works great, but it doesn't collaborate well with the image generation portion.
Here, I made a video about that problem: