bill_schubert: (Default)
[personal profile] bill_schubert
I'm behind on AI.  That has not bothered me any but I've been intrigued.  Part of the reason I've not used it much is I'm not working and work has so much more to do with using AI than playing pickleball and napping.

BUT when Dana was in the hospital both ChatGPT and Gemini offered some real insights into what was happening to her and what it meant.  I've been, for a while, a proponent of using AI for medical information.

SO I bought a Pixel Ear Bud 2.  Since they come in pairs to match one's ears I guess I bought two.  I'm interested in developing a relationship with an AI to discuss my medical issues.  I don't have many but I'd like to start feeding my particular beast so that a year from now it will have all the information about my medical history and be able to discuss it with me when something happens.

Or something like that. 

I don't know what I don't know yet.  But for $17 a month for a year (the 0% Google loan) I can find out.  It might be a nothing deal for getting good head plugs.  I've been using bone conduction headsets for years from my bike riding days but these seem to have some kind of magic that might be fun.  As much as I walk around with dogs and such it will be interesting.

Hmm ...

Date: 2025-05-06 21:17 (UTC)
ysabetwordsmith: Cartoon of me in Wordsmith persona (Default)
From: [personal profile] ysabetwordsmith
>> I've been, for a while, a proponent of using AI for medical information.<<

What are your thoughts regarding cases where AI has given not just wrong but downright dangerous advice? For example, advising people with eating disorders to use a weight-loss diet.

Re: Hmm ...

Date: 2025-05-06 21:26 (UTC)
ysabetwordsmith: Cartoon of me in Wordsmith persona (Default)
From: [personal profile] ysabetwordsmith
Sure, if you know the information is wrong or the advice is bad. I have that problem all the time with medics, even official sources. It's why I never take them seriously without thoroughly fact-checking whatever they said.

But what about people who are asking because they don't know? They won't easily be able to distinguish what is right or wrong without triangulating across multiple sources, which most people are not trained to do. What about people in a vulnerable condition? They may not be able to think clearly because of (in the cited case) a disorder, or medication, or they just feel like crap and that makes everything harder. Is there potentially a way to make AI work for sharing information, or is it just too risky?

Date: 2025-05-06 21:57 (UTC)
susandennis: (Default)
From: [personal profile] susandennis
ohhhhhh wonderful!! I can't wait to hear (hahaha see what I did there?)
