• Re: ChatGPT Writing

    From jimmylogan@VERT/DIGDIST to Mortar on Fri Dec 26 17:08:43 2025
    Mortar wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to phigan on Tue Dec 02 2025 11:15:44

    Also, try asking your AI to give you an 11-word palindrome.

    Time saw raw emit level racecar level emit raw saw time.

    Not a palindrome. The individual letters/numbers must read the same in both directions. Example: A man, a plan, a canal - Panama.

    I know - I was just telling you what it gave me. :-)
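
    For what it's worth, the letter-level test Mortar describes is easy to
    script. Here is a minimal Python sketch (just an illustration, using the
    two phrases from this thread) that strips spaces and punctuation and
    compares the remaining characters to their reverse:

        import re

        def is_letter_palindrome(text):
            # Keep only letters and digits, lowercased, then compare the
            # result to its reverse.
            cleaned = re.sub(r'[^a-z0-9]', '', text.lower())
            return cleaned == cleaned[::-1]

        print(is_letter_palindrome("A man, a plan, a canal - Panama"))
        # True
        print(is_letter_palindrome(
            "Time saw raw emit level racecar level emit raw saw time"))
        # False - "saw" and "raw" have no mirrored "was" and "war"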



    ... You can never get rid of a bad temper by losing it.
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Mortar on Fri Dec 26 17:08:43 2025
    Mortar wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 07:57:33

    If you ask me a question and I give you an incorrect answer, but I
    believe that it is true, am I hallucinating? Or am I mistaken? Or is
    my information outdated?

    If you are the receiver of the information, then no. It'd be like if I told you about a dream I had - does that mean you experienced the dream?

    Right. But how does that compare to AI hallucinations?



    ... Tolkien is hobbit-forming.
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Nightfox on Fri Dec 26 17:08:43 2025
    Nightfox wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 08:58 pm

    But again, is it 'making something up' if it is just mistaken?

    In the case of AI, yes.

    Gonna disagree with you there... If Wikipedia has some info that is wrong, and I quote it, I'm not making it up. If 'it' pulls from the same source, it's not making it up either.

    For AI, "hallucination" is the term used for AI providing false information and sometimes making things up - as in the link I provided earlier about this. It's not really up for debate. :)

    :-) Okay - then I'm saying that in MY opinion, it's a bad word to
    use. Hallucination in a human is when you THINK you see or hear
    something that isn't there. Using the same word for an AI giving
    false information is misleading.

    So I concede it's the word that is used, but I don't like the use
    of it. :-)

    I've heard of people who
    are looking for work who are using AI tools to help update their resume,
    as well as tailor their resume to specific jobs. I've heard of cases
    where the AI tools will say the person has certain skills when they
    don't.. So you really need to be careful to review the output of AI
    tools so you can correct things. Sometimes people might share
    AI-generated content without being careful to check and correct things.

    I'd like to see some data on that... Anecdotal 'evidence' is not always scientific proof. :-)

    That seems like a strange thing to say.. I've heard about that from
    job seekers using AI tools, so of course it's anecdotal. I don't know what scientific proof you need to see that AI produces incorrect
    resumes for job seekers; we know that from job seekers who've said so.
    And you've said yourself that you've seen AI tools produce incorrect output.

    Sorry - didn't mean to demand anything. I just meant the fact that someone
    says it gave false info doesn't mean it will ALWAYS give false info. The
    burden is still on the user to verify output.

    I was using it to help me fill out a spreadsheet and had to go back
    and correct some entries. Had I turned it in 'as is' it would have
    had MY signature on it, and I would have been responsible.

    The job search thing isn't really scientific.. I'm currently looking
    for work, and I go to a weekly job search networking group meeting, and
    AI tools have come up there recently. Specifically, recently there was someone there talking about his use of AI tools to help customize his resume for different jobs & such, and he talked about needing to check
    the results of what AI produces, because sometimes AI tools will put skills & things on your resume that you don't have, so you have to make edits.

    By 'scientific data,' I guess I meant I'd like to see the output AND the
    input. I've learned that part of getting the right info is to ask the
    right question, or ask it in the right way. :-)

    If that's the definition, then okay - a 'mistake' is technically a hallucination. Again, that won't prevent me from using it as the tool it is.

    It's not a "technically" thing. "Hallucination" is simply the term
    used for AI producing false output.




    ... Support bacteria. It's the only culture some people are exposed to!
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Bob Worm on Fri Dec 26 17:08:43 2025
    Bob Worm wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Bob Worm on Wed Dec 03 2025 20:58:51

    Hi, jimmylogan.

    But that 'third option' - you're saying it didn't 'find' that somewhere
    in a dataset, and just made it up?

    The third option was software that ran on a completely different
    product set. A reasonable analogy would be saying that an
    iPhone runs MacOS.

    Just look at all the recent scandals around people filing court cases prepared by ChatGPT which refer to legal precedents where either the case was irrelevant to the point, didn't contain what ChatGPT said it did or didn't exist at all.

    I've not seen/read those. Assuming you have some links? :-)

    I guess you should be able to read this outside the UK: https://www.bbc.co.uk/news/world-us-canada-65735769

    Some others: https://www.legalcheek.com/2025/02/another-lawyer-faces-chatgpt-trouble/


    https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-fake-cases-made-up-by-chatgpt-judge-calls-it-unprecedented/

    https://www.theregister.com/2024/02/24/chatgpt_cuddy_legal_fees/

    It's enough of a problem that the London High Court ruled earlier this year that lawyers caught citing non-existent cases could face criminal charges. So I'm probably not hallucinating it :)


    I don't dispute those cases at all -- lawyers absolutely submitted filings with fabricated citations, and sanctions were appropriate.

    My point isn't that AI can't generate false information; it clearly can.
    It is that the failure mode there wasn't "AI deception," it was treating
    a text-generation tool as a source of truth and skipping verification.

    Courts don't sanction people for typos or bad research -- they sanction
    them for signing their name to claims they didn't check. AI doesn't change that responsibility.



    ... AVOID INTERNET SCAMS! Send $20 to learn how...
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Bob Worm on Fri Dec 26 17:08:43 2025
    Bob Worm wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Bob Worm on Wed Dec 03 2025 20:58:51

    Hi, jimmylogan.

    I mean... those are 11 words... with a few duplicates... Which can't even be arranged into a palindrome because "saw" and "raw" don't have their corresponding "was" and "war" palindromic partners...

    I just asked for it, as you suggested. :-)

    I think it was Phigan who asked but yeah, I guessed that came from an
    LLM rather than a human :)

    Not that I use LLMs myself - if I ever want the experience of giving
    very clear instructions but getting a comically bad outcome I can
    always ask my teenage son to do something around the house :D

    LOL!


    ... You can never get rid of a bad temper by losing it.
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Dumas Walker on Fri Dec 26 17:08:44 2025
    Dumas Walker wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Dumas Walker on Tue Dec 02 2025 11:15:44

    Google Gemini looked it up and reported that my trash would be picked up on Friday. The link below the Gemini result was the official link from the city, which *very* clearly stated that it would be picked up on Monday.

    Not sure where Gemini got its answer, but it might as well have been made up! :D

    LOL - yep, an error. But is that actually a 'made up answer,' aka hallucinating?

    Well, it didn't get it from any proper source so, as far as I know, it made it up! :D

    Probably in this case it looked up the NORMAL day and didn't keep looking.

    Just like us - we find what we think is the answer, so we don't keep
    looking. :-)

    Kinda like looking up the store hours and Google saying "Christmas Holiday
    may affect this." :-) Up to YOU to verify!



    ... If all the world is a stage, where is the audience sitting?
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to phigan on Fri Dec 26 17:08:44 2025
    phigan wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to phigan on Tue Dec 02 2025 11:15 am

    it's been flat out WRONG before, but never insisted it was

    You were saying you'd never seen it make stuff up :). You certainly
    have. Just today I asked the Gemini in two different instances how to
    do the same exact thing in some software. One time it gave instructions for one method, and the second time it said the first method wasn't possible with that software and a workaround was necessary.

    Time saw raw emit level racecar level emit raw saw time.

    Exactly, there it is again saying something is a palindrome when it
    isn't.

    Example of a palindrome:
    able was I ere I saw elba

    Not a palindrome:
    I palindrome I

    I'm not denying it can be wrong, as clearly it can be. My disagreement
    is with equating "wrong" with "made up."

    In both of your examples, a simpler explanation fits: it produced what
    it believed was the correct answer based on its internal model and was mistaken. Humans do that constantly, and we don't normally say they are
    making things up unless there is intent or invention involved.

    Calling every incorrect output 'made up' or a 'hallucination' blurs the distinction between fabrication and error, and I don't think that helps
    people understand what the tool is actually doing.



    ... Electricians have to strip to make ends meet.
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Nightfox@VERT/DIGDIST to jimmylogan on Fri Dec 26 17:40:25 2025
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Fri Dec 26 2025 05:08 pm

    For AI, "hallucination" is the term used for AI providing false
    information and sometimes making things up - as in the link I provided

    :-) Okay - then I'm saying that in MY opinion, it's a bad word to use. Hallucination in a human is when you THINK you see or hear something that isn't there. Using the same word for an AI giving false information is misleading.

    So I concede it's the word that is used, but I don't like the use of it. :-)

    Yeah, it's just the word they decided to use for that with AI. Although it may sound a little weird with AI, I accept it and I know what it means. There are other terms used for other things that I think are worse. :)

    Sorry - didn't mean to demand anything. I just meant the fact that someone says it gave false info doesn't mean it will ALWAYS give false info. The burden is still on the user to verify output.

    Yeah, that's definitely the case. And that's true about it not always giving false info. From what I understand, AI tends to be non-deterministic in that it won't always give the same output even with the same question asked multiple times.
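
    A toy way to picture that (made-up probabilities for illustration only,
    not how any particular model actually works): the model samples each
    next word from a probability distribution, so the same prompt can come
    back with different answers on different runs.

        import random

        # Hypothetical next-word probabilities, assumed for this sketch.
        next_word_probs = {"Friday": 0.6, "Monday": 0.4}

        for run in (1, 2):
            word = random.choices(list(next_word_probs),
                                  weights=list(next_word_probs.values()))[0]
            print(f"Run {run}: trash pickup is on {word}")

    Two runs can disagree even though the question never changed.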

    Nightfox

    ---
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Rob Mccart@VERT/CAPCITY2 to NIGHTFOX on Sun Dec 28 08:09:11 2025
    > Yeah, that's definitely the case. And that's true about it not always
    > giving false info. From what I understand, AI tends to be non-deterministic
    > in that it won't always give the same output even with the same question
    > asked multiple times.

    And a big part of the false info problem is that a lot of AI systems
    will never say "I don't know." If they can't find the correct
    answer, they will give you something else that may be close, if you're lucky.

    (If we don't want to say it 'made up' the answer..) B)

    ---
    þ SLMR Rob þ Ever stop to think and then forget to start again?
    þ Synchronet þ CAPCITY2 * Capitol City Online
  • From Mortar@VERT/EOTLBBS to jimmylogan on Mon Dec 29 15:10:34 2025
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Fri Dec 26 2025 17:08:43

    I've learned that part of getting the right info is to ask the right question, or ask it in the right way. :-)

    Reminds me of the old computer adage, "garbage in, garbage out". Or you could take the Steve Jobs approach: You're asking it wrong.

    ---
    þ Synchronet þ End Of The Line BBS - endofthelinebbs.com
  • From Mortar@VERT/EOTLBBS to Nightfox on Tue Dec 30 00:33:14 2025
    Re: Re: ChatGPT Writing
    By: Nightfox to jimmylogan on Fri Dec 26 2025 17:40:25

    ...it won't always give the same output even with the same question asked multiple times.

    If it was truly AI, it would've said, "You've asked that three times. LEARN TO READ!"

    ---
    þ Synchronet þ End Of The Line BBS - endofthelinebbs.com
  • From Rob Mccart@VERT/CAPCITY2 to MORTAR on Wed Dec 31 08:20:30 2025
    Reminds me of the old computer adage, "garbage in, garbage out".

    I think that saying works with People brains as well... B)

    ---
    þ SLMR Rob þ Another example of random unexplained synaptic firings
    þ Synchronet þ CAPCITY2 * Capitol City Online