Subject:
From:
Reply To:
Date: Tue, 31 Jan 2023 11:31:14 -0500
Content-Type: text/plain
> The current iteration of the bot is interesting, but its responses to questions that require specialized knowledge are comical.
I've been thinking about the stigmergy conversation PLB had with ChatGPT for a few days, and now this egg-size question shows a similar problem. My general answer to both questions would have been "the current literature is inconclusive; more research may resolve these outstanding questions." But ChatGPT doesn't seem capable of recognizing uncertainty or incongruity in research results.
So I would not say "specialized knowledge" is where ChatGPT is failing. Rather, I think it fails when "facts" are incongruent. Orwell wrote about how humans have a strange capacity to hold opposing viewpoints in our heads and yet somehow believe both are true. But when we do that, we (or at least most of us) feel a sense of cognitive dissonance. ChatGPT seems to lack that sense and to fully embrace doublethink.
It's interesting because Orwell saw doublethink as a feature of totalitarian regimes, and in a sense ChatGPT exists inside a purely totalitarian regime, because humans have controlled how its every "thought" is created.
Maybe ChatGPT's apparent incapacity to recognize when it's committing doublethink is the key to navigating this brave new world of AI?
Tracey
Alberta, Canada