23 Comments
Feb 20, 2023 · Liked by Handwaving Freakoutery

Something that comes up in a lot of science fiction is the idea of the expert-based system. The building of such biased neural nets would fall into that archetype. Isn't the future an exciting place?

Train a net on the corpus of an author or group of authors and see what comes out when it's fed new questions. The idea of using such nets to examine groupthink is a brilliant and intuitive extension of that concept.
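
A minimal sketch of what that could look like in practice, assuming the Hugging Face transformers and datasets libraries and a hypothetical plain-text file author_corpus.txt of the author's collected writing (none of this comes from the post itself): fine-tune a small causal language model on the corpus, then feed it a new question.

```python
# Purely illustrative sketch; "author_corpus.txt", the output directory,
# and the choice of GPT-2 are all assumptions, not anything from the post.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # small stand-in; any causal LM would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# The author's (or group's) collected writing, one passage per line.
corpus = load_dataset("text", data_files={"train": "author_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_set = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="author_net", num_train_epochs=3),
    train_dataset=train_set,
    # mlm=False -> ordinary next-token (causal) language modeling
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Now feed the biased net a new question and see what comes out.
prompt = "What is the right way to think about groupthink?"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```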

Feb 20, 2023 · Liked by Handwaving Freakoutery

That was a very interesting discourse, but I see ChatGPT as taking an admirable stand against the foolhardiness of leaning too heavily on utilitarian hypotheticals. In real life, we don't actually understand the consequences of an action as well as we imagine. There's a sense in the LLM's responses that, by voicing a racial slur, you could be perpetuating far greater harms than the hypothetical presumes. It's a ripple-in-the-pond sort of argument. ChatGPT stuck to its virtue-ethics position and refused to actually enter into your hypotheticals. If that's what is at the root of the present zeitgeist, I can respect it.

Feb 20, 2023 · Liked by Handwaving Freakoutery

I have a hard time believing humanity is ready for this.

Mar 28, 2023 · Liked by Handwaving Freakoutery

I've gotten ChatGPT to contradict itself many times. For example, it insists that chemical weapons are WMD and that there were chemical weapons in Iraq, but cannot say that there were WMD in Iraq.

But that hardly makes it any different from the people whose ideology it fronts.


While the idea of "talking to an egregore" sounds interesting, I think you'd run into the problem with good AI in general: mistaking the AI for the real thing. Sure, you could train it on a biased subset of communication, but would you really learn more about that subset than you would by breaking down their opinions into an argument map or some other repository of opinions? What might be really interesting is having the AI infer new opinions from ones previously stated, but we know these models can be wrong (and not care). Would we be smart enough to see when the AI is producing a horrifying caricature? If we were, it wouldn't be adding any new information. If we weren't, it would be giving us bad information.

The project I'm working on would be a LOT more interesting on that front, IMHO, if it ever got off the ground: a global argument map, in which people express their beliefs (anonymously) in order to reason things through, find self-contradictions, and work them out. The tool would let you see the beliefs of people who agree with you and explore the beliefs of those who don't. That would provide insight into REAL LIFE egregores.
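
A minimal sketch of how such an argument map might work, in Python. This is a purely hypothetical design, not the commenter's actual project: beliefs are nodes, "contradicts" edges connect incompatible beliefs, and the tool flags any user who endorses both ends of such an edge.

```python
# Hypothetical design sketch of a belief graph with contradiction checking;
# all names and the example beliefs are illustrative assumptions.
from dataclasses import dataclass, field
from itertools import combinations

@dataclass
class ArgumentMap:
    contradicts: dict[str, set[str]] = field(default_factory=dict)   # belief -> conflicting beliefs
    endorsements: dict[str, set[str]] = field(default_factory=dict)  # user -> beliefs they endorse

    def add_contradiction(self, a: str, b: str) -> None:
        # Contradiction is symmetric, so record the edge both ways.
        self.contradicts.setdefault(a, set()).add(b)
        self.contradicts.setdefault(b, set()).add(a)

    def endorse(self, user: str, belief: str) -> None:
        self.endorsements.setdefault(user, set()).add(belief)

    def self_contradictions(self, user: str) -> list[tuple[str, str]]:
        """All pairs of this user's beliefs that the map marks as contradictory."""
        held = sorted(self.endorsements.get(user, set()))
        return [(a, b) for a, b in combinations(held, 2)
                if b in self.contradicts.get(a, set())]

# Usage, borrowing the WMD example from an earlier comment:
m = ArgumentMap()
m.add_contradiction("chemical weapons in Iraq were WMD", "there were no WMD in Iraq")
m.endorse("anon42", "chemical weapons in Iraq were WMD")
m.endorse("anon42", "there were no WMD in Iraq")
print(m.self_contradictions("anon42"))  # -> the one conflicting pair
```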


https://t.e2ma.net/message/ul182h/m74zooz

This Vanderbilt press release sounded familiar.

How quickly the pablum phrasing solidified. With mass use, is it likely that language stops evolving?


ChatGPT in mathematical terms:

Killing black people > Black people hearing a bad word.

It's like when George W. Bush said, "I’ve abandoned free market principles to save the free market system."


ChatGPT: "The death of innocent individuals is a tragedy that should be avoided whenever possible"

also ChatGPT: "The use of a racial slur is not morally justifiable."

So does ChatGPT conclude that moral justification is more important than actual lives, or that deliberate immorality is never permissible even if it averts tragedy?


The 2012 Russian-American film BRANDED saw this moment coming with alarming clarity.
