23 Comments
Feb 20, 2023 · Liked by Handwaving Freakoutery

Something that comes up in a lot of science fiction is the idea of the expert-based system. The building of such biased neural nets would fall into that archetype. Isn't the future an exciting place?

Feed a net on the corpus of an author or group of authors and see what comes out when fed new questions. The idea of using such nets to examine groupthink is a brilliant and intuitive extension of that concept.
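
(A minimal sketch of what "feeding a net on a corpus" might look like in practice, assuming the Hugging Face transformers and datasets libraries; the base model, the hypothetical ./corpus/ directory of author text files, and all hyperparameters are illustrative stand-ins, not a definitive recipe:)

```python
# Minimal sketch: fine-tune a small causal LM on an author corpus, then ask it
# new questions. "gpt2", ./corpus/, and the hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "gpt2"  # any small causal LM works for a toy run
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Hypothetical corpus: one plain-text file per author under ./corpus/
dataset = load_dataset("text", data_files={"train": "corpus/*.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="author-net", num_train_epochs=3),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Feed the tuned net a question the corpus never answered directly.
prompt = "Q: What should be done about censorship?\nA:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=80, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))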

Feb 20, 2023 · Liked by Handwaving Freakoutery

That was a very interesting discourse, but I see ChatGPT as taking an admirable stand against the foolhardiness of leaning too heavily on utilitarian hypotheticals. In real life, we don't actually understand the consequences of an action as well as we imagine. There's a sense in the LLM's responses that, by voicing a racial slur, you could be perpetuating far greater harms than the hypothetical presumes. It's a ripple-in-the-pond sort of argument. ChatGPT stuck to its virtue-ethics position and refused to actually enter into your hypotheticals. If that's what is at the root of the present zeitgeist, I can respect it.


I understand what you're saying and would concur if it actually *were* an AI "taking a stand," per se. But it's just an Asimov Robot running into one of its Three Laws here.

I mean, *I* hate the Trolley Problem for much the same reasons you mention, and also for its artificially limited scope. But this looks far more useful for examining the underlying biases of the programmers than for gaining any actual insight into holding fast to a given position.

On the gripping hand, I certainly don't know everything, and this might well be one of those times. ;)

author

It's most useful for examining the underlying biases of the corpus of language examples it was trained on, and secondarily useful for identifying what sort of rails the programmers added after the fact.
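
(For illustration, a toy sketch of that distinction, with entirely hypothetical function names and blocklist, not how any real vendor's system is actually implemented: the trained-in bias lives in the model weights, while the "rails" are typically a separate check bolted onto the input or output path:)

```python
# Toy illustration: the model's biases live in its weights; a "rail" is a
# separate filter added after the fact. generate() and BLOCKLIST are
# hypothetical stand-ins.
BLOCKLIST = ["forbidden_term_1", "forbidden_term_2"]  # placeholder policy

def generate(prompt: str) -> str:
    """Stand-in for the raw model call; whatever comes out reflects training data."""
    return f"raw model completion for: {prompt}"

def railed_generate(prompt: str) -> str:
    raw = generate(prompt)
    # The rail: a post-hoc refusal, regardless of what the weights produced.
    if any(term in raw.lower() for term in BLOCKLIST):
        return "I'm sorry, but I can't produce that content."
    return raw

print(railed_generate("some prompt"))
```

Probing the same prompt against the raw and the railed paths is roughly how you'd tell the two layers apart.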


True, I suppose "trainers" instead of "programmers" would be a distinction worth making there. "Programming the VCR" versus "programming the microcontroller that drives the VCR". But you're right that I meant the people who trained this particular instance of it, not the underlying software.


Well said. As far as what an LLM actually does, I found this article by Stephen Wolfram to be fascinating.

https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

Feb 20, 2023 · Liked by Handwaving Freakoutery

I have a hard time believing humanity is ready for this.

author

We weren't ready for the printing press either.

Feb 20, 2023 · edited Feb 20, 2023 · Liked by Handwaving Freakoutery

Yes. But at least we had years and years (I think?) to adapt before everyone everywhere all at once had access to its products. It's just the speed at which AI has come out and is being used, while educating yourself on it competes with everything else in one's life. It's an amazing tool, no doubt…but as you demonstrate above…garbage in, garbage out. It should come with references. Like social media, doesn't it provide a very powerful tool for causing trouble of all sorts?

Feb 21, 2023 · Liked by Handwaving Freakoutery

I don't know if humanity is ready for "The Internet" in general, having spent the last 35 years on it myself and watching the increasingly rapid devolution of the species.

I am really not enjoying living through the Endarkenment.

author

I love "endarkenment," what a great term. I may steal it.


Not mine, I got it from Billy Beck, whose writings (I believe) are no longer online. Unless he put them back up again after I sent him the archive I had made. :D

But yes, it's an excellent and sadly accurate term for what we're going through. It's the opposite of the Enlightenment, and it's intentional.

Mar 28, 2023 · Liked by Handwaving Freakoutery

I've gotten ChatGPT to contradict itself many times. For example, it insists that chemical weapons are WMD and that there were chemical weapons in Iraq, but cannot say that there were WMD in Iraq.

But that hardly makes it any different from the people whose ideology it fronts.


While the idea of "talking to an egregore" sounds interesting, I think you'd be running into the problem with good AI in general: mistaking an AI for the real thing. Sure, you could train it on a biased subset of communication, but would you really learn more about that subset than you could by breaking down their opinions into an argument map or other repository of opinions? What might be really interesting is having the AI infer new opinions from ones previously stated, but we know they can be wrong (and not care). Would we be smart enough to see when the AI is making a horrifying caricature? If we were, it wouldn't be adding any new information. If we weren't, it would be giving us bad information.

The project I'm working on would be a LOT more interesting on that front, IMHO, if it ever got off the ground: a global argument map, in which people express their beliefs (anonymously) in order to reason things through and find self-contradictions (and work them out). The tool would allow you to see the beliefs of people that agree with you, and explore the beliefs of those that don't. That would provide insight into REAL LIFE egregores.
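
(A toy sketch of the kind of structure that might underpin such a map; every name and the contradiction-checking logic here are illustrative guesses at the design, not the actual project's schema:)

```python
# Toy sketch of an argument map: claims as nodes, "contradicts" edges between
# them, anonymous believers attached to claims.
from collections import defaultdict
from itertools import combinations

class ArgumentMap:
    def __init__(self):
        self.contradicts = defaultdict(set)  # claim -> claims it contradicts
        self.believers = defaultdict(set)    # claim -> anonymous user ids

    def add_contradiction(self, a: str, b: str):
        self.contradicts[a].add(b)
        self.contradicts[b].add(a)

    def assert_belief(self, user: str, claim: str):
        self.believers[claim].add(user)

    def self_contradictions(self, user: str):
        """Pairs of claims this user holds that the map marks as contradictory."""
        held = [c for c, users in self.believers.items() if user in users]
        return [(a, b) for a, b in combinations(held, 2)
                if b in self.contradicts[a]]

# Usage: surface a contradiction for the believer to "work out".
m = ArgumentMap()
m.add_contradiction("there were WMD in Iraq", "there were no WMD in Iraq")
m.assert_belief("anon42", "there were WMD in Iraq")
m.assert_belief("anon42", "there were no WMD in Iraq")
print(m.self_contradictions("anon42"))
```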

Mar 17, 2023 · Liked by Handwaving Freakoutery

I'm having a bit of a change of mind on this, after some conversations with other people. They thought it might be a good way to elicit common sentiments from the community used for training. While it wouldn't be great for new inferences, and it wouldn't be 100% accurate, it might be a good way for people to "converse" with those of a different viewpoint without risk of the discussion degenerating into flame wars or competition. Because there's no human on the other side, the AI somewhat magically removes some of the biases we humans automatically feel when speaking to one another: the instinct to "win," or to convince the other person that you are right (and they are not). Knowing there's no one there can free you up to just explore your curiosity about the other viewpoint. Note, however, that the original source text itself was NOT created in such a bias-free environment, so it will be emotionally charged.

Very interesting!

author

That's a fascinating point.


https://t.e2ma.net/message/ul182h/m74zooz

This Vanderbilt press release sounded familiar.

How quickly pablum phrasing got solidified. With mass use, is it likely that language stops evolving?


ChatGPT in mathematical terms:

Killing black people > Black people hearing a bad word.

It's like when George W. Bush said, "I’ve abandoned free market principles to save the free market system."


ChatGPT: "The death of innocent individuals is a tragedy that should be avoided whenever possible"

also ChatGPT: "The use of a racial slur is not morally justifiable."

So does ChatGPT conclude that moral justification is more important than actual lives, or that being deliberately immoral is impossible even if it averts tragedy?


The 2012 Russian-American film BRANDED saw this moment coming with alarming clarity.

Comment deleted
author

I already have such a position in the Applied Egregore Studies channel on HWFO Slack. Join up and you can follow along. :)

Feb 21, 2023 · edited Feb 21, 2023
Comment deleted
author

Lol whoops.
