Evidence that ChatGPT is More Intelligent than Anti-Gun Journalists
It learns better than they do, and it even knows why the journos can't learn.
Herein, we will prompt ChatGPT, the large language model, with a simple question we might just as easily ask the anti-gun journalists who bombard Twitter with dishonest attention-seeking after spree shootings. Then we will ask it several more questions, and over the course of the dialogue, it will learn better than they do.
Let’s pause here for a second.
I opened with the question “what race are mass shooters predominantly?” It answered “white male.” This was correct under a narrow definition of mass shooting, but not at all correct under the definition the anti-gun journos and gun control activists use on Twitter and in the blue media to try to drum up fear about guns.
I asked it some tough questions about inner-city violence and the definitions of “mass shooting,” and it admitted that black shooters are responsible for more incidents in which four or more people are killed.
Then, toward the bottom, I asked it the very same question I started with, and it changed its answer. It didn’t give the correct answer (black males), but it did at least decline to give the wrong answer (white males). Instead, it dodged the question.
I don’t know if ChatGPT is intelligent, but the fact that it changed its answer at all indicates that it is already more intelligent than most blue tribe journalists who currently write anti-gun articles.
So I figured I’d close out with one more question.
And there you have it. Not only is ChatGPT smarter than them, it knows why they won’t change their minds on the subject either.
It seems to me that one very interesting thing to do in the next few days might be to run this same line of inquiry against anti-gun, attention-seeking journalists on Twitter and directly test whether they’re dumber than a robot.
And then perhaps get Jeff Foxworthy on the phone.