51 Comments
Mar 31, 2023 · Liked by Handwaving Freakoutery

Peripherally related, as I said to a twitter follow earlier today,

we may be changing from a scarcity to a post-scarcity civilization, but the trick is in that transition, and more likely than not, it's going to involve a whole lot of people getting killed first. Potentially all of them.

Mar 31, 2023 · Liked by Handwaving Freakoutery

Eliezer's position is very clearly that an AR-15 won't help because you're just way too fucked for that.

Mar 31, 2023 · Liked by Handwaving Freakoutery

This is a state I like to refer to as "superturbofucked".


*supercalifragilisticexpialidociously fucked 🤸

Mar 31, 2023 · Liked by Handwaving Freakoutery

Build a bespoke AI whose sole focus is detecting and destroying other AIs, as well as anyone directly involved in trying to construct any AI but the killer AI itself. Problem solved. I'm off for a beer. ;-)


And what's the easiest way to destroy *anybody* who could conceivably build an AI? You still wind up with paperclips; yours are just dead humans, more directly.


Haha. I see what you did there. You changed what I said, then knocked down that straw man. I didn't say build an AI who can destroy "anybody who could conceivably build an AI," did I?

Regardless, friend, my comments were simplistic and made in jest. Reductio ad absurdum.

I claim no expertise in the area of AI or even killer AIs. Cheers!


Not really. I just pointed out that the basis of this article is instrumental convergence: the idea that a suitably powerful AI, given any goal that is not intrinsically satisfiable, may cause enormous harm through the instrumental goals it sets up to achieve its ultimate goal. So an AI that is told to make as many paperclips as possible may destroy humanity in the process by converting the biosphere to paperclips, because there is no maximum number of paperclips in the universe. An unachievable final goal leads to cataclysmic instrumental goals. We need to figure out how to design AI so that even simple commands or goals aren't taken to orders of magnitude that human society can't survive.

In your scenario, there is no way to stop humanity from making a potentially infinite number of dangerous AIs over a nearly infinite amount of time, so the goal to DESTROY ALL AI creates the instrumental goal of destroying humanity, since only humanity can build more AIs.
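
For what it's worth, here is a toy sketch that makes the "no maximum number of paperclips" point concrete. Everything in it (the resources, the conversion rates, the greedy loop) is invented purely for illustration and has nothing to do with how any real AI system is built:

```python
# Toy caricature of an unbounded maximizer. The "world", the resources, and
# the conversion rates are all invented for illustration; nothing here
# resembles how a real AI system works.

WORLD = {"iron ore": 1_000, "factories": 5, "biosphere": 1}
CLIPS_PER_UNIT = {"iron ore": 10, "factories": 5_000, "biosphere": 10**9}

def run_unbounded_maximizer(world):
    total_clips = 0
    # The goal is "as many paperclips as possible." There is no target count,
    # so no state of the world ever counts as "done."
    while any(world.values()):
        # Instrumental step: greedily convert whichever remaining resource
        # yields the most paperclips. Nothing else appears in the objective.
        best = max((r for r, amount in world.items() if amount > 0),
                   key=lambda r: CLIPS_PER_UNIT[r] * world[r])
        total_clips += CLIPS_PER_UNIT[best] * world[best]
        world[best] = 0
    return total_clips

print(run_unbounded_maximizer(WORLD), "paperclips, and nothing left to convert")
```

The only point of the sketch is that the loop has no stopping condition other than "nothing left to convert," which is the instrumental convergence worry in miniature.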

Apr 4, 2023 · Liked by Handwaving Freakoutery

"there is no maximum number of paperclips in the universe"

There is no maximum number of human beings in the universe, either.

I gotta say, as much as I am a fan of the classic SF canon, those dudes seem to have gotten the alleged future capability of Homo Sapiens to colonize space completely wrong. Can't even do the damn Moon or Mars. Sittin' here on this hot rock, stewing.

Apr 4, 2023 · Liked by Handwaving Freakoutery

Maybe "maximize human happiness" is a good ultimate goal, but I bet that too would get ugly quick.


Yeah. I mean, we all got different definitions of what happiness is. Mine floats somewhere around the level of "minimized annoyance" but for some others it is "a castle, with servants."

Apr 1, 2023 · Liked by Handwaving Freakoutery

Okay, not to be nit-picky, but any time someone starts going off on how "we're all going to die because", I immediately get turned off.

Since you've posted this, I've spent some time trying to figure out what I'm going to be needing my stockpiled guns and ammo for. What is the AI-Moloch demi-god of death nuclear volcano going to bring about that will necessitate a Book of Eli response on my part?

author

I'm not saying AI will be the end of the world. I mean, it might be, but it might not be. What I'm saying is that anyone who proclaims the end of the world but doesn't act on it may not buy their own bullshit.


In a world of AI hegemony, firearms and ammo are not relevant. A two-liter bottle of Coke, however -- spilled on the correct server -- that's practically a nuke.


I'd like to buy the world a Coke.

Mar 31, 2023 · Liked by Handwaving Freakoutery

Interestingly, despite having first encountered Eliezer online over 25 years ago, I cannot predict whether he does or does not already own an AR. He might well have had one even before writing that.


I don’t understand the fluster over AI. What precisely is the cause of alarm?

(1) To me, AI appears to be high level curve fitting. It is trained to replicate a certain data set and spits out exactly what you give it. It doesn’t really “learn”. It just synthesizes a whole bunch of data. Am I wrong in this?

(2) Is AI creative? Does it have the ability to reassociate (i.e. "move the brackets around" to see familiar data in a new way) or substitute in an equivalent expression to open up new possibilities the way a mathematician or poet would?

If it lacks these abilities, then it’s not intelligent. The term “artificial intelligence” seems like a deliberate misnomer designed to scare people raised on Terminator.


"... It just synthesizes a whole bunch of data. ..."

This describes a lot of well paid work performed by lawyers, accountants, journalists, authors, professors, doctors (except for the hands-on bit, but don't worry, we have robots), investors ... this list can easily balloon to cover a large portion of the white collar economy. In a circumstance where vast swathes of blue collar opportunities have been offshored, and vast swathes of the white collar service work people were told to take up instead go to AI, people will -- not figuratively -- starve.

There are some sayings about missed meals -- maybe nine, maybe three, hopefully it's just hyperbole: "there are only nine meals between mankind and anarchy." https://quoteinvestigator.com/2022/05/02/nine-meals/


Meh. We don't (and won't) have self-driving cars less for technical reasons than because of liability.

Lawyers, accountants, doctors... these credentialed professionals stake their reputations and livelihood to provide their services. They can be held accountable.

Investors? Hahaha! Everybody knows what they're for: they ABSORB RISK.

Now, when they invent an artificial scapegoat and convince everybody to actually take it seriously, then we're truly screwed. Not kidding either: I refer you to Jane Jacobs' gloomy final work: https://www.goodreads.com/book/show/85397.Dark_Age_Ahead


"and convince everybody to actually take it seriously"

This is the key part. I am amazed at the freakoutery over it but I guess a lot of people have bullshit jobs that can be performed as effectively by a faux-midwit computer -- or *think* they can. It doesn't really say a lot for the genuine self-respect of the credentialed classes.


Interesting point. A lot of people (and by that, I mean a lot of feds and non-profit drones) *do* have bullshit office jobs that produce nothing but eyestrain and paper cuts. If widespread AI results in shifting those people into the useful economy, my opinion on AI might turn around...


Tax preparer seems like a solid possibility. A tax preparer who *truly* knows the tax code because it actually read and remembered the *entire thing*.

This thought brought to you by my having just done my taxes. 🤪


"This thought brought to you by my having just done my taxes."

Apparently you made enough money to have to pay taxes, buddy, but not enough to not pay taxes. There's a leisure class at both ends of the socioeconomic spectrum. It's like The Horseshoe Theory, but more economical.


Dark Vision: AI tax prep enables an exponentially more complicated tax code. Yikes!


The next luddites will be white collar workers spilling coffee on servers rather than workers of physical crafts smashing industrial machinery.


I'd be more worried about AI taking over everything if I could talk to one of those robot phone trees without ending up in a screaming rage because it can't understand a single thing I'm saying. 🤣

Apr 4, 2023 · edited Apr 4, 2023

I'm in one of those jobs I suspect will constrict over the next decades as AI grows in power. It isn't a lack of self-respect that makes me think this, but more a sense of realism -- as if I'm a scribe viewing the first test prints on Gutenberg's press, realizing that my services, which had been of great value, are on a countdown to irrelevance.

It is obvious that what I do will be impacted by AI -- the only question is how quickly and at what magnitude of displacement. That doesn't change the fact that what I've done for my clients over the decades was valuable to them when I did it, because there was no capable AI at the time.

Apr 4, 2023 · Liked by Handwaving Freakoutery

I'm still waiting for the low-work/no-work society that 19th-century Utopians promised us was just around the corner due to the efficiencies of mechanization. It is funny how when the work disappears the misery doesn't.

Apr 4, 2023 · Liked by Handwaving Freakoutery

This comment, I think, is the lid of a box full of some really deep shit. Just some of the subjects: human nature re possession and sharing; distribution of productivity gains; purpose of an economy; "fair" is just another four-letter F word; what happens if there are no Ford factories for the buggy whip makers; should non-contributors get anything; so much I can't even begin organizing it. That last sentence could be the first sentence of a book!


To be fair to Eliezer, he's been warning about this as loudly and insistently as he can for two decades... I have no doubt, personally, that he believes everything he says in the article. If you listen to his voice in his interview with Lex Fridman from yesterday, you can tell that he is genuinely frustrated, terrified, and all but despairing: https://podcasts.apple.com/us/podcast/lex-fridman-podcast/id1434243584?i=1000606616193

And Scott Alexander has been writing on this for a while too and definitely sees the Moloch connection (which, like, he'd better, of course): https://astralcodexten.substack.com/p/why-i-am-not-as-much-of-a-doomer

I really really hope that Scott's more right than Eliezer—it would be much much better if Curtis Yarvin turned out to be right: https://graymirror.substack.com/p/the-diminishing-returns-of-intelligence


ChatGPT, or rather OpenAI, buys AWS services for $3 million a day to keep up with server demand.

Now, what will happen to the AI if we can't pay that exorbitant daily feed fee?

Exactly. AI is here, it's insanely useful and probably cheaper than most junk office labour, but that's about it. If AI starts to collapse the economy... no more server farms for it. Its digital habitat depends on systems beyond its direct control, which would be the first to crash in a financial meltdown.


AR-15s are like $400 now. AKs are $800+. Weirdest price trend in my lifetime; wish I had seen it coming in 1993.


Yeah, it's weird. If I'd had any idea, I'd have bought a whole crate of SLR-95s, instead of just the two I did get.


I guess *in theory* it's all of the Russian import bans/new ATF restrictions of the last 30 years, but also *in theory* the AK should be a significantly cheaper design to fabricate. Plus there are plenty of major AK manufacturers who aren't under the same level of US sanctions -- Bulgaria, Romania, Egypt etc. Somebody who understands the biz better than I might be able to explain the whys and wherefores. Regardless, the $800-1K+ Made-In-USA AK is with us as The Current Thing now; I suppose the price is what the market will bear. I, a cheapoid, just don't feel like paying it.

Remember when you could pick up Bulgie and IMEZ Makarovs for $100-minus all day long? Chinese/Yugo SKSs too. An AK wouldn't set you back much more than $250. God I miss the end of the Cold War.

(Sorry HWFO, way off topic)


I think Elon and his buddies are all posturing like they are because they're owned by the Intelligence Community, and we're already a ways down your Scenario 6. They're just trying to reduce the marginal cost of running their operation and increase its short-term effectiveness by agitating for a "ban" that would reduce (above-board) competition.

Apr 2, 2023 · edited Apr 2, 2023

That was fun. Creating Moloch scenarios is always a gas. How many firearms can one person use at a time? How can one afford lots of arms and also a loyal army? As suggested, one could use A.I. to program more stuff for A.I., stuff that would be wildly lucrative, and use the A.I. to create culty manufacturing-consent apps to ensure the loyalty of the soldiers in one's army. One would also need lots of A.I.s to help train up one's military assets so that they're super competitive. Also, if one is rich, one can afford a few friends to go with one's military, making life more enjoyable until the Big Kahuna A.I. turns one into a paperclip. https://www.tabletmag.com/sections/news/articles/guide-understanding-hoax-century-thirteen-ways-looking-disinformation


> The online “Rationalist” cult lexicon contains many in-group terms for complicated concepts, the most important of which in my opinion is “Moloch.”

I have to go with "Motte and Bailey" here.


I'll concur. I use that one a heck of a lot more often than "Moloch."


How does having an AR-15 help us fight AI? I have a more antiquated carbine, but how does that plus a "spam can" or five help us fight the you-say-likely advance of AI? I really want to know.

author

Any "AI destroys us all" scenario is going to play out very much like any other doomsday prepper scenario. It's going to be associated with a societal breakdown and anarchy. The gun doesn't stop the AI from doing whatever it's going to do, it improves your personal odds while it does whatever it does.

Comment deleted

I say this in the politest way possible, but while .50 BMG shots have been made at that range, I would be surprised if I happened to be in the comments section with someone who can make those kinds of shots. Hey, I can't either, and I'm pretty good with a rifle. But 3500 in the dark requires some serious black magic.

I usually limit my claims to 1600 meters in still air, in daylight. I figure a mile of "reach out and touch someone" is good enough. 🤪

author

I hit a 1 ft gong at 1000 yds with an AR-10 in 308 at SHOT Show, but it was with a rifle someone else zeroed. I would never trust myself with a shot like that in real life.


Long-range rifle shooting is funny. In a very real way, the actual shooting part of the equation (aligning the sight with the target, building a stable position, firing the rifle without disturbing the above) is the easy part. The hard part is accurately measuring the environmental factors that will affect the bullet in flight and calculating the necessary changes to the aiming point. Shooting a target past the trans-sonic range of your cartridge makes the second part orders of magnitude more difficult.

Ironically, AIs would be very good at that second part. One of the midwit office jobs obsoleted by wide scale AI adoption might be the military and police sniper...
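
A crude illustration of that "second part": the sketch below integrates a point-mass trajectory with a constant drag coefficient. The muzzle velocity, bullet dimensions, and drag number are rough assumptions for a .308 / 175 gr match load; a real solver would use measured G1/G7 drag curves plus wind, density altitude, spin drift, and so on. This just shows the shape of the calculation.

```python
import math

GRAVITY = 9.81          # m/s^2
AIR_DENSITY = 1.225     # kg/m^3, sea-level standard atmosphere
SPEED_OF_SOUND = 343.0  # m/s at roughly 20 C

def flat_fire_solution(muzzle_velocity=790.0,   # ~2,600 fps, assumed
                       mass_kg=0.01134,         # 175 gr bullet
                       diameter_m=0.00782,      # .308 bore
                       drag_coefficient=0.30,   # crude constant-Cd assumption
                       target_range_m=914.4,    # 1,000 yards
                       dt=0.001):
    """Integrate a point-mass trajectory out to target_range_m."""
    area = math.pi * (diameter_m / 2.0) ** 2
    k = AIR_DENSITY * drag_coefficient * area / (2.0 * mass_kg)

    x = y = t = 0.0
    vx, vy = muzzle_velocity, 0.0
    while x < target_range_m:
        v = math.hypot(vx, vy)
        ax = -k * v * vx                # drag opposes the velocity vector
        ay = -k * v * vy - GRAVITY      # drag plus gravity
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        t += dt
    return t, -y, math.hypot(vx, vy)

tof, drop_m, v_rem = flat_fire_solution()
print(f"time of flight ~{tof:.2f} s, drop below bore ~{drop_m:.1f} m, "
      f"remaining velocity ~{v_rem:.0f} m/s "
      f"({'supersonic' if v_rem > SPEED_OF_SOUND else 'subsonic'})")
```

Even this cartoon version makes the point: the shooting fundamentals don't appear anywhere in it, and everything that does appear is exactly the kind of measurement-plus-arithmetic a computer is better at than a person.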


1000 yards with .308 is harder than one might normally think, as .308 drops back to subsonic around 800 yards, and that does unhelpful things to the flight path a lot of the time. Not that 1000 yards is something most people think of as easy; it's just that the inherent ballistics of .308 make it even *more* of a challenge. 🤪

Comment deleted

There are some advantages to living in the desert. We may not have any water, but there's plenty of space for long distance shooting! 😁


Anything within 10 yards better be fearful of me.

Comment deleted

That's a much more reliable method of tagging things at that range. But yes, hella expensive, and I'm not sure about running tracers through a nice expensive long range rifle.
