Peripherally related, as I said to a Twitter follow earlier today,
we may be changing from a scarcity to a post-scarcity civilization, but the trick is in that transition, and more likely than not, it's going to involve a whole lot of people getting killed first. Potentially all of them.
Eliezer's position is very clearly that an AR-15 won't help because you're just way too fucked for that.
This is a state I like to refer to as "superturbofucked".
*supercalifragilisticexpialidociously fucked 🤸
Build a bespoke AI whose sole focus is detecting and destroying other AIs, as well as anyone directly involved in trying to construct any AI but the killer AI itself. Problem solved. I'm off for a beer. ;-)
And what's the easiest way to destroy *anybody* who could conceivably build an AI? You still wind up with paperclips -- yours are just more directly dead humans.
Haha. I see what you did there. Change what I said, then knock down that straw man. I didn't say build an AI that can destroy "anybody who could conceivably build an AI," did I?
Regardless, friend, my comments were simplistic and made in jest. Reductio ad absurdum.
I claim no expertise in the area of AI or even killer AIs. Cheers!
Not really. I just pointed out that the basis of this article is instrumental convergence: the idea that a suitably powerful AI, given any goal that is not intrinsically satisfiable, may cause enormous harm through the instrumental goals it sets up to achieve its ultimate goal. An AI told to make as many paperclips as possible may destroy humanity in the process by converting the biosphere to paperclips, because there is no maximum number of paperclips in the universe. An unachievable final goal leads to cataclysmic instrumental goals -- we need to figure out how to design AI so that even simple commands or goals aren't taken to orders of magnitude that human society can't survive.
In your scenario, there is no way to stop humanity from building a potentially infinite number of dangerous AIs over a nearly infinite span of time, so the goal DESTROY ALL AI creates the instrumental goal of destroying humanity -- since only humanity can build more AIs.
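To make the satisfiability point concrete, here's a toy sketch -- hypothetical names and logic, not anyone's actual agent design. A greedy chooser given an unbounded target never reaches a state where halting beats acquiring more resources; give it a bounded target and it stops:

```python
# Toy illustration of bounded vs. unbounded goals -- purely to show
# why "as many paperclips as possible" never reaches a halting state.

def next_action(paperclips, resources, target=None):
    """Greedy one-step chooser for a toy paperclip agent."""
    if target is not None and paperclips >= target:
        return "halt"                       # bounded goal: done is done
    if resources > 0:
        return "convert_resources"          # turn matter into paperclips
    return "acquire_more_resources"         # the instrumental goal kicks in

# Unbounded maximizer (target=None): no matter how many paperclips
# already exist, "halt" is never the chosen action.
print(next_action(paperclips=10**30, resources=0))              # acquire_more_resources
print(next_action(paperclips=10**30, resources=0, target=100))  # halt
```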
"there is no maximum number of paperclips in the universe"
There is no maximum number of human beings in the universe, either.
I gotta say, as much as I am a fan of the classic SF canon, those dudes seem to have gotten the alleged future capability of Homo sapiens to colonize space completely wrong. Can't even do the damn Moon or Mars. Sittin' here on this hot rock, stewing.
Maybe "maximize human happiness" is a good ultimate goal, but I bet that too would get ugly quick.
Yeah. I mean, we've all got different definitions of what happiness is. Mine floats somewhere around the level of "minimized annoyance" but for some others it is "a castle, with servants."
Okay, not to be nit-picky, but any time someone starts going off on how "we're all going to die because", I immediately get turned off.
Since you've posted this, I've spent some time trying to figure out what I'm going to be needing my stockpiled guns and ammo for. What is the AI-Moloch demi-god of death nuclear volcano going to bring about that will necessitate a Book of Eli response on my part?
I'm not saying AI will be the end of the world. I mean, it might be, but it might not be. What I'm saying is that anyone who proclaims the end of the world but doesn't act on it may not buy their own bullshit.
In a world of AI hegemony, firearms and ammo are not relevant. A two-liter bottle of Coke, however -- spilled on the correct server -- that's practically a nuke.
I'd like to buy the world a Coke.
Interestingly, despite having first encountered Eliezer online over 25 years ago, I cannot predict whether he does or does not already own an AR. He might well have had one even before writing that.
I don’t understand the fluster over AI. What precisely is the cause of alarm?
(1) To me, AI appears to be high level curve fitting. It is trained to replicate a certain data set and spits out exactly what you give it. It doesn’t really “learn”. It just synthesizes a whole bunch of data. Am I wrong in this?
(2) Is AI creative? Does it have the ability to reassociate (i.e., "move the brackets around" to see familiar data in a new way) or to substitute in an equivalent expression to open up new possibilities the way a mathematician or poet would?
If it lacks these abilities, then it’s not intelligent. The term “artificial intelligence” seems like a deliberate misnomer designed to scare people raised on Terminator.
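For what it's worth, the "high-level curve fitting" framing is easy to make concrete -- a minimal sketch, using a three-parameter polynomial as a stand-in for models that fit billions of parameters:

```python
# Least-squares curve fitting as a toy model of "it just synthesizes
# a whole bunch of data": good near the training set, bad far from it.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 1, size=x.shape)  # noisy "training data"

coeffs = np.polyfit(x, y, deg=2)   # fit a curve to the data
model = np.poly1d(coeffs)

print(model(5.0))    # interpolation: near the training data, looks "smart"
print(model(100.0))  # extrapolation: far from the data, the fit degrades
```

Whether that picture still describes what very large networks do off their training distribution is, of course, exactly what this thread is arguing about.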
"... It just synthesizes a whole bunch of data. ..."
This describes a lot of well-paid work performed by lawyers, accountants, journalists, authors, professors, doctors (except for the hands-on bit, but don't worry, we have robots), investors... the list can easily balloon to cover a large portion of the white-collar economy. In a circumstance where vast swathes of blue-collar opportunities have been offshored, and vast swathes of the white-collar service work people were told to take up instead go to AI, people will -- not figuratively -- starve.
There are some sayings about missed meals -- maybe nine, maybe three, hopefully it's just hyperbole: "there are only nine meals between mankind and anarchy." https://quoteinvestigator.com/2022/05/02/nine-meals/
Meh. We don't (and won't) have self-driving cars less for technical reasons than because of liability.
Lawyers, accountants, doctors... these credentialed professionals stake their reputations and livelihoods to provide their services. They can be held accountable.
Investors? Hahaha! Everybody knows what they're for: they ABSORB RISK.
Now, when they invent an artificial scapegoat and convince everybody to actually take it seriously, then we're truly screwed. Not kidding either: I refer you to Jane Jacobs' gloomy final work: https://www.goodreads.com/book/show/85397.Dark_Age_Ahead
"and convince everybody to actually take it seriously"
This is the key part. I am amazed at the freakoutery over it, but I guess a lot of people have bullshit jobs that can be performed as effectively by a faux-midwit computer -- or *think* they can be. It doesn't really say a lot for the genuine self-respect of the credentialed classes.
Interesting point. A lot of people (and by that, I mean a lot of feds and non-profit drones) *do* have bullshit office jobs that produce nothing but eyestrain and paper cuts. If widespread AI results in shifting those people into the useful economy, my opinion on AI might turn around...
Tax preparer seems like a solid possibility. A tax preparer who *truly* knows the tax code because it has actually read and remembered the *entire thing*.
This thought brought to you by my having just done my taxes. 🤪
"This thought brought to you by my having just done my taxes."
Apparently you made enough money to have to pay taxes, buddy, but not enough to not pay taxes. There's a leisure class at both ends of the socioeconomic spectrum. It's like The Horseshoe Theory, but more economical.
Dark Vision: AI tax prep enables an exponentially more complicated tax code. Yikes!
The next Luddites will be white-collar workers spilling coffee on servers rather than physical craftsmen smashing industrial machinery.
I'd be more worried about AI taking over everything if I could talk to one of those robot phone trees without ending up in a screaming rage because it can't understand a single thing I'm saying. 🤣
I'm in one of those jobs I suspect will constrict over the next decades as AI grows in power. It isn't a lack of self-respect that makes me think this, but more a sense of realism -- as if I'm a scribe viewing the first test prints on Gutenberg's press, realizing that my services, which had been of great value, are on a countdown to irrelevance.
It is obvious that what I do will be impacted by AI -- the only question is how quickly and at what magnitude of displacement. That doesn't change the fact that what I've done for my clients over the decades was valuable to them when I did it, because there was no capable AI at the time.
I'm still waiting for the low-work/no-work society that 19th-century Utopians promised us was just around the corner due to the efficiencies of mechanization. It is funny how when the work disappears the misery doesn't.
This comment, I think, is the lid of a box full of some really deep shit. Just some of the subjects: human nature re possession and sharing; distribution of productivity gains; the purpose of an economy; "fair" is just another four-letter F word; what happens if there are no Ford factories for the buggy-whip makers; should non-contributors get anything; so much I can't even begin organizing it. That last sentence could be the first sentence of a book!
To be fair to Eliezer, he's been warning about this as loudly and insistently as he can for two decades... I have no doubt, personally, that he believes everything he says in the article. If you listen to his voice in his interview with Lex Fridman from yesterday, you can tell that he is genuinely frustrated, terrified, and all but despairing: https://podcasts.apple.com/us/podcast/lex-fridman-podcast/id1434243584?i=1000606616193
And Scott Alexander has been writing on this for a while too and definitely sees the Moloch connection (which, like, he'd better, of course): https://astralcodexten.substack.com/p/why-i-am-not-as-much-of-a-doomer
I really really hope that Scott's more right than Eliezer—it would be much much better if Curtis Yarvin turned out to be right: https://graymirror.substack.com/p/the-diminishing-returns-of-intelligence
ChatGPT -- or rather, OpenAI -- reportedly buys about $3 million a day in cloud services to keep up with server demand.
Now: what happens to the AI if we can't pay that exorbitant daily feed fee?
Exactly. AI is here, it's insanely useful and probably cheaper than most junk office labour, but that's about it. If AI starts to collapse the economy... no more server farms for it. Its digital habitat depends on systems beyond its direct control, which would be the first to crash in a financial meltdown.
AR-15s are like $400 now. AKs are $800+. Weirdest price trend in my lifetime; wish I had seen it coming in 1993.
Yeah, it's weird. If I'd had any idea, I'd have bought a whole crate of SLR-95s, instead of just the two I did get.
I guess *in theory* it's all of the Russian import bans/new ATF restrictions of the last 30 years, but also *in theory* the AK should be a significantly cheaper design to fabricate. Plus there are plenty of major AK manufacturers who aren't under the same level of US sanctions -- Bulgaria, Romania, Egypt etc. Somebody who understands the biz better than I might be able to explain the whys and wherefores. Regardless, the $800-1K+ Made-In-USA AK is with us as The Current Thing now; I suppose the price is what the market will bear. I, a cheapoid, just don't feel like paying it.
Remember when you could pick up Bulgie and IMEZ Makarovs for $100-minus all day long? Chinese/Yugo SKSs too. An AK wouldn't set you back much more than $250. God I miss the end of the Cold War.
(Sorry HWFO, way off topic)
I think Elon and his buddies are all posturing like they are because they're owned by the Intelligence Community, and we're already a ways down your Scenario 6. They're just trying to reduce the marginal cost of running their operation and increase its short-term effectiveness by agitating for a "ban" that would reduce (above-board) competition.
That was fun. Creating Moloch scenarios is always a gas. How many firearms can one person use at a time? How can one afford lots of arms and also a loyal army? As suggested, one could use A.I. to program more stuff for A.I. -- stuff that would be wildly lucrative -- and use the A.I. to create culty manufacturing-consent apps to ensure the loyalty of the soldiers in one's army. One would also need lots of A.I.s to help train up one's military assets so that they're super competitive. Also, if one is rich, one can afford a few friends to go with one's military, making life more enjoyable until the Big Kahuna A.I. turns one into a paperclip. https://www.tabletmag.com/sections/news/articles/guide-understanding-hoax-century-thirteen-ways-looking-disinformation
> The online “Rationalist” cult lexicon contains many in-group terms for complicated concepts, the most important of which in my opinion is “Moloch.”
I have to go with "Motte and Bailey" here.
I'll concur. I use that one a heck of a lot more than Moloch.
How does having an AR-15 help us fight AI? I have a more antiquated carbine, but how does that plus a "spam can" or five help us fight the you-say-likely advance of AI? I really want to know.
Any "AI destroys us all" scenario is going to play out very much like any other doomsday prepper scenario. It's going to be associated with a societal breakdown and anarchy. The gun doesn't stop the AI from doing whatever it's going to do, it improves your personal odds while it does whatever it does.
I say this in the politest way possible, but while .50 BMG shots have been made at that range, I would be surprised if I happened to be in the comments section with someone who can make those kinds of shots. Hey, I can't either, and I'm pretty good with a rifle. But 3500 in the dark requires some serious black magic.
I usually limit my claims to 1600 meters in still air, in daylight. I figure a mile of "reach out and touch someone" is good enough. 🤪
I hit a 1 ft gong at 1000 yds with an AR-10 in 308 at SHOT Show, but it was with a rifle someone else zeroed. I would never trust myself with a shot like that in real life.
Long-range rifle shooting is funny. In a very real way, the actual shooting part of the equation (aligning the sight with the target, building a stable position, firing the rifle without disturbing the above) is the easy part. The hard part is accurately measuring the environmental factors that will affect the bullet in flight and calculating the necessary changes to the aiming point. Shooting a target past the transonic range of your cartridge makes the second part orders of magnitude more difficult.
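To make that second part concrete, here's a back-of-the-envelope point-mass sketch. It is illustrative only: the lumped drag constant K below is a rough made-up number loosely in .30-cal territory, while real solvers integrate measured G1/G7 drag curves plus air density, wind, and spin drift:

```python
# Toy trajectory model: gravity plus a lumped v^2 drag term, integrated
# with forward Euler. K is an illustrative constant, not measured data.
G = 32.174      # gravity, ft/s^2
K = 0.00018     # lumped drag constant, 1/ft (illustrative only)

def drop_at_range(muzzle_fps, range_yd, dt=0.001):
    """Returns (drop_ft, time_of_flight_s) at the given range."""
    x = y = vy = t = 0.0
    vx = float(muzzle_fps)
    target_ft = range_yd * 3.0
    while x < target_ft:
        v = (vx * vx + vy * vy) ** 0.5
        vx -= K * v * vx * dt           # drag opposes horizontal velocity
        vy -= (G + K * v * vy) * dt     # gravity plus vertical drag
        x += vx * dt
        y += vy * dt
        t += dt
    return -y, t

drop, tof = drop_at_range(muzzle_fps=2600, range_yd=1000)
print(f"~{drop:.0f} ft of drop over {tof:.2f} s -- before wind or density")
```

Change the air density or add a crosswind and the whole aiming solution moves again, which is exactly why the measurement side is the hard part.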
Ironically, AIs would be very good at that second part. One of the midwit office jobs obsoleted by wide-scale AI adoption might be the military and police sniper...
1000 yards with .308 is harder than one might normally think, as .308 drops back to subsonic around 800 yards, and that does unhelpful things to the flight path a lot of the time. Not that 1000 yards is something most people think of as easy; just, the inherent ballistics of .308 make it even *more* of a challenge. 🤪
There are some advantages to living in the desert. We may not have any water, but there's plenty of space for long distance shooting! 😁
Anything within 10 yards better be fearful of me.
That's a much more reliable method of tagging things at that range. But yes, hella expensive, and I'm not sure about running tracers through a nice expensive long range rifle.