Over the last 15 months HWFO has slowly but surely migrated the best stuff from the old Medium publication over to Substack. I would not characterize this article as “best stuff,” mostly because it’s So. God. Damn. Long. Like an hour read. Won’t fit in an email. But it’s best to relay in full and might be fun for the nerds in the back. It appears Jordan and I may cross paths on another podcast in a few weeks, so it seemed a good time to migrate it.
In November of last year, I entered into a very long discussion with Jordan Hall, of DIVX and Game B fame. The dialog ranged across multiple platforms, and ended with a podcast. There are a lot of people who followed bits and pieces of it from here to there, and I thought it might be nice to consolidate it. It all flowed from this HWFO article, which would be good to read as a primer:
And the majority of the juice was kept on Medium. We begin with Jordan’s response to what I’m calling “Game Ant.”
Jordan:
Broadly speaking, we seem to be in alignment. You end your “We are all apes behaving like ants” post with “the third option is to quit playing the ant game. If you can figure out how we can do that, let us ants know”. This, of course, lines up nicely with my “we need to quit playing Game A and move to some quite different game -> game~b”.
So let’s dig into the specifics and see where the conversation gets richer.
The first place that I find my attention focused is on the differences between what we (humans) are doing and what our friends the ants are doing. When you say “we are apes behaving like ants,” I want to shift this to “we are humans behaving a lot like ants” and focus on the word “behaving”. My sense is that there is a real distinction between humans and other primates; that this distinction began to coalesce on the order of a million years ago; and that we can point to the most relevant aspect of it thus: we invented culture. Which is to say that we landed on the niche “general purpose hardware running special purpose software to determine behavior”. [My guess is that none of this is controversial.]
Ants *are* Game Ant. To be an ant and to play Game Ant (behave in the Game Ant manner) are the same thing. Importantly, while Game Ant can ‘evolve,’ it evolves in biological time. Thus, the ants settled on this winning game 50 or so million years ago and have been sitting there with relatively small modifications ever since. This coupling of being and behavior is true also of apes (broadly speaking).
By contrast, for us humans, our being and our behavior are (meaningfully) decoupled. Humans are simultaneously evolving our “hardware” in ‘biological time’ (relatively unchanged for the past 100,000 years) and evolving our “software” in ‘culture time’ (changing a whole lot over the past 100,000 years). Hence, while the ants, as amazing as they are, have been sitting more or less unchanged for tens of millions of years, we’ve gone from cracking acorns to cracking atoms in about 25,000 years. This is why it is possible to say that we are ‘humans behaving a lot like ants’. Our biological hardware can run a lot of different kinds of cultural software. One of these (which I call Game A) is awfully close to the biologically bound behavior strategy of social insects (which you point to with Game Ant).
However, Human is different from Ant in a few ways that really matter.
First is that the Human hardware comes pre-installed with a “firmware” that I sometimes call “Game Zero” or “Game Dunbar”. This is the “baseline proto-culture” that is always running underneath any given culture. Things like “language in general” are part of this firmware. Which is to say that you don’t have to teach a kid language — we are hard-wired to develop language. If a child simply hangs around adults with language, they will pick it up “naturally”. The specific language we learn is dependent on the languages we are exposed to but “language in general” is part of the “Human” firmware. A whole lot of stuff lives here and it is always running, particularly in the depths of our “relevance realization”. This is why, for example, we always find kinship “clans” inside of any given “Game A” society.
This firmware is important because it means that anytime that we suffer a “software reboot” (i.e., civilization collapse), we can (and do) always reorganize around / by means of the firmware (and whatever cultural elements happen to survive the collapse). This is a big deal when you compare it to Game Ant. For Game Ant to suffer a “reboot” would mean something catastrophic at the level of the genome. Because we are running most of our “evolutionary” exploration in software, we can fail a lot and not suffer existential consequence (at least not until recently). So when we look at the historical record, the collapse of the Bronze Age hurt the actual humans running “Hittite Culture” pretty bad, but was *terminal* for Hittite Culture itself. The humans had to endure a hard reboot. Nasty but not existential.
Second is that cultural transmission works more like bacterial genetics than like multi-cellular genetics. Any favorable adaptations developed by ant species A are limited to the lineage of species A. Species B is unable to “learn” those adaptations. By contrast, the Akkadians were able to absorb massive amounts of cultural material from the Sumerians and the Goths were able to absorb big chunks from the Romans. Among many other things, this means that when a given culture suffers a reboot the hard won learnings of those cultures need not necessarily go away (they might be absorbed by neighboring cultures or even relearned by subsequent cultures via artifacts).
It also means that subject to certain constraints, the entire population of Game Human is in a semi-collaborative exploration of the possibility of culture in general. When one culture “invents” atomic weapons, all other cultures “get” this invention — even when the inventing culture really, really doesn’t want them to.
Third, there is a big P / NP thing here. Discovering some big new innovation (like say writing or arithmetic) can be very hard. But, once it has been discovered, copying it can be much much easier. It is very hard to invent calculus, it is relatively easy to teach it (and it is even relatively easy to steal or re-discover it via artifacts). This means that there is an “innovation ratchet”. Every generation gets to more or less “stand on the shoulders of giants” by using “learning” to absorb the innovations of the past and then spend their expensive “exploration” time on pushing the frontier.
Combined, these differences make Game Human an innovation machine. Firmware reboot allows us to play with culture looking for new innovations with relatively high “risk seeking” bias. Horizontal transmission means that all new innovations will “percolate out” into the larger set of all cultures. And the innovation ratchet means that every generation will tend to get the possibility space of “the adjacent possible” for free and spend its time looking for new ways to expand that space.
The second place that I find my attention focused is on the Nash Equilibrium of war. I’d like to expand on this area of inquiry before diving in.
More or less, any given Game A society is faced with three distinct challenges that must be resolved at all times. First, it must maintain its “ecological” integrity. Which is to say that it must maintain the capacity to extract from its environment everything that is necessary to maintain the embodiment of that particular society (embodiment here includes both the physical bodies of the humans involved in the society and the cultural artifacts involved in the society). This challenge is true for any given Ant colony, although the “innovation machine” of human culture makes the specifics differ meaningfully. Most notably, the rate of change in ants tends to be roughly symmetrical with the rate of change in their overall niche, and, therefore, ants can and often do come to some level of homeostatic equilibrium with their environment. By contrast, the innovation machine of Humanity means that the rate of change in humans can become enormously asymmetric to our overall niche. And, therefore, humans can come (and are coming) far out of equilibrium with our environment. Tainter’s exploration of the Collapse of Complex Societies covers a lot of ground here.
Second, any given Game A society must maintain its “sub-cultural” integrity. It needs to maintain the integrity of ‘the whole’ against various fractures and defections by parts (often sub-cultures or deeper structures built around “human scale” like clans). The defection rate of “drones” on “nests” is roughly equivalent to the defection rate of human cells on human bodies. Evolution has spent a lot of time solving for the solution to individual/group equilibrium in the context of ants (and apes). Game Zero (Game Dunbar) is also highly capable of maintaining “integrity” against individual defection, but it can’t scale beyond about 150 people. Evolution was only able to get this far using human hardware. Game A is running software to deal with the constant pressure of sub-cultures (many of which are bound to “Dunbar level” clans) defecting against the larger Game A society. But, so far at least, no Game A society has managed to develop an “Integrity system” in software that comes close to the strength of the hardware solution of Game Ant. And, given that there hasn’t been a Game A society in history that hasn’t developed some significant hierarchical inequality, there is a constant tension between the “winners” of the particular game and the “losers” of that particular game.
Third, every given Game A society must maintain its integrity vis a vis all other Game A societies. At last we arrive at the Nash Equilibrium of war. And this piece is crucial. Game Dunbar had to deal with the first two issues (and did so well). But it was precisely the challenge of the limits to Dunbar (scalability of the interior) that called for Game A as a solution. When the only way to deal with a clan getting bigger than 150 people is to split into two clans and expand our territory, what do we do when we run out of room? We invent a new approach to social organization that consumes Dunbar scale clans as its primary food source. Thus began that Nash Equilibrium of war. Throughout history there have been broad times of peace — but always against the background context of (potential) war. This is particularly clear when you reflect on the fact that in culture war, physical violence is only one means of waging war. If you can get your neighbor to adopt your enculturation (for example watch your TV shows and build malls just like yours), then you really don’t need to wage “physical war”.
Now, in this context, any given Game A society is faced with a conundrum more or less captured in your conservative/liberal dichotomy (but expanded with these three constraints). Any innovation/change presents risk of disruption, particularly in the second category. But not enough innovation presents the risk of being out-competed (possibly from the outside by a “competitive culture” or from the inside by an “insurgent sub-culture” or by reaching the limits to nature within your existing technical capability). Shift from landed feudalism to early capitalism and you get those increasingly rich merchants clamoring for political power. Fail to make that shift and find your borders prey to the increasingly rich (and more populous) neighbors who did.
So Game A is a function that is seeking a fine balance around the disruptive capacity of the Game Human innovation machine.
This, then, finally gets us into the interesting part. The Nash Equilibrium of war isn’t an equilibrium! It is a stumble — in a general direction. The evolution of Game A has been a series of steps along a path paved by the question “how do I structure myself to gain maximum advantage from innovation whilst maintaining the integrity of my society’s relationship with nature and the hierarchy of our social structure”? By and large, each step along this path was taken reluctantly and often as a consequence of some kind of crisis (frequently active or threatened war). But once one player took a relatively stable step, more or less everyone else had to take that or an equivalent step. Thus dragging the whole of humanity along the story of history into our current state.
BJ:
First off, I want to say how pleased I am that we appear to be speaking almost exactly the same language, and that your analysis seems to line up very similarly with mine from the last half decade. I’m still curious if you’ve landed on the solution set I have, but I don’t want to jump that far ahead for fear of scaring you off.
we can point to the most relevant aspect of it thus: we invented culture. Which is to say that we landed on the niche “general purpose hardware running special purpose software to determine behavior”. [My guess is that none of this is controversial.]
Yep, agreed. Been with you on that for a while.
Ants *are* Game Ant. To be an ant and to play Game Ant (behave in the Game Ant manner) are the same thing. Importantly, while Game Ant can ‘evolve,’ it evolves in biological time. Thus, the ants settled on this winning game 50 or so million years ago and have been sitting there with relatively small modifications ever since. This coupling of being and behavior is true also of apes (broadly speaking).
Yep, agreed, etc.
By contrast, for us humans, our being and our behavior are (meaningfully) decoupled. Humans are simultaneously evolving our “hardware” in ‘biological time’ (relatively unchanged for the past 100,000 years) and evolving our “software” in ‘culture time’ (changing a whole lot over the past 100,000 years).
Yep, but I’d like to add one thing to that statement that I don’t think is covered enough, and is contrary to some stuff I’ve seen Bret Weinstein say. Our two parallel evolutionary tracks are cross-connected in a way many people don’t realize. Follow me here.
Ant behavior is a bucket that is 100% nature. Human behavior is a bucket that is (let’s pretend for the sake of discussion) 50% nature 50% nurture. But what makes humans successful is our ability to run software. The software is Evolutionary Track 2, but the ability to run that software is a hardware thing. So if there are any biological groups of humans more capable of running software, they succeed over the ones that don’t, and tend to rub those ones out. We might say that the last 10,000 years of human evolution has been about the humans with the bigger and more accessible hard drives outcompeting the ones with smaller and less accessible ones. In particular, the humans capable of easily adopting indoctrinated principles into their brains (what you’ve called “enculturation” elsewhere) outcompete the ones who can’t, because the playing field for competition is no longer about killing mastodons, it’s about working in groups.
So when Pinker and such talk about how our brains are wired for religion, this is the evolutionary path that has led to this wiring. And I’m going to go out on a limb here and say our brains are also wired for war and genocide, which is something I’ve heard Bret Weinstein and Heather Heying mention in podcasts before. War and genocide, at a root level, function just like breeding programs. A culture of war kills off a culture of peace, and a culture with more available white space in their heads kills off a culture with less available white space. And that 50%/50% balance between nature and nurture in human brains is slowly and continually shifting towards nurture with this mode.
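Here's a toy simulation of that breeding-program claim (entirely my own construction; every name and number in it is arbitrary, picked for illustration). Bands compete, the band whose members are biologically better at running cultural software wins, and the winners' children fill the losers' slots:

```python
import random

# Toy model of the cross-connection. "Enculturability" is a heritable
# hardware trait: how well a brain absorbs indoctrinated software.
# Bands fight in pairs, the band with the higher total trait wins
# (the playing field is working in groups, not killing mastodons),
# and the winners' offspring fill the losers' slots.

random.seed(42)
BAND = 10
population = [random.gauss(0.5, 0.1) for _ in range(1000)]

def mean(xs):
    return sum(xs) / len(xs)

print(f"generation   0: mean trait = {mean(population):.2f}")

for generation in range(100):
    random.shuffle(population)
    bands = [population[i:i + BAND] for i in range(0, len(population), BAND)]
    survivors = []
    for a, b in zip(bands[::2], bands[1::2]):
        winner = a if sum(a) > sum(b) else b
        survivors.extend(winner + winner)  # winners take the losers' space
    # Children inherit the parent's trait, plus a little mutation.
    population = [random.gauss(p, 0.02) for p in survivors]

print(f"generation 100: mean trait = {mean(population):.2f}")
```

Run it and the heritable trait ratchets upward, generation over generation. It proves nothing about real-world magnitudes; it just shows the direction of the arrow when culture is the selection filter.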
Bret had a big twitter thread on how “biology is upstream from culture” a while back, and I simply think that’s currently wrong. While it was certainly true at the beginning, I think today they’re both very tangled up. Everybody loves talking about Hitler, and Bret got semi-famous by pointing out that Hitler’s actions were rational from a genetics point of view. Well, that’s an obvious example of culture being upstream of biology, because he was leveraging cultural effects to make massive changes in the human gene pool.
The fact that there are cross connections between these two parallel evolutionary paths is something that must be on Game B’s radar, because those Game A effects (war, genocide) aren’t just a product of cultural Darwinism, they’re probably also a product of genetic Darwinism. You’re fighting Darwin. You’re going to need a sharp knife for that fight.
This ties in with your next passage:
However, Human is different from Ant in a few ways that really matter.
First is that the Human hardware comes pre-installed with a “firmware” that I sometimes call “Game Zero” or “Game Dunbar”. This is the “baseline proto-culture” that is always running underneath any given culture. Things like “language in general” are part of this firmware. Which is to say that you don’t have to teach a kid language — we are hard-wired to develop language.
I haven’t seen this breakdown before; I’ve been calling it all Game Ant. But I think it’s worth looking at this in some detail. By my analysis, Game Ant is partly firmware, partly software. You seem to be making a distinction, so let’s run with your terms.
I think there’s a good case that genocide and war may need to be reclassified from Game A to Game Dunbar, at least in part. They probably didn’t use to be, but the fact that war and genocide are cultural things that can affect the gene pool means they’ve had probably around 10,000 years to bake themselves into the firmware. You’re going to need a sharp knife.
This firmware is important because it means that anytime that we suffer a “software reboot” (i.e., civilization collapse), we can (and do) always reorganize around / by means of the firmware (and whatever cultural elements happen to survive the collapse). This is a big deal when you compare it to Game Ant. For Game Ant to suffer a “reboot” would mean something catastrophic at the level of the genome. Because we are running most of our “evolutionary” exploration in software, we can fail a lot and not suffer existential consequence (at least not until recently). So when we look at the historical record, the collapse of the Bronze Age hurt the actual humans running “Hittite Culture” pretty bad, but was *terminal* for Hittite Culture itself. The humans had to endure a hard reboot. Nasty but not existential.
I buy all that, but you can use that as a controlled experiment for what counts as Game A and Game Dunbar. War and genocide survived, while bronze manufacturing largely didn’t.
So did war and genocide reappear because they’re Game Dunbar, or did they reappear because they’re Nash Equilibria in the Prisoner’s Dilemma of resource harvesting in groups so Game A tuned itself for these to reemerge? Or both? I tend to think a bit of both.
Among many other things, this means that when a given culture suffers a reboot the hard won learnings of those cultures need not necessarily go away (they might be absorbed by neighboring cultures or even relearned by subsequent cultures via artifacts).
Sure, if those learnings are useful. I don’t think we’ve got much by way of Aztec culture floating around in our memeset right now. So in that way, hard reboots are actually quite useful in shaking out the bad code.
Combined, these differences make Game Human an innovation machine. Firmware reboot allows us to play with culture looking for new innovations with relatively high “risk seeking” bias. Horizontal transmission means that all new innovations will “percolate out” into the larger set of all cultures. And the innovation ratchet means that every generation will tend to get the possibility space of “the adjacent possible” for free and spend its time looking for new ways to expand that space.
Here’s a horrifying but necessary question: are hard reboots within the Game B tool set? I’m not a fan of this tool, but you do need a sharp knife.
I appreciate your outlay of Game B, and am enjoying it because it mirrors quite a lot of my own thoughts, but I’m to the point over here in my thought experiments where I’m knife shopping.
Second, any given Game A society must maintain its “sub-cultural” integrity. It needs to maintain the integrity of ‘the whole’ against various fractures and defections by parts (often sub-cultures or deeper structures built around “human scale” like clans). The defection rate of “drones” on “nests” is roughly equivalent to the defection rate of human cells on human bodies. Evolution has spent a lot of time solving for the solution to individual/group equilibrium in the context of ants (and apes). Game Zero (Game Dunbar) is also highly capable of maintaining “integrity” against individual defection, but it can’t scale beyond about 150 people. Evolution was only able to get this far using human hardware. Game A is running software to deal with the constant pressure of sub-cultures (many of which are bound to “Dunbar level” clans) defecting against the larger Game A society. But, so far at least, no Game A society has managed to develop an “Integrity system” in software that comes close to the strength of the hardware solution of Game Ant.
I want to step sideways and present to you the Boogeyman.
The Boogeyman that Game B must watch for is the potential that a modern culture solves the Dunbar problem and better emulates Game Ant. This would be disastrous for anyone who thinks in terms of Game B, but it is the very thing the Game A evolutionary track is selecting for right now. And I think China’s getting close to nailing it, with internet control, media control, social credit system, leveraging capitalist tendrils to force international cultural alignment, etc. Google is literally helping them refine Game A, in the same way Watson of IBM helped Hitler sort Jews onto rail cars.
Beating Game A is going to be hard enough. We may have a short clock to beat it before Game A evolves into something even tougher to beat.
Third, every given Game A society must maintain its integrity vis a vis all other Game A societies. At last we arrive at the Nash Equilibrium of war. And this piece is crucial. Game Dunbar had to deal with the first two issues (and did so well). But it was precisely the challenge of the limits to Dunbar (scalability of the interior) that called for Game A as a solution. When the only way to deal with a clan getting bigger than 150 people is to split into two clans and expand our territory, what do we do when we run out of room? We invent a new approach to social organization that consumes Dunbar scale clans as its primary food source. Thus began that Nash Equilibrium of war. Throughout history there have been broad times of peace — but always against the background context of (potential) war. This is particularly clear when you reflect on the fact that in culture war, physical violence is only one means of waging war. If you can get your neighbor to adopt your enculturation (for example watch your TV shows and build malls just like yours), then you really don’t need to wage “physical war”.
I think the first half of this is sloppy, but I’m okay with the second half by way of semantics.
War is simpler than you’re making it out to be. War is just what tribes of any size (above or below the Dunbar limit) do when they bump into another tribe in a resource-scarce environment. Peace, generally speaking, happens when there are enough resources. War is just the Nash Equilibrium for (tribes)+(resource scarcity), and happens at all scales. This is why we see it in the ants. There’s no need to go any deeper than Malthus here. Ethnic genocide operates on exactly the same mode, but within a national border. (resource scarcity neurons activate)>(genocide ensues).
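Since I'm leaning hard on the game theory, it's worth checking that claim against actual payoffs. Here's a back-of-the-napkin model (my numbers, purely illustrative): tribes need a minimum number of apples to survive the winter, fighting is costly, and war between two warring tribes is a winner-take-all coin flip. A brute-force search for Nash equilibria behaves the way Malthus says it should:

```python
from itertools import product

NEED = 1.0        # apples a tribe must eat to survive the winter
FIGHT_COST = 2.0  # dead warriors, burned granaries
STARVE = -10.0    # penalty for coming in under NEED
MOVES = ("peace", "war")

def outcome(apples, fought):
    pay = apples - (FIGHT_COST if fought else 0.0)
    return pay + (STARVE if apples < NEED else 0.0)

def payoff(me, them, r):
    """Expected payoff for my tribe, given both moves and r apples total."""
    if me == "peace" and them == "peace":
        return outcome(r / 2, fought=False)      # split the harvest
    if me == "war" and them == "peace":
        return outcome(r, fought=True)           # take everything
    if me == "peace" and them == "war":
        return outcome(0.0, fought=False)        # lose everything
    return 0.5 * outcome(r, True) + 0.5 * outcome(0.0, True)  # coin-flip war

def nash_equilibria(r):
    eqs = []
    for a, b in product(MOVES, MOVES):
        a_ok = all(payoff(a, b, r) >= payoff(x, b, r) for x in MOVES)
        b_ok = all(payoff(b, a, r) >= payoff(x, a, r) for x in MOVES)
        if a_ok and b_ok:
            eqs.append((a, b))
    return eqs

print("one apple:   ", nash_equilibria(1.0))  # only ('war', 'war')
print("three apples:", nash_equilibria(3.0))  # peace appears, war/war remains
```

Note the three-apple case: peace shows up as an equilibrium once there's enough to go around, but (war, war) never stops being one. That's the "background context of (potential) war" from earlier, in miniature.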
The second half, the idea that culture war can be waged by hijacking and displacing memes, is important. Is this a knife in Game B’s knife block? I’ll add it to the potential list, alongside hard reboots and literal war.
This, then, finally gets us into the interesting part. The Nash Equilibrium of war isn’t an equilibrium! It is a stumble — in a general direction. The evolution of Game A has been a series of steps along a path paved by the question “how do I structure myself to gain maximum advantage from innovation whilst maintaining the integrity of my society’s relationship with nature and the hierarchy of our social structure”? By and large, each step along this path was taken reluctantly and often as a consequence of some kind of crisis (frequently active or threatened war). But once one player took a relatively stable step, more or less everyone else had to take that or an equivalent step. Thus dragging the whole of humanity along the story of history into our current state.
Maybe I just don’t get your perspective, and maybe it’s worth unpacking further, but I really just don’t buy this one lick. War isn’t a stumble, it is a crucial element of Game A. It is a blatant Nash Equilibrium because if my culture doesn’t do it and your culture does, my culture gets rubbed out. It has a genetic cross connection because my genes also get rubbed out and the resources I used to be using to propagate my genes are redirected into the propagation of your genes. (this is what “life” is by the way, a giant soup of DNA consuming other DNA) War exists both on the Game A layer and the Game Zero layer, and the longer we go, the more it gets baked into our firmware. Every war bakes it into the firmware further, by one bundle of humans with a better mental capability of waging it erasing the genes of another bunch of humans who aren’t as capable of waging it.
This cannot be disregarded. Fixing this requires a very, very, very sharp knife. We can continue to talk about war if you like, and hash out our disagreements about how fundamental it is to not only Game A, but to both of our evolutionary tracks themselves, but mostly I showed up in the Game B group to talk specifically about our toolkit.
I’m going to grab some other snippets from the Facebook stuff and drop them here, so they’re all in one place, which are more toolkit related.
I said elsewhere…
“ don’t understand how this would be possible without a successful global revision in enculturation.
Which is another way of saying you have to win a culture war against every other culture on the entire planet.
Probably mostly by infiltrating and coopting them, piggybacking GameB enculturation into the existing cultural network in the same way the Christians piggybacked on Euro Paganism.
Is my analysis wrong?”
You responded…
It certainly looks mostly right. Let’s test. My sense is that the key insight is that gameB must relate to Game A cultures, not as “another culture” on the same playing field, but must represent another playing field altogether. Much like “multi-cellularity” is a “portal pathway” on the landscape of “single cell” and doesn’t compete with any given single cell but with the entire niche “single cell”.
gameb doesn’t wage culture war on other cultures; gameb wages war on “culture war” as a category.
gameb represents a “(much) higher fitness peak” on the landscape and our job is to provide a legible path for existing Game A players (and their cultures) into increasingly symbiotic relationship with that peak. Some might really transform into things like somatic cells. Others might be more like the gut biome — still entirely autonomous single cells but so connected to their symbiotic context as to be continuous with it.
I’m going to be blunt. That sounds great and all, but we’re dealing with Walmart shoppers here.
I’d like to draw a parallel to Sam Harris. Sam is this famously rigorous atheist, who maintains that mankind doesn’t need religion because they can all just do like he did, and spend a night on a canoe in Nepal dosing DMT and contemplating secular humanism and the fundamental interconnectedness of nature and thought. He completely fails to understand that while that may have worked for him, that solution is not scalable.
If Game B is going to be a thing at all, that thing has to (A) be better than Game A at all the things that Game A does, holistically speaking, at every step along the evolutionary path as Jim Rutt pointed out, and also (B) spread to the majority of mankind, which includes the Walmart shoppers.
I like your idea of “waging war on culture war as a category,” because it has a nice ring to it, but the mode of that must necessarily flow from one of the three tools in our toolkit:
Hard Reboot (aka “Vote for Giant Meteor 2020”)
Literal War
Culture War (defined above as the hijacking of memes and viral enculturation)
Are there any other tools available? My most recent piece, The Human Tool (which sparked our social media dialogue that landed us here), was specifically trying to unpack the third tool in the toolkit:
…because I’m not a fan of the first two.
I have deep ideas on how to use the third tool. Whether we call that “culture war” or we call it “waging war on culture war by using the modes of culture war” is in the end, I think, semantic. If our goal is to hijack the dominant memes and spread Game B virally, then we need to become very good at understanding how culture war works, and become better players of it than the current cultures waging it today. Once we admit that’s the program, then we can shift directly into strategy and tactics. Sharpen our knife. Pick generals. Develop weapons. Establish fronts. Which is all perhaps scary language, but I think it works. If we want to rebrand the language I’m ok with that.
I have ideas about how to do all of this, but I don’t want to share them until we handshake over the nature of the project.
But I will say as someone who walks in prepper circles, the idea of posturing Game B within the fabric that might emerge from a disaster (hard reboot) is an intriguing idea I hadn’t considered until today. I’m going to have to mull that over. It’s a dangerous path, though, because one misstep on that path leads you to the New Zealand Shooter Manifesto, or worse.
Unpacking this stuff is dangerous, and my read on the Game B community is they largely don’t understand the dangers.
Jordan:
So it is to be war between us. Very well, war is my native tongue. ;)
We are going to need to go far beyond a merely sharp knife. We are going to need a Vorpal blade. Let’s dive in and see what happens.
Genocide is deeper than Game A. It comes from the primal forces of genetics that seek only to dominate all spellings. But even brute nature knows that actively spending the time and energy to erase a rival spelling is a crude way to achieve domination. Time and entropy will slaughter all. If you seek to win the evolution game, the simple rule is: if in doubt, my spelling wins. Otherwise, mate, eat and keep an eye out. If there are three apples and I need one and you need one, fighting would be an extinction choice. If there are two apples we might eye each other and test for the larger apple, but no more. When there is one apple — now it is time for slaughter. Genocide is part of the code. But it is a dependent part of a deeper mandate, the complex totality of which is War.
Game A rarely prosecutes genocide in the crudest sense. Domination, yes. Always domination in some fashion. If in doubt, my spelling wins. But far more useful to enslave (however subtly) than to slaughter. Even more useful to enlist.
But here is where we come to my proposition that the Nash Equilibrium of War is a stumble. I certainly don’t mean to say that “War” is a chance or random element of Game A. Far from it. The opposite in fact. What I am saying is that the presence and dynamics of War in the context of a species that is an innovation machine *forces* a very specific (though meandering and largely unwilling) directionality to the evolution of culture. A practical teleology of Power.
It is a stumble because of the three distinct tensions I mentioned earlier (integrity of society vis a vis nature, itself, and other societies). The conservative impulse knows that change is disruptive and risky. This is particularly true for those at the top of the ‘internal’ domination hierarchy in a society that is itself at the top of the ‘external’ domination hierarchy (Egyptian aristocrats in 1200 BC, Roman aristocrats in 200 AD, British aristocrats in 1600 and 1900, American aristocrats right now). If in doubt, so long as nothing changes, their spelling wins. But the asymmetry of innovation *requires* change. And not just change in a random direction. Change towards ever greater Power. Ever greater capacity to work your will on the world.
Pity the conservative impulse. Innovation stands on the shoulders of giants. It is a curve, not a straight line. This is because in the game of Power innovation in ‘what’ is far less Powerful than innovation in ‘how’. And that is far less powerful than innovation in ‘who’. Culture eats strategy for lunch. Moving from catapults to cannons is one thing and profoundly shifts the Feudal equilibrium. But once the meta-game of ‘innovation on innovation’ sometimes known as Capitalism gets going, Feudalism as a whole is done. Now the conservative impulse is in trouble (and in the West at least there have been no true conservatives for several hundred years). We are no longer using Power to compete on the landscape of spellings. We are using an increasingly conscious intent to compete on the landscape of Power. Acceleration. This dynamic is self-ramifying. No matter how much ground you gained in the last war, in a landscape of accelerating power, if you don’t run faster than your neighbors, the Barbarians will be breaking down the gates in a few decades. Or years. This process results in increasingly heedless acceleration into increasing Power (see the current AI race). And that, of course, is inexorably self-terminating.
But herein lies the rub. Each move in the accelerating game of Power *must* be a move towards a “who” that is more capable of a “how” that is more capable of a “what”. And here is the thing that we have known since the beginning of our species: collaboration is always more innovative than domination. In many ways, the essence of Game A has been the myriad efforts of domination to maximize the innovative capacity of collaboration while maintaining the context of domination. Hence its increasing subtlety and ‘softening’. From Egyptian slavery to Feudal serfdom to the Liberal Republic to the modern welfare state. game~b is simply the result of conscious inquiry into the fundamentals of this dynamic and a realization that if you can remove the constraint of ‘conservation of domination,’ if you can move from the anti-rivalrous always in service to the rivalrous to the rivalrous always in service to the anti-rivalrous, you simultaneously innovate at a level that Game A can’t possibly (structurally) achieve and you get off the road to self-termination.
And this shift is simple. It involves moving from “spellings” to “values”. If in doubt, my spellings win is fundamentally rivalrous. If in doubt, my values win is fundamentally anti-rivalrous.
BJ:
We are going to need to go far beyond a merely sharp knife. We are going to need a Vorpal blade. Let’s dive in and see what happens.
Genocide is deeper than Game A. (etc)
We definitely see eye to eye on genocide generally speaking, but I’m not convinced it’s deeper than Game A.
I think it’s important to note that genocide does show up in the ants (the goal of ant war is to kill the babies) but it doesn’t show up very many other places in nature, as far as I’m aware. Just the social insects. Possibly in how dolphins treat sharks, although that might be debatable. Perhaps in lion prides?
The ants are the most successful mobile organism, and genocide is part of their plan for the reasons you lay out.
Do monkeys practice genocide? I don’t know the answer to this question. If humans are unique in the primate kingdom in our practice of it, then that would lead me to believe that genocide wasn’t originally baked into our hard code, but rather it exists within Game A enculturation and it appeared there because the enculturation evolutionary track is fast. Whichever culture socially incorporated that bit of ant code first would dominate until that bit of ant code was found in all the surviving cultures.
If genocide is deeper than Game A, then is there anything Game B can do about it?
Game A rarely prosecutes genocide in the crudest sense. Domination, yes. Always domination in some fashion. If in doubt, my spelling wins. But far more useful to enslave (however subtly) than to slaughter. Even more useful to enlist.
Sure. Ants do the slavery thing too.
https://en.wikipedia.org/wiki/Slave-making_ant
I found the second half of your response a bit hard to follow. Instead of trying to respond to it, I’d like to quote it, and interpret it, and for you to tell me if I’m interpreting it correctly.
But here is where we come to my proposition that the Nash Equilibrium of War is a stumble. I certainly don’t mean to say that “War” is a chance or random element of Game A. Far from it. The opposite in fact. What I am saying is that the presence and dynamics of War in the context of a species that is an innovation machine *forces* a very specific (though meandering and largely unwilling) directionality to the evolution of culture. A practical teleology of Power.
Sounds to me like:
Humans as a species are defined by their capacity for innovation, and the ones who innovate the best run out the ones who innovate less. Since war is a way to rub people out and win the cultural Darwinism game, war becomes the most important thing in which to innovate.
But I’m not sure that’s what you meant at all.
It is a stumble because of the three distinct tensions I mentioned earlier (integrity of society vis a vis nature, itself, and other societies). The conservative impulse knows that change is disruptive and risky. This is particularly true for those at the top of the ‘internal’ domination hierarchy in a society that is itself at the top of the ‘external’ domination hierarchy (Egyptian aristocrats in 1200 BC, Roman aristocrats in 200 AD, British aristocrats in 1600 and 1900, American aristocrats right now). If in doubt, so long as nothing changes, their spelling wins. But the asymmetry of innovation *requires* change. And not just change in a random direction. Change towards ever greater Power. Ever greater capacity to work your will on the world.
Sounds to me like:
People at the top of social hierarchies are scared of change because the change might displace them. But change is essential to keep from getting rubbed out, culturally speaking.
Which I buy, but I’d like to throw in some important qualifiers. There are people at the bottom of pyramids who are change averse as well, because change is just as likely to screw them over as it is to screw people over at the top. You mention AI, which in modern times invokes fear of automation, but automation has been screwing folks at the bottom forever, and this gets lost in the Silicon Valley cocktail party talk. I bang on that a bit at the end of this:
I also feel the need to point out that the conservative impulse isn’t just about preservation of one’s place within an existing hierarchy. It’s also about preserving a known cultural fitness, in a struggle against other cultures with known cultural fitnesses. I cover this towards the end of “Ants” way up top. If you tinker too much with your car engine and the race car one lane over doesn’t, you might lose in the short term and not get the opportunity to race again. Possibly in dramatic fashion, during something like a war or a genocide.
Pity the conservative impulse. Innovation stands on the shoulders of giants. It is a curve, not a straight line. This is because in the game of Power innovation in ‘what’ is far less Powerful than innovation in ‘how’. And that is far less powerful than innovation in ‘who’. Culture eats strategy for lunch.
I’ll be honest, I’m not sure I understand what you’re saying here at all, and I need you to help me through it. Seems to me that all innovation is innovation in how. Webster says an innovation is a “new idea, method, or device,” and those are all basically things to achieve tasks.
Culture and strategy have each eaten each other’s lunches in the past, historically, so I don’t see how one obviously dominates the other. In fact, strategies to infect or affect cultures are quite common, and very often effective.
Moving from catapults to cannons is one thing and profoundly shifts the Feudal equilibrium. But once the meta-game of ‘innovation on innovation’ sometimes known as Capitalism gets going, Feudalism as a whole is done.
The pathway by which transcendent innovation kills feudalism is not clear to me. I think a decent argument could be made that we’re not far past feudalism now, and that the wealth concentrations we’re seeing develop within those who wield automation (via innovation) are moving us closer to feudalism, not further from it.
There’s a pretty easy argument to be made that we’re basically running the “land owners, serfs, and peasants” program right now, just with a wider definition of what counts as land. That’s certainly how the communists see it, and while I’m not a communist, some of their baseline analysis is pretty sound.
Now the conservative impulse is in trouble (and in the West at least there have been no true conservatives for several hundred years). We are no longer using Power to compete on the landscape of spellings. We are using an increasingly conscious intent to compete on the landscape of Power. Acceleration.
This is self evident to you, but not to me. Can you give me a real world example of what you mean here, outside of the AI race?
Do you mean, for instance, Facebook or Twitter or China wielding technology to curate the media feeds people see, to concentrate power in places they prefer?
This dynamic is self-ramifying. No matter how much ground you gained in the last war, in a landscape of accelerating power, if you don’t run faster than your neighbors, the Barbarians will be breaking down the gates in a few decades. Or years. This process results in increasingly heedless acceleration into increasing Power (see the current AI race). And that, of course, is inexorably self-terminating.
I’m not getting this either. I need you to apply this “inexorably self-terminating” prediction to an example of “Power to compete on the landscape of Power,” to show me how you think this is going to happen.
But herein lies the rub. Each move in the accelerating game of Power *must* be a move towards a “who” that is more capable of a “how” that is more capable of a “what”. And here is the thing that we have known since the beginning of our species: collaboration is always more innovative than domination. In many ways, the essence of Game A has been the myriad efforts of domination to maximize the innovative capacity of collaboration while maintaining the context of domination. Hence its increasing subtlety and ‘softening’. From Egyptian slavery to Feudal serfdom to the Liberal Republic to the modern welfare state. game~b is simply the result of conscious inquiry into the fundamentals of this dynamic and a realization that if you can remove the constraint of ‘conservation of domination,’ if you can move from the anti-rivalrous always in service to the rivalrous to the rivalrous always in service to the anti-rivalrous, you simultaneously innovate at a level that Game A can’t possibly (structurally) achieve and you get off the road to self-termination.
Let’s run with this for a second.
Tribe A: anti-rivalrous in service to rivalrous
Tribe B: rivalrous in service to anti-rivalrous
In a shooting war, I’m having a hard time believing that Tribe A doesn’t win. In a corporate war, I’m also having a hard time believing that Tribe A doesn’t win. Seems to me that Tribe B only really out-competes Tribe A within this framework down under the Dunbar number, at village scale or family scale interactions.
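Here's the toy version of my objection, with every number invented. Tribe A starts with punching power; Tribe B's power compounds through innovation. Tribe B wins the long game only if its compounding edge is real and only if Tribe A never gains enough of an advantage to simply rub it out first:

```python
# Tribe A: big punching power, slow growth. Tribe B: small punching
# power, fast compounding growth through innovation. If A's advantage
# ever exceeds a threshold, A just ends the game. All numbers made up.

def years_until_b_wins(a_power=100.0, b_power=20.0,
                       a_growth=1.01, b_growth=1.05,
                       shooting_war_threshold=10.0, horizon=200):
    """Year B overtakes A, or None if A crushes B (or B never catches up)."""
    for year in range(horizon):
        if a_power / b_power >= shooting_war_threshold:
            return None          # A is strong enough to just rub B out
        if b_power > a_power:
            return year          # innovation out-compounded punching power
        a_power *= a_growth
        b_power *= b_growth
    return None

print(years_until_b_wins(b_power=20.0))  # B survives, overtakes around year 42
print(years_until_b_wins(b_power=8.0))   # B never gets the chance
```

So the disagreement isn't over the arithmetic, it's over the parameters: how big Tribe B's growth edge really is, and whether it survives the short game.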
I’m an engineer. I need some examples to attach the concepts to, to get me where you’re at conceptually.
More generally speaking, we were talking about tools. I listed a toolkit, and your response appears to have been to issue a concept as your tool. And that’s fine. If I can get the concept, then we can pivot back to talking about tools for spreading that concept, which is really the level at which I’m trying to focus. But I’m a pretty smart dude, and if I can’t get the concept, you’re going to have a really hard time spreading the concept to the Walmart shoppers. And that’s the task in front of you.
Jordan:
Holy smokes this UI is hard to use for this kind of dialogue. Sorry for the relatively slow response rate. I’ve been busy.
It might prove useful to dig into genocide, but for now I think that to do so would prove a digression. The only thing that came up for me that seemed important here was to remember that when we are talking of ants, the individual ants aren’t objects of selection. Properly speaking the *colony* is the individual. So when an ant colony is at war with another ant colony this is comparable to when a single lion is in conflict with another single lion.
Skipping to the part that seems neato.
“I’ll be honest, I’m not sure I understand what you’re saying here at all, and I need you to help me through it. Seems to me that all innovation is innovation in how. Webster says an innovation is a “new idea, method, or device,” and those are all basically things to achieve tasks.”
OK. (And reading through that myself, it is really unclear. Sorry about that.)
Let’s see. How about we time travel to the early 70’s in the tech space. Places like IBM and Bell dominate the current state of the art in computation and telecommunications. But we are on the threshold of an innovation in “what”. In this case, the good old “Personal Computer”.
Now, all things being equal, we should expect someone like IBM to completely own the personal computer. They have the money, the talent, the relationships. In particular, we should expect them to stomp a bunch of fucking hippies into the ground. But they don’t. In fact, we know that none of the lions of the late 60’s tech scene are relevant any longer, while places like Apple managed (for a while) to be the most valuable company in the world. Why?
Because the much deeper innovation was in “how” than in “what”. The East Coast tech giants’ organizational structure limited their capacity for innovation “in general”. Where things like punching power mattered more than rate of change (like say government contracts), you saw IBM continue to compete. But anywhere else, the West Coast “how” of innovation rapidly took the whole market. IBM became a bit player (and eventually just gave up entirely) and variations on the new species (specifically, “WinTel” vs. “Apple”) became the new landscape.
And lest we think that this “how” wasn’t really a thing, remember how many times old guard companies struggled to find a way to “emulate” the new approach to innovation. IBM, Westinghouse, Xerox, Bell — they all figured out that their form of “collective intelligence” was getting beat by a different approach and they tried many times to copy the how so that they could compete.
But this didn’t work. Best case study — Xerox’s legendary PARC. Copying (and in many cases simply stealing people and ideas) from SRI ARC, PARC pushed the old innovation envelope. *But* more or less all of their (and SRI’s) innovation ended up being grasped and used (and innovated upon) by West Coast organizations (like Apple) while the suits over at Xerox missed the boat.
Why? Because ‘culture eats strategy’ for lunch. Xerox (and IBM and…) thought that they could simply copy the formal aspects of the West Coast model (a strategy), but the essence of the competitive advantage was much deeper. The organizational form and approach of the West Coast model emerged from and was intrinsically connected to the different culture on the West Coast. IBM stiffs running Apple organizational models couldn’t get anything done.
I should mention by the way that the cultural identity of the West Coast wasn’t so much in given companies (e.g., Apple or HP) as in the Bay Area in general: the shift from a vertically organized top-down box like IBM or Bell to a co-evolving ecosystem of organizations ranging from the large to the garage, with both capital and ideas flowing quite fluidly between them.
The West Coast figured out a culture that could give rise to a “method of organization” (a form of collective intelligence) that was itself vastly more conducive to innovation. By the 80’s the ability to innovate was so much more valuable than punching power that the mid-century East Coast tech model became essentially extinct.
We can link this back to Feudalism vs. Capitalism. Round about the middle of the 1600’s Feudalism owned nearly everything in the West. But as the Enlightenment began to percolate, the competitive advantage of a form of collective intelligence that could out-innovate the Feudal form began to rise. Capitalism traded money for land and effective choice-making for caste. Feudal structures had power. Masses of people and resources that could be pointed at a given target. Build a castle or a cathedral? Feudalism.
But when in Spain you had to get the royalty to agree to allow you to put together an expedition to the new world (and had to give them the lion’s share of the results), whereas in the Netherlands anyone could give it a shot so long as they could convince enough people to give them their support (and the ones who were successful reaped the rewards), you saw this tiny upstart nation become the most powerful maritime and economic power in the world.
But, again, as much as places like France tried to copy the strategies of what was eventually to be called ‘capitalism’, the deep competitive advantage was the *culture* that had come together in the Netherlands. A people who have been trained for centuries to bow to their betters and curry their favor just can’t run “free enterprise” effectively.
The key here is the relationship between the context (or the niche) and the kind of collective intelligence that is optimal for that niche. For hundreds of years, the “Dutch model” lacked fertile soil. There wasn’t enough potential for innovation and so the competitive advantage of being able to create and thrive in change faster wasn’t meaningful. The punching power of Feudalism was more fit than the innovation capacity of capitalism.
But by the 1600’s the niche was changing. Literacy was beginning to support a faster rate of innovation. Change was heating up. And capitalism was becoming more and more fit to this emerging landscape.
Of course it is critical here to note the feedback loop. Capitalism was more able to respond to change than Feudalism (i.e. could more quickly and effectively exploit new possibilities). But it was also more able to *create* change than Feudalism. So as the 1600’s became the 1700’s became the 1800’s the competitive advantage of the kind of collective intelligence “capitalism” continued to increase. The niche was drifting more and more in the direction of capitalism.
Hopefully this counts as two different real world examples of the form of Power (IBM, Feudalism) shifting to the form of Innovation (Apple, Capitalism).
From me:
“But herein lies the rub. Each move in the accelerating game of Power *must* be a move towards a “who” that is more capable of a “how” that is more capable of a “what”. And here is the thing that we have known since the beginning of our species: collaboration is always more innovative than domination. In many ways, the essence of Game A has been the myriad efforts of domination to maximize the innovative capacity of collaboration while maintaining the context of domination. Hence its increasing subtlety and ‘softening’. From Egyptian slavery to Feudal serfdom to the Liberal Republic to the modern welfare state. game~b is simply the result of conscious inquiry into the fundamentals of this dynamic and a realization that if you can remove the constraint of ‘conservation of domination,’ if you can move from the anti-rivalrous always in service to the rivalrous to the rivalrous always in service to the anti-rivalrous, you simultaneously innovate at a level that Game A can’t possibly (structurally) achieve and you get off the road to self-termination.”
Your response:
Let’s run with this for a second.
Tribe A: anti-rivalrous in service to rivalrous
Tribe B: rivalrous in service to anti-rivalrous
In a shooting war, I’m having a hard time believing that Tribe A doesn’t win. In a corporate war, I’m also having a hard time believing that Tribe A doesn’t win. Seems to me that Tribe B only really out-competes Tribe A within this framework down under the Dunbar number, at village scale or family scale interactions.
1600’s. Tribe A: Spain, France. Tribe B: the Netherlands, England.
1970’s. Tribe A: IBM, Xerox, Bell. Tribe B: Apple, HP, Microsoft.
Tribe A will have the advantage in bringing to bear the effectiveness of the tools of the past. Tribe B will have the advantage of innovating new tools. Tribe A will have more punching power. Tribe B will have more innovative capacity.
To the degree that Tribe A wants to become more like Tribe B in its ability to thrive in and take advantage of a new level of innovation, it will have to become more like Tribe B at the level of “culture”. Simply trying to copy at the level of technology or strategy won’t do.
Does that make sense?
But of course if we use nuance we can look at the 1960’s. Hippie culture was able to outcompete “the man” in places like information technology and media and banking. The straight culture continued to own places like energy and real estate. Society is complex and not everything moves in the same currents.
Moreover, we have the innovator’s dilemma. Once your innovation has been successful, you have a very strong attraction to hardening your culture (optimizing for efficiency) and switching from B to A. Hence, Microsoft was Tribe B in the 80’s but was Tribe A in the 90’s and beyond. Amazon and Google were Tribe B in the 90’s and early 2000’s but are now more or less Tribe A.
By making this shift, you can convert “innovation capacity” to “punching power”. Or, more narrowly, you can increase revenues and (even more) profits at the cost of decreasing innovation capacity; i.e., put the anti-rivalrous in the service of the rivalrous. But this kills the culture. Like Feudalism and IBM before you, you are living off the past. Which can certainly last for some time (it’s going to take a real upgrade in innovation capacity to storm Castle Google and Amazon), but the story is the same. Once you’ve traded a culture of innovation for a culture of power, you’ve won the battle but lost the War.
More generally speaking, we were talking about tools. I listed a toolkit, and your response appears to have been to issue a concept as your tool. And that’s fine. If I can get the concept, then we can pivot back to talking about tools for spreading that concept, which is really the level at which I’m trying to focus. But I’m a pretty smart dude, and if I can’t get the concept, you’re going to have a really hard time spreading the concept to the Walmart shoppers. And that’s the task in front of you.
I don’t agree. And I’ve noticed this disagreement several times so it might be fundamental. (see indoctrination vs. enculturation). I don’t need (or want or intend) to spread the concepts. Propositional knowing is both weak and low fidelity. The key is not to understand game~b or to either know or spread game~b ideas. The key is to become a game~b player. To embody and live it. Participatory knowing. And to make game~b inviting to other people who might choose to become game~b players.
The suits and marketers and propagandists and strategists can simulate and copy words and ideas. But unless one becomes a game~b player, you can’t do “the thing”. Once you can do the thing, the ideas are grounded in your living and relating.
BJ:
It might prove useful to dig into genocide, but for now I think that to do so would prove a digression. The only thing that came up for me that seemed important here was to remember that when we are talking of ants, the individual ants aren’t objects of selection. Properly speaking the *colony* is the individual. So when an ant colony is at war with another ant colony this is comparable to when a single lion is in conflict with another single lion.
We can skip genocide for a bit if you like, but I need to point out that you can’t just think of ant colonies as national entities here. In a genocide, one tribe of humans within a boundary kills off another tribe of humans within a boundary. The national boundary often doesn’t matter. Genocides happen when our resource scarcity neurons activate, and sometimes they activate against our neighbors if they’re identifiably tribally different.
I get the rest of the Silicon Valley vs Big Tech analogy you lay out, but it seems almost tautological. It seems the lesson you’re teaching is “people are more creative when they collaborate as peers instead of reacting to rigorous authoritarian hierarchies.” To that assertion I would probably say something like “duh,” if I wasn’t trying to be polite and engaged. It sounds as if you’ve “stumbled into a truism,” as my dad used to say to me.
And if that’s the case, then the devil is obviously in the Dunbar Details. How do you scale collaborative efforts past the social connection boundary? Communism works great in communes, it only fails once you make it mandatory and you have to take everyone into it. Suddenly Pharma Bro is in the mashed potato line taking more than his share or whatever.
The answer to scaling past Dunbar boundaries is re-enculturation of a different set of behaviors at a massive scale. This is why Communism “works” in North Korea. They’re uniformly indoctrinated. Of course North Korea’s results aren’t all that great, but they’re about as great as Leninist central planning can be expected to achieve, and they got past the Dunbar number with rigorous thought control.
I don’t agree. And I’ve noticed this disagreement several times so it might be fundamental. (see indoctrination vs. enculturation). I don’t need (or want or intend) to spread the concepts. Propositional knowing is both weak and low fidelity. The key is not to understand game~b or to either know or spread game~b ideas. The key is to become a game~b player. To embody and live it. Participatory knowing. And to make game~b inviting to other people who might choose to become game~b players.
This is where you lose me, and perhaps where all of Game B might lose me. It’s the place where our conversation begins to sound like Scott Alexander’s Universal Love Said the Cactus Person. (If you’re not familiar, go read that and come back) What you said above sounds a hell of a lot like “you can’t be told how to become enlightened, you just have to discover it and then do it.”
And I think there’s real value in that perspective, and I think I hold it to a large degree myself, but if Game B is an unspeakable enlightenment, then what’s in it for me over Zen Buddhism? Game Zb has been around for a while already.
Where I think you and I may differ, and perhaps tremendously so, is this. I acknowledge enlightenment by its nature has no instruction booklet. But there have definitely been wide cultural shifts throughout mankind’s history where we bailed on one set of indoctrinated principles and adopted a different set, and the new set allowed us to beat certain Prisoner’s Dilemmas in our older mode of thinking, and then the new indoctrinations became de facto obvious truths because they worked so well. These shifts happened without everyone suddenly becoming perforce enlightened, although perhaps “Woke” might actually be a good and useful term here. The golden rule and the intrinsic value of money rank very highly as necessary indoctrinations for modern life.
Note I’m intentionally going to go back to the “indoctrination” term here, because those things are not natural, they’re beaten into the brains of children by their parents and teachers and pastors so the children can participate in society. Apes don’t have money.
Let’s look closely at religion. 85% of the world is religious. We have space in our brains that religion likes to occupy. This is an evolutionary benefit, because it gives us a pathway to indoctrinate (golden rule) and such into our kids. And perhaps a very devout Christian comes to a deeper understanding of the teachings and allows that to guide their life, and the results of that are positive. But society still works when it includes folks who don’t come to that deeper understanding, because they’re still running the program. And the religions that won out in the game of religious Darwinism were not the most spiritual ones, they were the ones with the most effective program.
The task is to develop the program, and then spread it. If we adopt the position that the program cannot be known except through spirituality, then that doesn’t help us develop the program at all. Best we’ve got is Zen.
The suits and marketers and propagandists and strategists can simulate and copy words and ideas. But unless one becomes a game~b player, you can’t do “the thing”. Once you can do the thing, the ideas are grounded in your living and relating.
I think we have a lot of common ground here, but that common ground needs to be drug out of the weeds.
Let’s go back to my first invocation of the Walmart Shoppers motif, talking about Sam Harris. He’s of the opinion that we don’t need religion to be our moral compass as long as we can get everyone to understand all the important nuances of his preferred flavor of secular humanism. A lot of the New Atheist movement fall into this same trap. This is not scalable. If we’re talking about adjusting the behavior of an ant colony, every ant doesn’t need to understand the colony’s actions from the perspective of a queen, or an entomologist, the ant just needs to understand his behavioral role. This is the primary function of religion in the history of mankind, and remains the primary function today.
(spooky story therefore behave thusly) is a complete end-around to having to explain the actual reasoning behind (behave thusly). The efficacy and robustness of the world’s religions is not tied to the (spooky story), it’s tied to the behavior attached to the story. And through Darwinist action, the religions that had the best set of (behave thusly) won out. I often hear the thought experiment “maybe one of our current religions is right,” but the funner one to me is “what if one of the dead religions was right?” How did that dead religion die? Because its version of (behave thusly) was less optimal than the ones we have today.
Where religion is faltering today, generally speaking, is that the spooky stories can be shown to not be literally true. Allegorically true perhaps, which is where Jordan Peterson is making his money, but the failure of the spooky story to entrap the Walmart shoppers will eventually lead to a failure to propagate healthy memes such as golden rule. This is bad. Not only does that fail to bring us to a better game than Game Ant, it literally breaks the ant hill itself so we might get rubbed out by some other culture running Game Ant better. The Muslims are running Traditional Baby Making Ant way better than we are, and the Chinese are running Authority Ant way better. The conservatives want to reinstate Western Ant to combat this, and the Social Justice folks want to simply Kill All Ants and damn the consequences. It seems to me like the Game B folks want to find the next thing that beats them all and hopefully makes it so we’re not trapped in an ever increasing evolutionary track of ant emulation.
But the funny thing about religion is that anyone can make one. L. Ron Hubbard did it, and he’s not any smarter than you or I am. His books aren’t even that good, but what he cooked up works very well for some people. I think he focused on the right architecture:
spooky story that is believable under the guidelines of science (science fiction anyway)
therefore behave thusly
It seems obvious to me, after years of chewing on this, that what we need is Religion Zero. Religion Zero should be crafted around certain parameters, some of which propagate Game B as I think I understand it, and some which are specifically tailored to ensure that it spreads properly. Here’s a (non-exhaustive) list of important qualities of Religion Zero:
The belief that there is in fact an objective reality. This is paramount to ensure postmodernist Chaos Magic cultists don’t hijack the thing.
The belief that our goal should be to discover that reality as best as we can describe it.
The belief that there are elements of that actual reality which are not currently explained by science, but which later might be. This is your (spooky story) platform.
The belief that failing to understand that actual reality will lead us to Nash Equilibria and Prisoner’s Dilemmas that are inherently bad. War bad. Genocide bad. Extinction bad. Game B language gets embedded here, where the nerds figure out indoctrinated (encultured?) behaviors which, if adopted by everyone willingly, would avoid these prisoner’s dilemmas. Insert cultural engineering.
Embrace the idea that this is a reverse doomsday cult. Everything will be terrible unless everyone gets on board and follows Game B behaviors. This is an evangelism tactic. Is it true that the human race is going to die out in less than 500 years unless we get our shit together, as Bret Weinstein often says on Twitter? It would be hard to say for sure, but this is a religion, so we can choose to adopt that belief if that choice is useful to us.
It needs spooky stories (these are a must-have) that could possibly be true given our current understanding of science. These don’t have to be proven, nor even falsifiable in a scientific sense, but the spooky stories must be malleable so they don’t end up outdated by scientific research, while also being effective at enculturing the behaviors identified two bullets up.
Creating a malleable religion is a tough thing, but there are lessons to be learned in how to build it by an objective and calm look at what Social Justice is doing today. Especially on the evangelism front. They’re really good at that.
Issue a guarantee that a Council of Zero will convene once a decade to ensure that the belief structure stays updated to the best available science. Not that it is provable, mind you, just that it is believable and not in conflict. Steal the Social Justice “crowd sourced religion” program and implement it in this layer.
The best possible goal for Religion Zero would be to create a framework that is scientifically believable, but which also doesn’t conflict with any major world religions today. Better, it would explain how each of those religions could also be literally scientifically true. This is the holy grail. If you can build that, you can piggyback Religion Zero into the teachings of any and all major world religions, which would not only give you a viral pathway to spread it, but also connect people of different existing religions through the Religion Zero lexiconic pathway.
Imagine the enculturative strength of someone striking up a conversation with a Christian and saying, “I believe your religion is scientifically correct,” and being able to also say that to a Buddhist or a Muslim, and invite them all to lunch.
The scope of Religion Zero may sound too aggressive at first glance, but I think this is absolutely doable.
You’ve expressed the desire to migrate to a forum format instead of Medium as a format, but I rather like Medium so I stuck this here. If you want to pivot over, drop your response on the forum and drop a link here so the two or three people following along can head that way.
Jordan:
Seems reasonable to keep this here now that it has some kind of flow. Will be interesting to see if anyone chooses to follow it (and if Medium can handle this level of nesting!).
I get the rest of the Silicon Valley vs Big Tech analogy you lay out, but it seems almost tautological. It seems the lesson you’re teaching is “people are more creative when they collaborate as peers instead of reacting to rigorous authoritarian hierarchies.” To that assertion I would probably say something like “duh,” if I wasn’t trying to be polite and engaged. It sounds as if you’ve “stumbled into a truism,” as my dad used to say to me.
I agree! And if you can imagine, for me the entire rest of the Game A -> game~b story is equally ‘duh’.
So notice that if the above is tautological, we need to explain why nearly every damn time the “Game A” side of the equation has to be dragged kicking and killing into the future. The answer of course is that “people are more effective” when they are reacting using well understood approaches than trying to invent new ones. So we get a classic “hill climbing” vs. “valley crossing” balance. If the landscape favors the old game then we will find ourselves migrating towards “excellence”, “hill climbing”, “game ant”. If the landscape favors the new then we will find ourselves migrating towards “remarkableness”, “valley crossing”, “game b”.
The meta-landscape seems like it started in the “95% of the time the old ways are better”. So most of the time going with “rigorous authoritarian hierarchy” was the best idea. Hence the Bronze Age.
But each time there was a little nudge in valley crossing direction, the meta-landscape shifted a bit in the “valley crossing direction”. So in the 1100’s in the West we were still, say, 80/20 hill climbing. By the 1600’s it had shifted to like 70/30. By the 1800’s to maybe 60/40. By the 1950’s to 50/50. Ish. By the late 70’s we were crossing the Rubicon to a point where (for the first time ever) the meta-landscape fundamentally favored “valley crossing” over “hill climbing” — but the “Great Moderation” (aka Globalist Neo-Liberalism) since then has been a tremendous rearguard action trying to keep the fundamental architecture of Game A in place.
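[A minimal sketch of the hill climbing vs. valley crossing distinction, for concreteness. The landscape and numbers below are invented purely for illustration: a greedy climber only ever accepts improvements, so it gets stuck on the nearest peak, while a search that tolerates exploration can find the higher peak across the valley.]

import random

# Toy fitness landscape, invented for illustration: a low peak at x=2
# (height 3) and a higher peak at x=8 (height 6), with a dead valley between.
def fitness(x):
    return max(0.0, 3 - (x - 2) ** 2) + max(0.0, 6 - (x - 8) ** 2)

def hill_climb(x, step=0.1, iters=2000):
    # Greedy search: only ever accept strict improvements.
    for _ in range(iters):
        candidate = x + random.choice([-step, step])
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

def explore(restarts=20):
    # A crude stand-in for "valley crossing": restart the greedy climber from
    # random points and keep the best peak found. (Annealing is the closer
    # analogy, but the moral survives: exploitation alone only ever finds
    # the nearest hill.)
    return max((hill_climb(random.uniform(0, 10)) for _ in range(restarts)),
               key=fitness)

random.seed(0)
print(round(fitness(hill_climb(1.0)), 2))  # 3.0: stuck on the near, lower peak
print(round(fitness(explore()), 2))        # 6.0: exploration finds the far peak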
And if that’s the case, then the devil is obviously in the Dunbar Details. How do you scale collaborative efforts past the social connection boundary . . .
The answer to scaling past Dunbar boundaries is re-enculturation of a different set of behaviors at a massive scale. This is why Communism “works” in North Korea. They’re uniformly indoctrinated.
This is the Game A answer. And, of course, it works in the sense that it does scale. But it does so in a fashion that fails for all the various reasons I’ve outlined elsewhere. If we would like to drill down here we can, of course.
This is where you lose me, and perhaps where all of Game B might lose me. It’s the place where our conversation begins to sound like Scott Alexander’s Universal Love Said the Cactus Person. (If you’re not familiar, go read that and come back) What you said above sounds a hell of a lot like “you can’t be told how to become enlightened, you just have to discover it and then do it.”
And I think there’s real value in that perspective, and I think I hold it to a large degree myself, but if Game B is an unspeakable enlightenment, then what’s in it for me over Zen Buddhism? Game Zb has been around for a while already.
Ah. Yes, I get this and it makes sense. I think this is because the “transcendent industry” has had to manufacture a lot of confusion in order to maintain its niche. My experience is that this stuff is simpler. For example, you are jumping all the way from “participatory knowing is the thing” to “unspeakable enlightenment”.
Participatory knowing isn’t == “unspeakable enlightenment”. It is juggling. Or shooting a basketball. Or speaking French. You can read about it in a book. I can talk to you about it ad nauseam. But until you actually do it for yourself (and do that thing where your specific individual way of being in the world conforms to the subtle set of things necessary to actually have this capacity), you aren’t going to make much progress. A good coach doesn’t coach free throws by drawing a diagram. He gives you a ball, points you at the basket, maybe repeats some mantras (“bend your knees”) and tells you to come back when you are hitting the shot.
There is a lot of area under the curve between “disseminating concepts” and “unspeakable enlightenment”. The same goes for the thing you are pointing to as “spirituality”. Spirituality is like how you can learn how to be a better parent or partner by learning how to take responsibility for your own reactions and practice better ways to communicate. It’s practical stuff closer to how you make a good house [hint bricks better than straw if you are in a region notably inhabited by wolves] than what the Burning Man folks traffic in. A truly spiritual person isn’t someone who speaks nonsense and imagines that sage scares away spirits. A truly spiritual person has lived life deeply and has integrated the highest joys and pains into a quality that could be called “wisdom”.
And the religions that won out in the game of religious Darwinism where not the most spiritual ones, they were the ones with the most effective program.
We return to the Big Tech vs. Silicon Valley analogy. In this analogy, Big Tech has the most effective program (strategy). Silicon Valley is “more spiritual” (culture). In each step up the stairway, we go through a phase of “most effective program” working for a while and then a phase when “most spiritual” zips ahead (the Axial age famously is a big example). Again — when the landscape favors hill climbing, the “most effective program” will tend to win. When it favors valley crossing, the “culture most capable of cultivating wisdom” will tend to win.
This is the thing. Bret hits on this a lot. The dead center of game~b is the recognition that we are (whether we like it or not) crossing that Rubicon in novelty. As the “stumble” of War continues to move us up the ratchet of accelerating change, Darwinian action is resolutely selecting for cultures that (a) are better at creating change; and (b) are better at thriving in that higher rate of change than the legacy cultures. Capitalism *categorically* outcompetes Feudalism because it changes the landscape into something that Feudalism can’t survive in.
(spooky story therefore behave thusly) is a complete end-around to having to explain the actual reasoning behind (behave thusly). The efficacy and robustness of the world’s religions is not tied to the (spooky story), it’s tied to the behavior attached to the story. And through Darwinist action, the religions that had the best set of (behave thusly) won out. I often hear the thought experiment “maybe one of our current religions is right,” but the funner one to me is “what if one of the dead religions was right?” How did that dead religion die? Because its version of (behave thusly) was less optimal than the ones we have today.
Yep. This is the same dynamic of “local optima” vs. “global optima”. Or “bad parenting”. It is indeed the case that if you smack your kid when they do X, they will stop doing X. And if you are in a situation where X is fatal, that might be a decent approach (i.e., your kids will survive while those without that code die). But you are now on the religio — credo reversal death spiral that Johnny V has been talking about.
For simplicity, I’ll define the approach of “(spooky story behave thusly)” as “morality”. Morality (in this sense) is a strategy that optimizes for achieving particular behaviour [code]. In contexts where very little changes, you can get a long way via Morality. As you say, it is much easier to scale Morality than it is to cultivate Wisdom. And as long as the evolutionary process (Darwinist action) has time to do its work, it will ultimately select for a Morality that is a ‘good’ fit to the context.
But if the context changes, you are in deep trouble — because you have been training your entire society to “run code” rather than to “respond to reality”.
Back to Big Tech vs. Silicon Valley. You are going to see this “duh” dynamic showing up over and over.
It seems obvious to me, after years of chewing on this, that what we need is Religion Zero. [Outline of Religion Zero]
Here is where you lose me. Actually a lot of what you say here seems right to me (particularly the first three bullets), but the essence is off. Perhaps simply practically: we are entering an accelerating phase of what I have been calling “The War on Sensemaking”. In this war, the gloves are off in a total war on the front of “propositional knowing”. The technologies of propaganda, manipulation, etc. are going to go through the same kind of acceleration that the technologies of blowing shit up went through between 1939 and 1945. The entire capacity for “spooky stories” to do anything at all is going to be so much toxic mush. The SJW religion (and its co-conspirator the alt-right religion) will simultaneously fragment and spiral with each other into a vapor of increasingly hyper-potent ideological fixation. [They will be helped of course by any agents playing variants of Game Ant that are still functional — looking at you China.]
This is the ideological part of the endgame of Game A. The Run Code function of the mind that Game A so successfully exapted is tapping out.
This part, however, I think is both possible and good:
The best possible goal for Religion Zero would be to create a framework that is scientifically believable, but which also doesn’t conflict with any major world religions today. Better, it would explain how each of those religions could also be literally scientifically true. This is the holy grail. If you can build that, you can piggyback Religion Zero into the teachings of any and all major world religions, which would not only give you a viral pathway to spread it, but also connect people of different existing religions through the Religion Zero lexiconic pathway.
So we are now starting to get very close to the center of this thing.
And if that’s the case, then the devil is obviously in the Dunbar Details. How do you scale collaborative efforts past the social connection boundary . . .
Yep. This is where the OG game~b team sat about nine or so years ago. More specifically, Game A scales — but at an exponent that is below 1 (so you get “decreasing returns to scale”) and with a system design that intrinsically vectors towards systemic fragility. We need to find a way to scale at an exponent greater than 1 and with a system design that is ‘anti-fragile’.
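[To put rough numbers on the exponent claim (invented, purely illustrative): if collective capability grows like N**k for N participants, a k below 1 means each doubling of scale buys less than a doubling of capability, and a k above 1 means it buys more. A two-line sketch:]

# Invented numbers, purely to illustrate the exponent claim: if capability
# scales like N**k, then doubling N multiplies capability by 2**k.
for k in (0.9, 1.0, 1.1):
    print(f"k = {k}: doubling N multiplies capability by {2 ** k:.2f}x")
# k = 0.9 -> 1.87x (decreasing returns to scale: the Game A pattern)
# k = 1.0 -> 2.00x (linear)
# k = 1.1 -> 2.14x (increasing returns: the game~b target)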
BJ:
(and if Medium can handle this level of nesting!)
It’s certainly a pain in the ass to get here from the beginning, but maybe when we’re done we can just consolidate the meaningful points. Or I can take the time to merge them all into one continuous dialog and throw it up on Medium so it’s easier for the muggles to follow.
I agree! And if you can imagine, for me the entire rest of the Game A -> game~b story is equally ‘duh’.
So notice that if the above is tautological, we need to explain why nearly every damn time the “Game A” side of the equation has to be dragged kicking and killing into the future.
Because Game A is game theoretically maximal under its own ruleset.
Sticking with the tech analogy, nobody can build a better Facebook while Facebook still has its current market share, because it is game theoretically maximal for me as a consumer to go to Facebook because everyone else is on Facebook. In order to beat Facebook you have to either beat it within a world where Facebook already exists, which means you have to hijack it with an app that can sneak its way into the Facebook user base, or you have to exacerbate the collapse of Facebook and build something new and better from the social media debris.
There are analogies to Game A as a whole, or elements of it, in the above statement.
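[A minimal sketch of that lock-in, with invented payoffs and a hypothetical rival called “NewApp”: say a user’s payoff is platform quality times the share of their friends already there. The rival can be strictly better software and still lose, because no single user improves their own payoff by switching first.]

# Invented payoffs: a user's payoff = platform quality * share of friends there.
quality = {"Facebook": 1.0, "NewApp": 1.5}  # hypothetical NewApp: better software

def payoff(choice, share):
    return quality[choice] * share[choice]

everyone_on_facebook = {"Facebook": 1.0, "NewApp": 0.0}
print(payoff("Facebook", everyone_on_facebook))  # 1.0: staying put pays
print(payoff("NewApp", everyone_on_facebook))    # 0.0: a lone switcher loses

# If everyone somehow moved at once, everyone would be better off. But no
# unilateral move gets there. That is "game theoretically maximal under its
# own ruleset": a Nash equilibrium, not a global optimum.
everyone_on_newapp = {"Facebook": 0.0, "NewApp": 1.0}
print(payoff("NewApp", everyone_on_newapp))      # 1.5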
The answer of course is that “people are more effective” when they are reacting using well understood approaches than trying to invent new ones. So we get a classic “hill climbing” vs. “valley crossing” balance. If the landscape favors the old game then we will find ourselves migrating towards “excellence”, “hill climbing”, “game ant”. If the landscape favors the new then we will find ourselves migrating towards “remarkableness”, “valley crossing”, “game b”.
The meta-landscape seems like it started in the “95% of the time the old ways are better”. So most of the time going with “rigorous authoritarian hierarchy” was the best idea. Hence the Bronze Age.
It seems to me, based on this passage and a lot of other stuff I’ve read you write, that you think the root problem of Game A is “people collaborating via power hierarchies.” Do I understand that correctly? This is something we need to distill until it’s clear.
If so, I think this might highlight a very fundamental disagreement between your thinking and mine. I think social hierarchies are natural and unavoidable, and the main problem is that the intersection of many current systems creates Nash Equilibria that could kill our species. (war, killing the whales, etc.) Many of these equilibria are at the government level, so governments can’t be used to fix them.
I think the problems of Game A will get fixed by identifying those Nash Equilibria and making participating in them unethical at the “enculturation” layer. So for example, you can’t tell governments not to war, because war is part of their job, but a global enculturation against war might lead us to the hippie solution of “what if they threw a war and nobody showed up.” Fix poverty by “enculturating” charity. Etc.
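[For concreteness, the classic shape of the equilibrium I mean, with invented payoffs: whatever the other side does, “arm” pays better for you, so both sides arm and land on the worst joint outcome. The enculturation move is to change the payoffs players assign to arming, not to order anyone to disarm.]

# Invented payoffs (mine, theirs) for a toy two-nation dilemma.
payoffs = {
    ("disarm", "disarm"): (3, 3),  # the cooperative outcome everyone prefers
    ("disarm", "arm"):    (0, 5),  # the sucker's payoff
    ("arm",    "disarm"): (5, 0),
    ("arm",    "arm"):    (1, 1),  # the Nash equilibrium both get dragged into
}

def best_response(their_move):
    return max(("disarm", "arm"), key=lambda mine: payoffs[(mine, their_move)][0])

print(best_response("disarm"))  # arm
print(best_response("arm"))     # arm -> both arm, both get 1 instead of 3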
I will absolutely concede the case that a pivot towards cooperative instead of hierarchical problem solving will help us reach a new global enculturation, but I don’t think elimination of hierarchies will necessarily solve anything on its own, nor do I think it’s even possible.
Is this a major point of disagreement between us? If so, do you think your position is a typical characterization of Game B thinking?
But each time there was a little nudge in valley crossing direction, the meta-landscape shifted a bit in the “valley crossing direction”. So in the 1100’s in the West we were still, say, 80/20 hill climbing. By the 1600’s it had shifted to like 70/30. By the 1800’s to maybe 60/40. By the 1950’s to 50/50. Ish. By the late 70’s we were crossing the Rubicon to a point where (for the first time ever) the meta-landscape fundamentally favored “valley crossing” over “hill climbing” — but the “Great Moderation” (aka Globalist Neo-Liberalism) since then has been a tremendous rearguard action trying to keep the fundamental architecture of Game A in place.
I think your characterization of these ratios may be heavily influenced by your information bubble, being deep in the USA tech sector. I’m a child of construction workers and peanut farmers, and the most technologically savvy portions of my family are Army brass. The landscape in agriculture, outside of the invention of the tractor and the pesticide, favors classical solutions. Those inventions weren’t a result of anti-hierarchical thinking. The landscape in construction, outside of the implementation of critical path method project management, favors classical solutions. Critical Path scheduling is a product of a thought experiment on how to make Game A more Aish. The US Army is probably the most “Game A is Best Game” organization I can think of on earth today, outside of maybe the Catholic Church.
All this to say, “nah dude, it’s like 90/10 Game A right now.” At least from my chair over here as I bang out civil engineering plans.
I want to double quote a bit of dialogue because I think it may drill down to the possible fundamental disagreement I highlighted above.
I said:
The answer to scaling past Dunbar boundaries is re-enculturation of a different set of behaviors at a massive scale. This is why Communism “works” in North Korea. They’re uniformly indoctrinated.
You replied:
This is the Game A answer. And, of course, it works in the sense that it does scale. But it does so in a fashion that fails for all the various reasons I’ve outlined elsewhere. If we would like to drill down here we can, of course.
To me, North Korea only “fails” (a loaded term which may need to be defined) because their nested systems of {religion, governance, economics, ethics} don’t compete well with neighboring systems of {religion, governance, economics, ethics}. It’s a sub-optimal Game A.
I think we all need to be very worried that China has figured out an even better Game A than we have. They have their entire country duped into believing they’re Marxist, which is a great rebellion suppression tool. They have media control, so contra-party narratives can’t spread. They have their social credit score system which works for Orwellian reasons. But they fixed the failures of communism in central resource allocation by rolling out an extremely free market layer at the bottom level, much freer than we have in the USA. (try and get a food truck permit here, QED) And their industry model is ripped straight out of Mussolini fascism / National Socialism. I find everything except the free market layer there highly personally objectionable, but from an analysis perspective I greatly fear that their current system may be Game A2, and it’s spreading fast. In my mind, Game B (or whatever) has a limited window to roll out before A2 wins by emulating Orwell.
Ah. Yes, I get this and it makes sense. I think this is because the “transcendent industry” has had to manufacture a lot of confusion in order to maintain its niche. My experience is that this stuff is simpler. For example, you are jumping all the way from “participatory knowing is the thing” to “unspeakable enlightenment”.
Participatory knowing isn’t == “unspeakable enlightenment”. It is juggling. Or shooting a basketball. Or speaking French. You can read about it in a book. I can talk to you about it ad nauseam. But until you actually do it for yourself (and do that thing where your specific individual way of being in the world conforms to the subtle set of things necessary to actually have this capacity), you aren’t going to make much progress. A good coach doesn’t coach free throws by drawing a diagram. He gives you a ball, points you at the basket, maybe repeats some mantras (“bend your knees”) and tells you to come back when you are hitting the shot.
Emphasis mine.
That makes a lot of sense, but it doesn’t get us to “global re-enculturation to end war and stop killing the whales” unless you’ve (A) got that book, and (B) evangelize the book. And the book has to have a bunch of behavioral indoctrinations/enculturations engineered to produce less whale and people killing.
There is a lot of area under the curve between “disseminating concepts” and “unspeakable enlightenment”. The same goes for the thing you are pointing to as “spirituality”. Spirituality is like how you can learn how to be a better parent or partner by learning how to take responsibility for your own reactions and practice better ways to communicate. It’s practical stuff closer to how you make a good house [hint bricks better than straw if you are in a region notably inhabited by wolves] than what the Burning Man folks traffic in. A truly spiritual person isn’t someone who speaks nonsense and imagines that sage scares away spirits. A truly spiritual person has lived life deeply and has integrated the highest joys and pains into a quality that could be called “wisdom”.
I think you’re selling the Burning Man people short here, and also perhaps the Christians and others. The “spirituality” they’re chasing is like a tingle in your brain, like the endorphins released during long distance running or on psilocybin. And our brains are wired to enjoy that tingle, because the apes who had that tingle in their brains more easily adopted religious teachings, which allowed a Darwinism of Religion to manifest. When one of the religions stumbled into baptism (hygiene) and kosher (clean food) it began to spread. When another dude came out with an update that included (golden rule) it won out, generally speaking. That tingle is a potential tool in our toolbox. Don’t disregard it. It is absolutely the most important tool, historically speaking, for this kind of project.
We return to the Big Tech vs. Silicon Valley analogy. In this analogy, Big Tech has the most effective program (strategy). Silicon Valley is “more spiritual” (culture). In each step up the stairway, we go through a phase of “most effective program” working for a while and then a phase when “most spiritual” zips ahead (the Axial age famously is a big example). Again — when the landscape favors hill climbing, the “most effective program” will tend to win. When it favors valley crossing, the “culture most capable of cultivating wisdom” will tend to win.
I’m not sure I believe this at all. I think it might be true inside the tech bubble, but I don’t think it applies universally. Do you think the native Americans got rubbed out because the landscape of the 1800s favored (your analogy) “hill climbing?” If so, I’m not sure there has ever been a period of human history that didn’t favor hill climbing.
This is the thing. Bret hits on this a lot. The dead center of game~b is the recognition that we are (whether we like it or not) crossing that Rubicon in novelty. As the “stumble” of War continues to move us up the ratchet of accelerating change, Darwinian action is resolutely selecting for cultures that (a) are better at creating change; and (b) are better at thriving in that higher rate of change than the legacy cultures. Capitalism *categorically* outcompetes Feudalism because it changes the landscape into something that Feudalism can’t survive in.
I’m not sure I buy this either. I think what you’re describing is not an all encompassing shift in paradigm, it’s probably just a curious historical artifact that’s basically local. I think the idea that it’s an all encompassing paradigm shift is one heavily influenced by your perspective. It’s a bubble of creativity inside Game A, from which Game A can extract valuable ideas when they surface, on which to capitalize.
I’m not belittling it, by the way. I think my perspective on it still affords us a lot of opportunity to end war and save the whales, by leveraging new paths to influence Game A in old ways. But if the goal is to pivot the entire planet away from a hierarchical scheme and into a creative cooperative scheme, I think that’s pissing in the wind. I don’t think the collective brains of the planet can run that software. I think the creative bubble needs to build new software and disseminate it, not try and turn all the users of the world into programmers.
You’re in software. You’re a programmer. Programmers (largely) hate users. PEBCAK. Remember, we’re dealing with 8 billion users here.
For simplicity, I’ll define the approach of “(spooky story behave thusly)” as “morality”. Morality (in this sense) is a strategy that optimizes for achieving particular behaviour [code]. In contexts where very little changes, you can get a long way via Morality. As you say, it is much easier to scale Morality than it is to cultivate Wisdom. And as long as the evolutionary process (Darwinist action) has time to do its work, it will ultimately select for a Morality that is a ‘good’ fit to the context.
But if the context changes, you are in deep trouble — because you have been training your entire society to “run code” rather than to “respond to reality”.
Yep. That’s the problem that the world religions are facing now. They’re legacy software with no updates. I’m glad we’re on the same page. Now which task is harder:
Teach 8 billion people to write their own software (wisdom), or
Figure out a way to push them software updates (morality)
What solution has the tech community settled on, when it comes to actual software? Be honest. The tech model is that a small group of (wisdom) folks pushes updates out to the (morality) folks. This is also the global social religious model. It’s the same model.
Do you see how it’s the same model?
Here is where you lose me. Actually a lot of what you say here seems right to me (particularly the first three bullets), but the essence is off.
I think that I lost you around the core premise that I’ve identified here as our disagreement. I want software updates, you want to turn users into programmers. Does this sound like a correct characterization of our divergence?
If that’s a correct characterization, might I offer an olive branch? What if we turn as many users as possible into programmers, (probably something like 1%? 5%?) but then they crowdsource the new operating system to push to everyone else? Because that’s generally my vision. If that sounds amicable, go back and read the Religion Zero bit again through that filter and see if it makes more sense.
This is also, for what it’s worth, the lesson to learn from the Social Justice beta. That’s what they’re doing. Some number of centrally located SJ gurus inside a social influence web push out updates to the users. It works.
Perhaps simply practically: we are entering an accelerating phase of what I have been calling “The War on Sensemaking”. In this war, the gloves are off in a total war on the front of “propositional knowing”. The technologies of propaganda, manipulation, etc. are going to go through the same kind of acceleration that the technologies of blowing shit up went through between 1939 and 1945. The entire capacity for “spooky stories” to do anything at all is going to be so much toxic mush.
What you see as toxic mush, I see as a tremendous opportunity. People like spooky stories, and the sensemaking crisis is going to create a window where new and different ones can be more easily installed, or old ones can be revised. See: Qanon. I see the coming deepfake apocalypse as a wave that some ideologies are going to ride, and if we don’t want bad ones to ride it, then we need to put a really good one out there that’s primed to ride it. I see the war on sensemaking as a tool in the toolbox. And I think the Game B people are in a better position to ride that wave because they’re some of the few people who even see the wave coming.
Your idea that Game A is going to collapse under this doesn’t jibe with my understanding of the facts though. Game A has survived this sort of thing before, and will again, and if you don’t want Game A to hijack it you need to get on your surfboard and out-surf Game A.
The SJW religion (and its co-conspirator the alt-right religion) will simultaneously fragment and spiral with each other into a vapor of increasingly hyper-potent ideological fixation.
Sure, because they’re built on bad seed crystals, but the people who bail from those are going to want something else to latch onto. This is again an opportunity. They latched onto those things because they were seeking something to fill that part of their brain that likes religion. Once social justice eats itself, they’ll be seeking something else. They’re not programmers, most of them anyway, they’re users. They need something to use. That’s literally why New Atheism pivoted into Atheism+ and such. Those folks in a prior century would have been religious, but they couldn’t abide by the spooky stories because (science) so they gravitated towards atheism. But they still needed some kind of religion, and they found it in SJ. Once SJ eats itself those people will seek out something else.
This is the ideological part of the endgame of Game A. The Run Code function of the mind that Game A so successfully exapted is tapping out.
No no no, heavily disagree. “Run Code” is Game Zero. “Run code” is a biological function inside human brains, flat out. You can call it the NPC meme if you like, but it’s something that almost everyone on all sides of all tribes does.
If you’re building Game B around the idea that all (most, some) users can become programmers, then you’re dead in the water. We go to church because we don’t all have the capacity (or the time) to be a preacher.
Not everyone can be a guru. Religion is the information pathway where
(wisdom) > (morality)
…and most people operate on the morality layer out of personal necessity. Either because they don’t have the mental capacity to develop wisdom, or just as likely they simply don’t have the time.
Yep. This is where the OG game~b team sat about nine or so years ago. More specifically, Game A scales — but at an exponent that is below 1 (so you get “decreasing returns to scale”) and with a system design that intrinsically vectors towards systemic fragility. We need to find a way to scale at an exponent greater than 1 and with a system design that is ‘anti-fragile’.
This sounds interesting to unpack, but I’m not sure it matters if the thing you’re brewing up can’t run on the hardware you have available to work with. 8 billion legacy boxes of different configurations and different capabilities, and you need to install a new OS on all of them.
That’s your actual project. Or it should be.
If that’s not your actual project, then the ceiling of Game B is some Facebook groups sharing stories about green building initiatives, with some occasional cryptic mentions on Joe Rogan.
I think you need to consider the idea that religion may very well be the scaling system you’re looking for, if the religion itself weren’t centralized. Do you understand what I mean by that?
Fin
This went on in a few more spots elsewhere, including a podcast here:
But in the end, it appears that no, in fact Jordan and the Game B crowd have not at all reached the same conclusions as I did, and I don’t think they’re going to, and I don’t think I’m going to come around to their way of thinking about this stuff either.
"Whether we call that “culture war” or we call it “waging war on culture war by using the modes of culture war” is in the end, I think, semantic."
Just creating another culture in the war between cultures.
Reminds me of XKCD #927: https://xkcd.com/927/
Also, being of the Libertarian bent myself, which feels like something of a "culture trying to supplant two dominant cultures," I’d note that such a transcendent culture is met with as much hostility from both existing cultures, if not more. Don’t be surprised if the Game A players put aside their differences to nip Game B in the bud before resuming the already scheduled culture war.
I am intrigued by your ideas and would like to subscribe to your newsletter.
Oh, hey! Look at that! How convenient. 🤪