source link: https://www.notboring.co/p/narrative-tug-of-war

Narrative Tug-of-War

How to read EA v. e/acc

Welcome to the 408 newly Not Boring people who have joined us since last week! If you haven’t subscribed, join 216,703 smart, curious folks by subscribing here:


The sourcing tool for data driven VCs

Harmonic AI is the startup discovery tool trusted by VCs and sales teams in search of breakout companies. It’s like if Crunchbase or CB Insights were built today, without a bunch of punitive paywalls. Accel, YC, Brex and hundreds more use Harmonic to:

  • Discover new startups in any sector, geography, or stage, including stealth.

  • Track companies’ performance with insights on fundraising, hiring, web traffic, and more.

  • Monitor their networks for the next generation of founders.

Whether you're an investor or GTM leader, Harmonic is just one of those high-ROI no-brainers to have in your stack. 

Find your next deal on Harmonic!

Hi friends 👋 ,

Happy Tuesday! Hope you all had a great Thanksgiving (or enjoyed the peace and quiet while we Americans were in turkey comas).

Apologies that this is a little late — once again, the newsletter gods dropped a perfect example of the point I was trying to make in my lap at the last minute, and I’ve been up since 5:30 trying to incorporate it.

We live in a time of extreme narratives. It’s easy to get caught up and worked up when you take the extremes in isolation. Don’t. They’re part of a bigger game, and once you see it, the world makes a lot more sense.

Let’s get to it.

Narrative Tug-of-War


One of the biggest changes to how I see the world over the past year or so is viewing ideological debates as games of narrative tug-of-war.

For every narrative, there is an equal and opposite narrative. It’s practically predetermined, cultural physics. 

One side pulls hard to its extreme, and the other pulls back to its own. 

AI is going to kill us all ←→ AI is going to save the world. 

What starts as a minor disagreement gets amplified into completely opposing worldviews. What starts as a nuanced conversation gets boiled down to catchphrases. Those who start as your opponents become your enemies. 

It’s easy to get worked up if you focus on the extremes, on the teams tugging the rope on each side. It’s certainly easy to nitpick everything they say and point out all of the things they missed or left out. 

Don’t. Focus on the knot in the middle. 


That knot, moving back and forth over the center line as each team tries to pull it further to their own side, is the important thing to watch. That’s the emergent synthesis of the ideas, and where they translate into policy and action. 

There’s this concept called the Overton Window: the range of policies or ideas that are politically acceptable at any given time. 


Since Joseph Overton came up with the idea in the mid-1990s, the concept has expanded beyond government policy. Now, it’s used to describe how ideas enter the mainstream conversation where they influence public opinion, societal norms, and institutional practices. 

The Overton Window is the knot in the narrative tug-of-war. The teams pulling on either side don’t actually expect that everyone will agree with and adopt their ideas; they just need to pull hard enough that the Overton Window shifts in their direction. 

Another way to think about it is price anchoring: a company offers multiple price tiers knowing that you’ll land on the one in the middle and pay more for it than you would have without seeing how little you get at the lower price or how much you’d have to pay to get all of the features.


No one expects you to pay $7,000 for the Super Pro tier (although they’d be happy if you did). They just know that showing it to you makes paying $69 for the Pro tier more palatable.

The same thing happens with narratives, but instead of one company carefully setting prices to maximize the likelihood that you buy the Pro tier, independent and opposed teams, often made up of people who’ve never met, loosely coordinated through group chats and memes, somehow figure out how to pull hard enough that they move the knot back to what they view as an acceptable place. It’s a kind of cultural magic when you think about it. 

There are a lot of examples I could use to illustrate the idea, many of which could get me in trouble, so I’ll stick to what I know: tech. Specifically, degrowth vs. growth, or EA vs. e/acc. 

EA vs. e/acc

One of the biggest debates in my corner of Twitter, which burst out into the world with this month’s OpenAI drama, is Effective Altruism (EA) vs. Effective Accelerationism (e/acc). 

It’s the latest manifestation of an age-old struggle between those who believe we should grow, and those who don’t, and the perfect case study through which to explore the narrative tug-of-war. 

If you look at either side in isolation, both views seem extreme. 

EA (which I’m using as a shorthand for the AI-risk team) believes that there is a very good chance that AI is going to kill all of us. Given that there will be trillions of humans in the coming millennia, even if there’s a 1% chance AI will kill us all, preventing that from happening will save tens or hundreds of billions of expected lives. We need to stop AI development before we get to AGI, whatever the cost.
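
To make the expected-value math concrete (a back-of-the-envelope sketch using round numbers of my own, not figures from any EA text): with a trillion future humans and a 1% chance of AI-driven extinction, eliminating that risk is worth

0.01 × 1,000,000,000,000 lives = 10,000,000,000 expected lives saved

Ten billion, which is where the “tens or hundreds of billions” framing comes from; raise the population or probability estimate and the number balloons accordingly.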

As that team’s captain, Eliezer Yudkowsky, wrote in Time:

Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

The idea that we should bomb datacenters to prevent the development of AI, taken in a vacuum, is absurd, as many of AI’s supporters were quick to point out. 

e/acc (which I’m using as a shorthand for the pro-AI team) believes that AI won’t kill us all and that we should do whatever we can to accelerate it. They believe that technology is good, capitalism is good, and that the combination of the two, the techno-capital machine, is the “engine of perpetual material creation, growth, and abundance.” We need to protect the techno-capital machine at all costs.

Marc Andreessen, who rocks “e/acc” in his Twitter bio, recently wrote The Techno-Optimist Manifesto, in which he makes the case for essentially unchecked technological progress. One section in particular drew the ire of AI’s opponents:

We have enemies.

Our enemies are not bad people – but rather bad ideas.

Our present society has been subjected to a mass demoralization campaign for six decades – against technology and against life – under varying names like “existential risk”, “sustainability”, “ESG”, “Sustainable Development Goals”, “social responsibility”, “stakeholder capitalism”, “Precautionary Principle”, “trust and safety”, “tech ethics”, “risk management”, “de-growth”, “the limits of growth”.

This demoralization campaign is based on bad ideas of the past – zombie ideas, many derived from Communism, disastrous then and now – that have refused to die.

The idea that things most people view as good – like sustainability, ethics, and risk management – are the enemy, taken in a vacuum, seems absurd, as many journalists and bloggers were quick to point out.

What critics of both pieces missed is that neither argument should be taken in a vacuum. Nuance isn’t the point of any one specific argument. You pull the edges hard so that nuance can emerge in the middle.

While there are people on both teams who support their side’s most radical views – a complete AI shutdown on one side, unchecked techno-capital growth on the other – what’s really happening is a game of narrative tug-of-war in which the knot is regulation.

EA would like to see AI regulated, and would like to be the ones who write the regulation. e/acc would like to see AI remain open and not controlled by any one group, be it a government or a company. 

One side tugs by warning that AI Will Kill Us All in order to scare the public and the government into hasty regulation; the other side tugs back by arguing that AI Will Save the World to stave off regulation for long enough that people can experience its benefits firsthand.

Personally, and unsurprisingly, I’m on the side of the techno-optimists. That doesn’t mean that I believe that technology is a panacea, or that there aren’t real concerns that need to be addressed. 

It means that I believe that growth is better than stagnation, that problems have solutions, that history shows that both technological progress and capitalism have improved humans’ standard of living, and that bad regulation is a bigger risk than no regulation. 

While the world shifts based on narrative tug-of-wars, there is also truth, or at least fact patterns. Doomers – from Malthus to Ehrlich – continue to be proven wrong, but fear sells, and as a result, the mainstream narrative continues to lean anti-tech. The risk is that restrictive regulation gets put in place before the truth can emerge.

Because the thing about this game of narrative tug-of-war is that it’s not a fair one. 

The anti-growth side needs only to pull hard and long enough to get regulation enacted. Once it’s in place, it’s hard to overturn; typically, it ratchets up. Nuclear energy is a clear example.

If they can pull the knot over the regulation line, they win, game over. 

The pro-growth side has to keep pulling for long enough for the truth to emerge in spite of all the messiness that comes with any new technology, for entrepreneurs to build products that prove out the promise, and for creative humans to devise solutions that address concerns without neutering progress. 

They need to keep the tug-of-war going long enough for solutions to emerge in the middle. 

Yesterday, Ethereum co-founder Vitalik Buterin wrote a piece called My techno-optimism in which he proposed one such solution: d/acc. 

The “d,” he wrote, “can stand for many things; particularly, defense, decentralization, democracy and differential.” It means using technology to develop AI in a way that protects against potential pitfalls and prioritizes human flourishing. 

On one side, the AI safety movement pulls with the message: “you should just stop.” 

On the other, e/acc says, “you’re already a hero just the way you are.” 

Vitalik proposes d/acc as a third, middle way: 

A d/acc message, one that says "you should build, and build profitable things, but be much more selective and intentional in making sure you are building things that help you and humanity thrive", may be a winner.

It’s a synthesis, one he argues can appeal to people whatever their philosophy (as long as the philosophy isn’t “regulate the technology to smithereens”): 

[Chart: Vitalik Buterin, My techno-optimism]

Without EA and e/acc pulling on both extremes, there may not have been room in the middle for Vitalik’s d/acc. The extremes, lacking nuance themselves, create the space for nuance to emerge in the middle.

If EA wins, and regulation halts progress or concentrates it into the hands of a few companies, that room no longer exists. If the goal is to regulate, there’s no room for a solution that doesn’t involve regulation. 

But if the goal is human flourishing, there’s plenty of room for solutions. Keeping that room open is the point. 

Despite the fact that Vitalik explicitly disagrees with pieces of e/acc, both Marc Andreessen and e/acc’s pseudonymous co-founder Beff Jezos shared Vitalik’s post. That’s a hint that they care less about their own solution winning than about a good solution emerging.

Whether d/acc is the answer or not, it captures the point of tugging on the extremes beautifully. Only once e/acc set the outer boundary could a solution that involves merging humans and AI through Neuralinks be viewed as a sensible, moderate take. Ray Kurzweil made that point a couple of decades ago and has the arrows to prove it.

In this and other narrative tug-of-wars, the extremes serve a purpose, but they are not the purpose. For every EA, there is an equal and opposite e/acc. As long as the game continues, solutions can emerge from that tension. 

Don’t focus on the tuggers, focus on the knot.

Thanks to Dan for editing!

That’s all for today. We’ll be back in your inbox with the Weekly Dose on Friday!

Thanks for reading,

Packy

