I’m arguing that we won’t be fine. History doesn’t help with that, it’s littered with examples of societies that thought they would be fine. An example I always mention is enclosures in England, where the elite deliberately impoverished most of the country to enrich themselves.
Is the idea here that England didn’t do “fine” after enclosures? But in the century following the most aggressive legislative pushes toward enclosure (roughly 1760–1830), England led the Industrial Revolution, which brought large, durable increases in standards of living for the first time in world history, and for all social classes, not just the elite. Enclosure likely played a major role in the rise of English agricultural productivity, which created unprecedented food abundance.
It’s true that not everyone benefitted from these reforms: inequality increased, and a lot of people became worse off from enclosure, especially in the short term, during the so-called Engels’ pause. But on the whole, I don’t see how your example demonstrates your point. If anything, your example proves the opposite.
Peasant society and its way of life were destroyed. Those who resisted were killed by the government. The masses of people who could once live off the land were turned into poor landless workers, and most of them stayed poor landless workers until they died.
Yes, later things got better for other people. But my phrase wasn’t “nobody will be fine ever after”. My phrase was “we won’t be fine”. The peasants liked some things about their society. Think about some things you like about today’s society. The elite, enabled by AI, can take these things from you if they find it profitable. Roko says it’s impossible, I say it’s possible and likely.
No, I think that is quite plausible.
But note that we have moved a very long way from “AIs versus humans, like in Terminator” to “existing human elites using AI to harm plebeians”. Those are not even remotely the same thing.
Yeah, I don’t think it’ll be like Terminator. In my first comment I said “dispossess all baseline humans”, but I should’ve said “most”.
That’s just run-of-the-mill history though.
I’m not sure Roko is arguing that it’s impossible for capitalist structures and reforms to make a lot of people worse off. That seems like a strawman to me. The usual argument here is that such reforms are typically net-positive: they create a lot more winners than losers. Your story here emphasizes the losers, but if the reforms were indeed net-positive, we could just as easily emphasize the winners who outnumber the losers.
In general, literally any policy that harms people in some way will look bad if you focus solely on the negatives and ignore the positives.
It’s indeed possible that, in keeping with historical trends of capitalism, the growth of AI will create a lot more winners than losers. For example, a trillion AIs and a handful of humans could become winners, while most humans become losers. That’s exactly the scenario I’ve been talking about in this thread, and it doesn’t feel reassuring to me. How about you?
Exactly. It’s possible and indeed happens frequently.
As the original post mentioned, the Industrial Revolution wasn’t very good for horses.
I recognize that. But it seems kind of lame to respond to a critique of an analogy by simply falling back on another, separate analogy. (Though I’m not totally sure if that’s your intention here.)