I tried to just strong-downvote this and move on, but I couldn’t. It’s just too bad in too many ways, and from its scores it seems to be affecting too many people.
> a fine example of thinking you get when smart people do evil things and their minds come up with smart justifications why they are the heroes
This is ad hominem in a nasty tone.
> Upon closer examination it ignores key inconvenient considerations; normative part sounds like misleading PR.
Et tu quoque? Look at this next bit:
> A major hole in the “complete technological determinism” argument is that it completely denies agency, or even the possibility that how agency operates at larger scales could change. Sure, humanity is not currently a very coordinated agent. But the trendline also points toward the ascent of an intentional stance. An intentional civilization would, of course, be able to navigate the tech tree. (For a completely opposite argument about the very high chance of a “choice transition,” check https://strangecities.substack.com/p/the-choice-transition).
Maybe “how agency operates at larger scales could change”. I doubt it, and I think your “trendline” is entirely wishful thinking.
But even if it can change, and even if that trendline does exist, you’re talking about a change that is, at best, uncertain and 100 to 500 years away. You seem to be relying on that to deal with a 10-to-50-year problem. The civilization we have now isn’t capable of delaying insert-AI-consequence-here long enough for this “intentional” civilization to arise.
If the people you’re complaining about are saying “Let’s just build this and, what the heck, everything could turn out all right”, then you are equally saying “Let’s just hope some software gives us an Intentional Civilization, and what the heck, maybe we can delay this onrushing locomotive until we have one”.
As for “complete technological determinism”, that’s a mighty scary label you have there, but you’re still basically just name-calling.
> On one side are people trying to empower humanity by building coordination technology and human-empowering AI.
Who? What “coordination technology”? How exactly is this “human-empowering AI” supposed to work?
As far as I can see, that’s no more advanced, and even less likely to be feasible, than “friendly godlike ASI”. And even if you had it, humans would still have to adapt to it, at human speeds.
This is supposed to give you an “intentional civilization” in time? I’m sorry, but that’s not plausible at all. It’s even less plausible than the idea that everything will just turn out All Right by itself.
… and that plan seems to be the only actual substance you’re offering.
> On the other side are those working to create human-disempowering technology and render human labor worthless as fast as possible.
This appears to assume that human labor should have value, which I take to mean that it should be rewarded somehow: that performing such labor should accrue some advantage beyond having performed the labor itself… which in turn implies that people who do not perform such labor should be at a comparative disadvantage.
… meaning that other people have to work, on pain of punishment, to provide you and those who agree with you with some inchoately described sense of value.
If we’re going to name-call ideas, that one sounds uncomfortably close to slavery.
It also seems to assume that not having to work is “disempowering”, which is, um, strange; that being “disempowered” (in whatever unspecified way) is bad, which isn’t a given; and that most people aren’t already “disempowered” right now, which would demand a very odd definition of what it means to be “disempowered”.
… and the rest is just more ad hominem.