Ah, if things go well, it will be an amazing opportunity to find out how much of our minds is ultimately motivated by fear. Suppose you were effectively immortal and other people couldn’t hurt you: what would you do? Would you still want to learn? Would you bother keeping friends? Or would you just simulate a million kinds of experience, then get bored and decide to die or wirehead yourself?
I think I want to know the answer. If it kills me, so be it… the universe didn’t have a better plan for me anyway.
It would probably be nicer to take things slowly. Stop death and pain, and then let people slowly figure out everything else. That would keep a lot of life normal. The question is whether we could coordinate on that, because it would be tempting to cheat. If we all voluntarily slowed down and tried to do “life as normal, but without pain”, a little bit of cheating (“hey AI, give me 20 extra IQ points and all university-level knowledge as of 2023, but don’t tell anyone; otherwise give me life as normal, but without pain”) would keep a lot of the benefits of life as normal, but also give one a relative advantage and higher status. It’s not even necessary to give me all the knowledge; just make me run 10 times faster when no one is looking, and I will study it myself.
Maybe humanity will split into different bubbles, depending on how fast they want to take it, with AI keeping the boundaries between them, so that the slower ones are protected from interference by the faster ones.
Probably the difficult choice will be whether to keep our relative disadvantages, especially if they could be fixed just by asking the AI nicely. We could probably agree on “AI, don’t tell us the Theory of Everything; we want to figure it out ourselves, and now we have enough time to do so”, or at least make the rule that anyone who wants to hear the answer from the AI is allowed to, but is then prevented from giving spoilers to others. But it would feel unfair, e.g., to require people with low IQ to stay that way. However, the more differences we remove, the less sense remains for the division of labor, which seems like an important part of our relationships. (Not just literally “labor”, but even things like “this person typically makes the jokes, because they are better at making jokes; this person typically says something empathetic; and this person typically makes the decision when the others hesitate...”.)
Why would it be desirable to maintain this kind of ‘division of labor’ in an ideal future?
Maybe it won’t be desirable, but it seems to me that people today build a lot of their interactions around it.
(For example, I read Astral Codex Ten, because Scott is much better at writing than me.)