And pinning my hopes on AI wouldn’t work very well anymore anyway, since AI now seems more likely to me to lead to dystopian outcomes than utopian ones.
Do you mean s-risks, x-risks, age of em style future, stagnation, or mainstream dystopic futures?
I am suspicious about claims of this sort. It sounds like a case of “x is an illusion. Therefore, the pre-formal things leading to me reifying x are fake too.” Yeah, A and B are the same brightness. No, they’re not the same visually.
I.e. I am claiming that you are probably making the same mistake people make when they say “there’s no such thing as free will”. They are correct that you aren’t “transcendentally free”, but they make the mistake of treating the feelings which generated that confused statement as themselves confused, instead of as just another part of the world. I just suspect you’re doing a much more sophisticated version of this.
Or maybe I’m misunderstanding you. That’s also quite likely.
EDIT: I just read the Wikipedia page you linked to. I was misunderstanding you. Now, I think you are making an invalid inference: from our state of knowledge of “Relation R” (i.e. psychological connectedness), which is quite rough and surprisingly far-reaching, to a coarsening of it that preserves only which states are connected. Naturally, you then notice “oh, R is a very indiscriminating relationship” and impose your abstract model of R over R itself, reducing your fear of death.
EDIT 2: Note I’m not disputing your choice not to identify as a transhumanist. That’s your choice and is valid. I’m just disputing the argument for worrying less about death that I think you’re referencing.
Do you mean s-risks, x-risks, age of em style future, stagnation, or mainstream dystopic futures?
“All of the above”—I don’t know exactly which outcome to expect, but most of them feel bad and there seem to be very few routes to actual good outcomes. If I had to pick one, “What failure looks like” seems intuitively most probable, as it seems to require little else than current trends continuing.
I am suspicious about claims of this sort. It sounds like a case of “x is an illusion. Therefore, the pre-formal things leading to me reifying x are fake too.”
That sounds like a reasonable thing to be suspicious about! I should possibly also have linked my take on the self as a narrative construct.
Though I don’t think that I’m saying the pre-formal things are fake. To my mind, that would correspond to saying something like “there’s no lasting personal identity, so there’s no reason to do things that make you better off in the future”. I’m clearly doing things that will make me better off in the future. I just feel less continuity with the version of me who might be alive fifty years from now, so the thought of him dying of old age doesn’t create the same sense of visceral fear. (Even though I would still prefer him to live hundreds of years, if that were doable in non-dystopian conditions.)
You picked an interesting example of an optical illusion, as I maintain that it isn’t one. As noted in the linked comment thread, this can be analogized to philosophical/psychological questions (like the one in the OP)…
What would be an example of an optical illusion then?