>I see where you’re coming from with the “recreate Christianity” thing. I’m curious what you think it’d look like, to be, like, actually trying to model what the future might look like and prepare for it in some kind of sensible way, that didn’t feel that way?
Well, I’m something of a skeptic. (And what we’re seeing today is definitely not actual intelligence-in-a-box, it’s just a hype bubble being inflated by the usual Silicon Valley grifters to keep the dollars flowing in from the credulous: I’m pretty certain it’s going to burst in the next few months.)
I’m currently working on a far-future/space opera novel which asks, basically, what if there is no singularity, no mind uploading, no simulation afterlife, and no real route to sAIs (at least, routes accessible to human-grade intelligences), but (a) we get a mechanism for FTL expansion (this is a necessary hand-wave, or I don’t have a space opera, I have a bucket-of-crabs trapped on a single planet), and (b) TESCREAL turns out to be a design pattern for successful evangelical religions among technological civilizations? (There are holy wars. Boy are there holy wars!)
It’s a little overdue—I began it in 2015, then real life got in the way, repeatedly—but hopefully it’ll be published in the next 2-3 years (the wheels of trade fiction publishing grind slow).
Gotcha. Well we’ll see how the next year or so goes.
(I do agree we’re in a bubble, and also that there is something shallow about how the current AIs accomplish most of their problem-solving tasks. But it seems to me like all the pieces are there for RL training on diverse problem solving to take it from here. And, like, the dotcom bubble crashed, but that doesn’t mean the internet didn’t end up dominating the market later anyway.)
But, anyways, thanks for clarifying that stuff about Accelerando, and welcome to LessWrong! (It sounds like you’d mostly find AI discourse on LW aggravating, but FYI you can click the gear icon at the top of the posts page and set various tag-topics to “hidden” or “reduced”, or adjust anything else that interests you.)
You said you believe the AI bubble is going to burst in the next few months. Could you phrase that as a testable prediction? For example: you’re 80% sure that in 6 months, the stock market price of Nvidia will be 80% of what it is today. This would help me understand your prediction better. Different people have different views on what “pretty certain” and “burst” mean. I just want a clarification; I don’t want to argue about the prediction.