I think you missed a key point, which is that the narrator is Aineko, and Aineko is not a cat. Aineko is an sAI that has figured out that humans are more easily interacted with (and manipulated) if you look like a toy or a pet than if you look like a Dalek. Aineko is not benevolent, and the human “survivors” in the final chapter aren’t even themselves: they’re simulations Aineko is running for its own reasons.
Oh, huh. Hello! Didn’t expect you to pop up here!

Yeah, I indeed didn’t get that (or maybe forgot); that is interesting. I’ll keep it in mind on my current re-read.
I see where you’re coming from with the “recreate Christianity” thing. I’m curious what you think it’d look like to actually try to model what the future might look like, and prepare for it in some kind of sensible way, without it feeling that way?
Also just curious what your actual best guesses are for how things are likely to play out, now that AI is showing up more prominently in real life.
(I vaguely recall an interview where you didn’t really like talking with rationalist-types about this sort of thing, so no worries if you don’t want to get into any of that. But, well, you did show up, so I figured I’d ask.)
>I see where you’re coming from with the “recreate Christianity” thing. I’m curious what you think it’d look like to actually try to model what the future might look like, and prepare for it in some kind of sensible way, without it feeling that way?
Well, I’m something of a skeptic. (And what we’re seeing today is definitely not actual intelligence-in-a-box; it’s just a hype bubble being inflated by the usual Silicon Valley grifters to keep the dollars flowing in from the credulous. I’m pretty certain it’s going to burst in the next few months.)
I’m currently working on a far-future/space opera novel which asks, basically: what if there is no singularity, no mind uploading, no simulation afterlife, and no real route to sAIs (at least, no routes accessible to human-grade intelligences), but (a) we get a mechanism for FTL expansion (this is a necessary hand-wave; otherwise I don’t have a space opera, I have a bucket of crabs trapped on a single planet), and (b) TESCREAL turns out to be a design pattern for successful evangelical religions among technological civilizations? (There are holy wars. Boy, are there holy wars!)
It’s a little overdue—I began it in 2015, then real life got in the way, repeatedly—but hopefully it’ll be published in the next 2-3 years (the wheels of trade fiction publishing grind slow).
Gotcha. Well we’ll see how the next year or so goes.
(I do agree we’re in a bubble, and also that there’s something shallow about how the current AIs accomplish most of their problem-solving tasks. But it seems to me like all the pieces are there for RL training on diverse problem-solving to take it from here. And, like, the dotcom bubble crashed, but that didn’t stop the internet from dominating the market later anyway.)
But anyway, thanks for clarifying that stuff about Accelerando, and welcome to LessWrong! (It sounds like you’d mostly find AI discourse on LW aggravating, but FYI: you can click the gear icon at the top of the posts page and set various tag-topics to “hidden” or “reduced”, or boost anything else that’s interesting to you.)