How long until the earth gets eaten? 10th/50th/90th percentile: 3y, 12y, 37y.
Catastrophes induced by narrow capabilities (notably biotech) can push it further, so this might imply that they probably don’t occur[1]. Also, aligned AI might decide not to; it’s not as nutritious as the Sun anyway.
Will we get to this point by incremental progress that yields smallish improvements (=slow), or by some breakthrough that when scaled up can rush past the human intelligence level very quickly (=fast)?
AI speed advantage makes fast vs. slow ambiguous, because startlingly fast progress doesn’t require AI getting smarter, and might instead be about passing a capability threshold (something like autonomous research) with no distinct breakthroughs leading up to it, just by getting to a slightly higher level of scaling or compute efficiency with the old techniques.
Please make no assumptions about those just because other people with certain models might make similar predictions.
(That’s not a reasonable ask: it intervenes on reasoning in a way that’s not an argument for why it would be mistaken. It’s always possible that a hypothesis doesn’t match reality, but that’s not a reason to refuse to entertain the hypothesis, or not to think through its implications. Even some counterfactuals can be worth considering, when not matching reality is assured from the outset.)
Yeah, you can hypothesize. If you state it publicly though, please make sure to flag it as a hypothesis.
Also not a reasonable ask: friction targeted at a particular thing makes it slightly less convenient, and therefore it stops happening in practice completely. ~Everything is a hypothesis and ~all models are wrong; in each case, language makes whatever distinctions it tends to make in general.
How long until the earth gets eaten? 10th/50th/90th percentile: 3y, 12y, 37y.
Catastrophes induced by narrow capabilities (notably biotech) can push it further, so this might imply that they probably don’t occur.
No, it doesn’t imply this; I gave this disclaimer: “Conditional on no strong governance success that effectively prevents basically all AI progress, and conditional on no huge global catastrophe happening in the meantime:”. Though yeah, I don’t particularly expect those to occur.
The “AI might decide not to” point stands, I think. This represents a change of mind for me: I wouldn’t previously have endorsed this point, but recently I’ve come to think that arbitrary superficial asks like this can become reflectively stable with nontrivial probability, resisting strong cost-benefit arguments even after an intelligence explosion.
ok edited to sun. (i used earth first because i don’t know how long it will take to eat the sun, whereas earth seems likely to be feasible to eat quickly.)
(plausible to me that an aligned AI will still eat the earth but scan all the relevant information out of it and later maybe reconstruct it.)
Will we get to this point by incremental progress that yields smallish improvements (=slow), or by some breakthrough that when scaled up can rush past the human intelligence level very quickly (=fast)?
AI speed advantage makes fast vs. slow ambiguous, because startlingly fast progress doesn’t require AI getting smarter, and might instead be about passing a capability threshold (something like autonomous research) with no distinct breakthroughs leading up to it, just by getting to a slightly higher level of scaling or compute efficiency with the old techniques.
Ok yeah I think my statement is conflating fast-vs-slow with breakthrough-vs-continuous, though I think there’s a correlation.
(I still think fast-vs-slow makes sense as a separate concept and is important.)
[1] There was a “no huge global catastrophe” condition on the prediction that I missed; thanks to Towards_Keeperhood for the correction.
ok thx, edited. thanks for feedback!
Right, I missed this.