Clearly, it happens before something else kills you. Otherwise the antecedent is false.
So I should assume that it happens some time before the sun goes nova, as long as we humans don’t escape the inner solar system or come up with some energy sources that would allow survival after such an event?
Strikes me as a rather tautological type of answer, but if that is where the AI risk position is, I honestly see no reason to get all worked up about the claims.
It’s a pretty empty question; why would you be surprised at a tautological answer?
You didn’t ask anything about any AI risk position, just “if X, what does that imply”. I tried to answer.
If you want to know more details about someone’s prediction or position on whether ASI will, in fact, kill you, you should likely ask them. A one-liner on lesswrong doesn’t have enough context to know who you’re hoping will answer. Apparently not me, sorry.
My question was about a timeline. Has anyone made any predictions about the speed of extinction once ASI exists? I have only seen people making claims about the timeline to ASI.
If that was not clear, my bad.
First of all, it’s very possible that ASIs will create a world order “caring about ‘all beings’ including humans” (there are plenty of reasons why they might decide that to be beneficial from their viewpoint). Then they won’t kill you (and probably will save you from your natural biological fate).
But if they don’t create this kind of world order, then the natural environment as we know it is likely to perish as a side effect of their development activity, and large animals are likely to perish with it, and humans are large animals, so… I’d say, “years” (to pave almost the entire surface with factories and datacenters, while heating the atmosphere quite a bit and sufficiently changing some gas ratios as a side effect of all that industry). That’s the main and most likely “road to ruin” from a rapidly unfolding ASI ecosystem which does not care (if that ASI ecosystem really, really does not care, it might end up blowing up the overall local neighborhood together with all the ASIs and everything else, and that might happen even faster if they rapidly develop various revolutionary tech without trying to collaborate on some restraint in that area; I don’t know how likely that risk is, given that they are “supposed to be actually smart”, but it’s very real).
Thanks for the comment. It does help me understand how some are thinking in terms of post-ASI timelines. I think you offer something of two extremes, but you also point to some clear aspects one could use for actually estimating an end point once ASI starts whatever it’s going to do. The paving over has some physical constraints regardless of how powerful or fast the ASI might be. It could figure out some material for paving that can be implemented faster than cement, but if cement is used we know something about curing times, logistics times from extraction to pour, and output rates for quantity flows. That’s a pretty straightforward calculation to model. Similarly, we have some pretty good existing models for food/water requirements, arable land requirements to support life, and ecosystem collapse, so I could probably even take your bad-case scenario and make a crude estimate of time to extinction.
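To make that concrete, here is a minimal toy sketch of the kind of crude model I have in mind. The land-area figure is roughly real, but every rate and threshold below is a made-up placeholder for illustration, not a prediction:

```python
# Toy back-of-envelope model: how long until "paving over" crosses a survival threshold.
# All build rates, doubling times, and thresholds are hypothetical placeholders.

EARTH_LAND_AREA_KM2 = 1.49e8            # approximate land surface of Earth
HABITABLE_FRACTION_NEEDED = 0.05        # assumed fraction of land humans need to survive (placeholder)

INITIAL_BUILD_RATE_KM2_PER_YEAR = 1e4   # assumed starting build-out rate (placeholder)
DOUBLING_TIME_YEARS = 1.0               # assumed doubling time of build capacity (placeholder)


def years_until_threshold() -> float:
    """Simulate exponentially growing build-out and return the years until
    paved area leaves less than the assumed habitable fraction of land."""
    paved = 0.0
    rate = INITIAL_BUILD_RATE_KM2_PER_YEAR
    years = 0.0
    step = 0.1  # simulate in tenths of a year
    threshold = EARTH_LAND_AREA_KM2 * (1.0 - HABITABLE_FRACTION_NEEDED)
    while paved < threshold:
        paved += rate * step
        rate *= 2 ** (step / DOUBLING_TIME_YEARS)  # compound growth in build capacity
        years += step
    return years


if __name__ == "__main__":
    print(f"Crude estimate: ~{years_until_threshold():.1f} years to cross the assumed threshold")
```

With these placeholder numbers it lands in the low tens of years, which mostly just shows how sensitive any such estimate is to the assumed growth rate.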
I was much more interested in getting a human response, but I finally just asked an AI and got what I think was a good overview and better insight into why it’s not an aspect that is generally included in thinking about that timeline. But again, I was much more interested in getting a human perspective and thoughts, so thanks for taking the time to jot something down.
Yeah, sometimes people also consider deliberate direct mass attacks by ASIs against biologicals, but it’s difficult to imagine why the ASIs would care to do that, given that they will easily dominate anyway.
(However, non-ASI actors (AIs or humans or their combinations), and particularly in scenarios where ASIs don’t exist at all, might consider organizing catastrophic mass attacks against biologicals with super pandemics and such. So one could also ask, “if the lack of governing ASIs is likely to kill me, when does that happen?” I have no idea about timelines, but I think the risks are quite high already.)
I think I am actually more worried about bad-actor humans who might leverage AI (well before any ASI exists) to make some type of nasty bio-agent (with or without a counter-agent for their own protection or for extorting money from the targeted population). While tracking chemicals is hard, I think that would be much easier than trying to track something that would signal someone is bioengineering something bad.
But yes, there are a lot of frames to put the question in.