It is if there are considerable roadblocks ahead. I do not see enough evidence to believe that we can be sure that we will be able to quickly develop something that will pose an existential risk. I am of the opinion that even sub-human AI can pose an existential risk. That isn’t what I am trying to depict here. I wanted to argue that there are pathways towards AGI that will not necessarily lead to our extinction.
I am trying to update my estimates by thinking about this topic and provoking feedback from people who believe that current evidence allows us to conclude that AGI research is highly likely to have a catastrophic impact.
Does anyone think that [our extinction is inevitable]? It seems fairly plausible that at least some humans will be kept around for quite a while by a wide range of intelligences on instrumental grounds—in high-tech museum exhibits—what with us being a pivotal stage in evolution and all.
Thanks. I would agree with your position and also make a far stronger claim, particularly with respect to the “pathways towards AGI” detail. I’d possibly say something along the lines of “yeah, all the ones that don’t suck for a start, then a few more that do suck despite our continued existence”.
Mind you I fundamentally disagree with what XiXiDu is trying to say by asking the question. At least if I read this bit correctly:
I do not see enough evidence to believe that we can be sure that we will be able to quickly develop something that will pose an existential risk.
Yes.
Sorry (and edited) - what I meant was more like: does anyone hold the position this argues against?