This is all well and good if we develop such entities in 2100, and we do so by using elaborate simulations that, in practice, amount to something close to emulations of humans. But if a) AGIs aren't developed to be similar to humans, or b) AGIs are developed before we reach the tech level you describe, then the concerns still exist. This really amounts to just a plausibility argument that things might not be so bad if we develop some forms of AI in 150 years rather than in 30 or 40.
Yes, it was not intended to claim that there are no risks from AI. I believe that even AI that is not on a human level can pose an existential risk. But I do not agree with the stance that all routes lead to our certain demise. We simply don't know enough, and what we do know does not imply that working on AGI will kill us all or that any pathway guarantees extinction. In my opinion, that stance isn't justified right now.
I should have made it clearer that my reply above was more playful than a serious argument. But I still believe that scenarios like these can't be ruled out as mere outliers.