Yes, I did not intend to claim that there are no risks from AI. I believe that even AI below human level can pose an existential risk. But I disagree with the stance that all routes lead to our certain demise. We simply don’t know enough, and what we do know does not imply that working on AGI will kill us all, or that any pathway guarantees extinction. In my opinion, that stance isn’t justified right now.
I should have made it clearer that my reply above was more playful than a serious argument. But I still believe it is possible that similar scenarios are just outliers.