What if the effect of AGI development would be our reform instead of our extinction?
The burden is to prove not only that 'some' AGI development would be good for humanity (reforming, to use your words), but that no AGI development can possibly lead to extinction. If someone creates a reforming AI today, and then the next day someone else creates an evil AI, we will probably still all die.