But the real problem we face is how to build, or become, a superintelligence that shares our values. Given that this seems very difficult, any progress that doesn’t contribute to the solution but brings forward the date by which we must solve it (or be stuck with something very suboptimal, even if it doesn’t kill us) is bad.
I still don’t see how you can solve a problem you know virtually nothing about. I think it will take real progress toward dangerous AGI before anyone can make it safe.

I just don’t see how someone could have designed provably secure online banking software before the advent of the Internet.
This post should answer your questions. Let me know if it doesn’t.