Also, it is an extremely strong claim to know which of your beliefs would change upon encounter with a provably correct AGI that provably implements your values. If you really knew of such beliefs, you would have already changed them.
Indeed. Surely, you should think that if we were smarter, wiser, and kinder, we would maximize paperclips.