I think an interesting lens here is to focus on the meta/structural level. Essentially we need good governance. As you touch on here:
“we need a process with an extreme amount of power and wisdom”
It is not immediately obvious to me that the required level of power will be way out of proportion to the level of wisdom available, if we can find some way of coupling wisdom and intelligence as priorities for models. I don’t really know of anyone doing “wisdom evals”, though; maybe RLHF is, in some very weak sense, testing for wisdom.
I guess my main point here is that an “extreme amount of wisdom” as a prerequisite might not be a deal-breaker when we are already talking about manufacturing extreme amounts of power and intelligence.
Hanson says, “So… you’re against anything ever changing?”
This is obviously an oversimplification. I have two main objections.
First, all you can really say is that the rate of change must decrease a lot. That sounds a lot more palatable if the overall quality of life is high. (Let’s say you must simulate proposed changes for an aeon in a digital world inhabited by real people before implementing anything in the real world.)
Second, changes where? We’ll inhabit rich digital worlds according to the whims of our imaginations. It’s just the “real” world where change would slow. Maybe this is completely fine?
Even if we ignore the above and take Hanson’s objection at face value, if we are talking about inhabiting the Heavenly Abodes, then I might be persuaded to bite the bullet and give up any future changes. “So… you want to destroy heaven and replace it with something you predict might be better?” is a frame that isn’t obviously wrong to me.