Thanks, seeing the claims all there together is useful.
The technical assumptions and reasoning all seem intuitive (given the last couple of years of background provided here). The meta-ethic FAI singleton seems like the least evil goal I can imagine, given the circumstances.
A superintelligent FAI, with the reliably stable values you describe, sounds like an impossible goal to achieve. Personally, I assign a significant probability to your failure, either through being too slow to prevent cataclysmic alternatives or through making a fatal mistake. Nevertheless, your effort is heroic. It is fortunate that many things seem impossible right up until the moment someone does them.