I’m a lot more worried about making an FAI behave correctly if it encounters a scenario that we thought was very, very unlikely.
Also, if the AI spreads widely and is around for a long time, it will eventually run into very unlikely scenarios. Not 1/3^^^3 unlikely, but pretty unlikely.