The most commonly discussed FAI won’t have any (human) terminal values built in (except perhaps for bootstrapping); instead, it will examine humans to get a complete understanding of what humans actually value and then optimize for that.
Therefore, whatever we humans currently think might or might not be fun is fairly irrelevant to the FAI, simply because we don’t yet know what our values are. (That is also why I referenced the confusion between terminal and instrumental values in my first comment. I think that pretty much everyone talking about their “values” is only talking about instrumental values, and that properly reduced terminal values are much, much simpler.)