We can’t apply the frequentist approach to probability to one-time events, but as Bayesians we could try to bet. (Betting doesn’t work well here either, since nobody will be around to collect.) I prefer to define the probability of a remote future event as “the share of runs of our best world model that result in a given outcome”, since that estimate can be updated as we improve our world model.
My current bet:
P(Benevolent superintelligence|superintelligence) = 0.25
How I got it: more or less gut feeling, a value that seemed neither too high nor too low. I know this is the wrong approach, but I hope to improve the estimate once I have a better world model.
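The "share of runs" definition can be sketched as a Monte Carlo estimate over a world model. The toy model below is purely illustrative: its branch probabilities (0.8 for superintelligence arising, 0.25 for benevolence given superintelligence) are assumptions made up for the sketch, not claims from the text, and the real work would be in building a serious world model to sample from.

```python
import random

def world_model_run(rng: random.Random) -> str:
    """One run of a deliberately toy 'world model'.

    The branch probabilities are illustrative assumptions only.
    """
    if rng.random() < 0.8:       # assumed: superintelligence arises
        if rng.random() < 0.25:  # assumed: it turns out benevolent
            return "benevolent superintelligence"
        return "other superintelligence"
    return "no superintelligence"

def p_benevolent_given_si(n_runs: int = 100_000, seed: int = 0) -> float:
    """Estimate P(benevolent SI | SI) as a share of model runs.

    Conditioning means we only count runs where SI arises at all.
    """
    rng = random.Random(seed)
    si_runs = 0
    benevolent_runs = 0
    for _ in range(n_runs):
        outcome = world_model_run(rng)
        if outcome != "no superintelligence":
            si_runs += 1
            if outcome == "benevolent superintelligence":
                benevolent_runs += 1
    return benevolent_runs / si_runs
```

Updating the estimate as the world model improves then just means swapping in a better `world_model_run` and re-running the count.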