One way to convert these probability estimates into something actionable is to turn them into time estimates: how much time we have to find a solution for AI Safety. That depends on the shape of the probability curve and on our acceptable risk threshold.
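For concreteness, here is a minimal sketch of that conversion, assuming a purely illustrative cumulative probability curve and a chosen risk tolerance (neither is anyone's actual forecast):

```python
# Minimal sketch: turn a cumulative probability curve for "the event happens by year t"
# into a time budget, given the level of risk we are willing to accept.
# The curve below is purely illustrative, not anyone's actual forecast.

cumulative_p = {  # year -> P(event happens by end of that year)
    2026: 0.05,
    2028: 0.15,
    2030: 0.30,
    2035: 0.55,
    2040: 0.75,
}

def time_budget(curve: dict[int, float], acceptable_risk: float, now: int = 2025) -> int | None:
    """Return how many years remain before cumulative risk exceeds the acceptable level."""
    for year in sorted(curve):
        if curve[year] > acceptable_risk:
            return year - now
    return None  # risk never crosses the threshold within the curve's horizon

print(time_budget(cumulative_p, acceptable_risk=0.10))  # -> 3  (crosses 10% by 2028)
print(time_budget(cumulative_p, acceptable_risk=0.50))  # -> 10 (crosses 50% by 2035)
```

The same curve gives very different time budgets depending on how much risk we tolerate, which is why both the shape of the curve and the threshold matter.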
I would agree that pdfs are nice, but I am not sure my action space has meaningful wiggle room to appreciate the bean-counting around proxy indicators of AI safety markers...
If it were the case that a thing not happening by the end of 2028 would allow every reasonable person to say "we are basically fine for the next century", then I would track the estimates with more curiosity.
But if it's about one year shorter or longer until doom, and no one will stop the race towards AGI if the predicted thing happens, while no one (else) will stop worrying about x-risk if the predicted thing doesn't happen, then I don't really care about that prediction ¯\_(ツ)_/¯
(There are people whose job is to care even about the small bumps, so I'm not saying it's not useful for anyone, but if there were a prediction market for "Anthropic will have literally zero human employees by the end of 2028", I would NOT bet on it using the Kelly criterion downstream of either 15% or 30% of some abstract technical probability reported by someone who is into timeline predictions; I would just say "nope, I am not into sports betting, thank you".)
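As an illustration of why that 15% vs. 30% gap would matter to someone who does bet, here is a minimal sketch of the standard Kelly fraction for a binary bet; the market price is made up for illustration:

```python
# Minimal sketch of the Kelly criterion for a binary bet, to show how much
# the "optimal" stake swings between a 15% and a 30% probability estimate.
# The market price below is made up for illustration.

def kelly_fraction(p: float, price: float) -> float:
    """Fraction of bankroll to stake on YES at a given market price, per Kelly."""
    b = (1 - price) / price           # net odds: profit per unit staked if YES resolves
    q = 1 - p
    return max(0.0, (b * p - q) / b)  # never bet if the edge is negative

for p in (0.15, 0.30):
    print(f"p={p:.0%}: stake {kelly_fraction(p, price=0.10):.1%} of bankroll")
# p=15%: stake 5.6% of bankroll
# p=30%: stake 22.2% of bankroll
```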
Yes, P(doom) is meaningless until we have some idea of how it can be changed. If P(doom) were an absolutely fixed probability, we could just ignore it.
If we have a timing estimate, small changes in it are meaningless, but order-of-magnitude changes have implications for how I spend my remaining life.