yup, nines are a very nice measure of high availability of a system and they can work as a measure of risk delta too (though a thing being 10x more dangerous can sound more salient than the same thing being 1 less nine of safety 🤔)
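As a minimal sketch of the conversion (assuming "risk" here is just a raw failure probability, per Tao's definition), the nines measure is simply the negative base-10 log of the risk:

```python
import math

def nines_of_safety(risk: float) -> float:
    """Tao's 'nines of safety': -log10 of the failure probability.
    10% risk = 1 nine, 1% risk = 2 nines, 0.1% risk = 3 nines."""
    return -math.log10(risk)

# A jump in doom risk from 15% to 30% reads as losing ~0.3 nines:
delta = nines_of_safety(0.30) - nines_of_safety(0.15)  # ~ -0.301
```

Note the symmetry: any doubling of risk costs the same ~0.3 nines, wherever it starts, which is exactly the "risk delta" framing above.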
could you explain how you would use it for predictions, please? if I imagine he had said −0.3 instead of 15%→30%, I can’t imagine that would help my understanding of ryan_greenblat’s world model in any way; the rest of the article (qualitative information with examples and scenarios) is what I found useful...
One way to convert these probability estimates into something actionable is to convert them into time estimates—how much time we have to find a solution for AI Safety. It depends on the shape of the probability curve and our lowest acceptable risk estimate.
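As a toy illustration of that conversion (the constant annual hazard rate is my own simplifying assumption, not anyone's actual forecast): given a cumulative-risk curve and a risk threshold, you can solve for the time at which the threshold is crossed.

```python
import math

def years_until_threshold(annual_hazard: float, acceptable_risk: float) -> float:
    """Years until cumulative risk 1 - (1 - h)^t crosses the threshold,
    assuming (for illustration) a constant annual hazard rate h.
    Solving (1 - h)^t = 1 - r gives t = ln(1 - r) / ln(1 - h)."""
    return math.log(1 - acceptable_risk) / math.log(1 - annual_hazard)

# e.g. a 2%/year hazard crosses a 10% cumulative-risk ceiling in ~5.2 years
t = years_until_threshold(annual_hazard=0.02, acceptable_risk=0.10)
```

A differently shaped curve (front-loaded or back-loaded hazard) would of course give a very different time estimate from the same headline probability—which is the point of the comment above.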
I would agree that pdfs are nice, but I am not sure my action space has meaningful wiggle room to appreciate the small beans counting around proxy indicators of AI safety markers...
If it were the case that a thing not happening by the end of 2028 would allow every reasonable person to say “we are basically fine for the next century”, then I would track the estimates with more curiosity.
But if it’s about doom arriving 1 year sooner or later, and no one will stop the race towards AGI if the predicted thing happens, while also nobody (else) will stop worrying about x-risk if the predicted thing doesn’t happen, then I don’t really care about that prediction ¯\_(ツ)_/¯
(there are people whose job is to care even about the small bumps, so I’m not saying it’s not useful for anyone, but if there were a prediction market for “Anthropic will have literally zero human employees by the end of 2028” I would NOT bet on it using the Kelly criterion downstream of either 15% or 30% of some abstract technical probability reported by someone who is into timeline predictions, I would just say “nope, I am not into sports betting, thank you”)
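For reference, the Kelly sizing the comment declines to use would look like this for a binary prediction market—a standard formula, sketched here purely for context, not something from the thread:

```python
def kelly_fraction(belief: float, market_price: float) -> float:
    """Kelly-optimal fraction of bankroll to stake on a binary contract
    priced at market_price (pays 1 if YES) given your probability belief.
    Standard result: f* = (belief - price) / (1 - price); positive means
    buy YES, negative means the edge is on the NO side."""
    return (belief - market_price) / (1 - market_price)

# believing 30% against a market at 15% implies staking ~17.6% of bankroll
f = kelly_fraction(belief=0.30, market_price=0.15)
```

The formula only makes sense if you actually endorse the probability as a betting credence—which is precisely the stance the comment is refusing to take.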
Terence Tao take is here: https://forum.effectivealtruism.org/posts/L9pB9sWTngF4BcHeL/nines-of-safety-terence-tao-s-proposed-unit-of-measurement
Yes, P(doom) estimates are meaningless until we have some idea of how they can be changed. If P(doom) were an absolutely fixed probability, we could just ignore it.
If we have a timing estimate, small changes in it are meaningless, but order-of-magnitude changes have implications for how I spend my remaining life.