Also, I think you said on Twitter that Eliezer’s a liar unless he generates some AI prediction that lets us easily falsify his views in the near future? Which seems to require that he have very narrow confidence intervals about very near-term events in AI.
So I continue to not understand what it is about the claims ‘the median on my AGI timeline is well before 2050’, ‘Metaculus updated away from 2050 after I publicly predicted it was well before 2050’, or ‘hard takeoff is true with very high probability’, that makes you think someone must have very narrow contra-mainstream distributions on near-term narrow-AI events or else they’re lying.