Mostly some AI company employees with shorter timelines than mine. I also think that “why I don’t agree with X” is a good prompt for expressing some deeper aspect of my models and views, and it makes a reasonably engaging hook for a blog post.
I might write some posts responding to arguments for longer timelines that I disagree with, if I feel I have something interesting to say.
My case against long timelines rests on waiting for algorithmic breakthroughs, which Kokotajlo, as of July 28, believed to have a chance of “maybe like 8%/yr”. Seth Herd replied to my case as follows: “You estimate c by looking at how many breakthroughs we’ve had in AI per person year so far. That’s where the 8% per year comes from. It seems low to me with the large influx of people working on AI (italics mine—S.K.), but I’m sure Daniel’s math makes sense given his estimate of breakthroughs to date.”
I didn’t interview any AI company employees, but I conjecture that they are overconfident about their ability to make such breakthroughs.