It’s fascinating how Gary Marcus has become one of the most prominent advocates of AI safety, and particularly of what he calls long-term safety, despite being wrong on almost every prediction he has made to date.
I read a tweet that said something to the effect that GOFAI researchers remain the best AI safety researchers, since nothing they did worked out.
Seriously, how did he do that? I think it’s important to understand. Maybe it’s as some people cynically told me years ago: in DC, a good forecasting track record counts for less than a piece of toilet paper? Maybe it’s worse than that, and being active on Twitter counts for a lot? Before I cave to cynicism, I’d love to hear other takes.
It must be said that he was quite a notable/influential person before this, I think?
He is a student of Chomsky’s and knows a lot of the big public intellectuals. He’s had a lot of time to build up a reputation.
But yeah, I agree it’s remarkable.