Paul Christiano makes a slightly different claim here: https://www.lesswrong.com/posts/7MCqRnZzvszsxgtJi/christiano-cotra-and-yudkowsky-on-ai-progress?commentId=AiNd3hZsKbajTDG2J
As I read them, the two claims are:
1. With GPT-3 + 5 years of effort, a system could be built that would eventually Foom if allowed.
2. With GPT-3 + a serious effort, a system could be built that would clearly Foom if allowed.
I think the second claim could be made into a bet. I tried to operationalise it as a reply to the linked comment.