Tim, that’s an obvious problem, but it doesn’t mean I can magically conjure quantitative predictions out of thin air. If I don’t know when AI will go self-improving, should I pretend that I do?