burger flipper, OK, let's play the AI-box experiment:
However, before you read on, answer a simple question: if Eliezer announced tomorrow that he had finally solved the FGAI problem and just needed $1,000,000 to build one, would you be willing to donate cash for it?
.
.
.
If you answered yes to the question above, you just let the AI out of the box. How do you know you can trust Eliezer? How do you know he doesn't have evil intentions, or that he hasn't made a mistake in his math? The only way to be 100% sure is to know enough about the specific GAI he is building.
So what do we do now? Should we oppose the singularity? Is the singularity a good idea after all? Who shall we trust with the future of the universe?
Yes, I know, I know: strictly speaking this isn't the AI-box experiment, but still...