“I think that Eliezer is dangerous, because he thinks he’s smart enough to make a safe AI.”
As far as I can tell, he’s not going to actually build that AI until he has a formal proof that it will be safe. Now, because of the verification problem, that’s no surefire guarantee of safety, but it makes me pretty comfortable.