I don’t think that wedrifid made those remarks to save face or the like, since wedrifid is the individual who proposed both thermite and the light cone option. The light cone option was clearly humorous, and then wedrifid explained how it would work (for some value of work). If I am reading this correctly, there was no serious intent in that proposal at all; it served to emphasize that wedrifid sees destruction as the only viable response.
Thank you, Joshua. I was going to let myself have too much fun with my reply, so it is good that you beat me to it. I’ll allow myself to add two responses, however.
It looks like you fell into a failure mode of defending what would otherwise have been a throwaway statement—probably to preserve the appearance of consistency, the desire not to be wrong in public, etc. (Do we have a list of these somewhere?)
The relevant failure mode here is “other optimising”.
A better response to “I hope that was a joke...” than “You are mistaken” would be “Yeah. It was hyperbole for effect.” or something along those lines.
No, no, no. That would be wrong, inasmuch as it is accepting a false claim about physics! Direct contradiction is exactly what I want to convey. This is wrong on a far more basic level than the belief that we could control, or survive, an unfriendly GAI. There are even respected experts (who believe their expertise is relevant) who share that particular delusion—Robin Hanson for example.