So you’re positing a technique that takes advantage of inflationary theory to permanently get rid of an AI. Thermite—very practical. Launching the little AI box across the universe at near light-speed for a few billion years until inflation takes it beyond our horizon—not practical.
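As a rough sanity check on whether "beyond our horizon" is even a meaningful target: in a universe with a cosmological constant there is a finite cosmic event horizon, so the scheme is at least well-defined, just wildly impractical. Here is a minimal sketch, using illustrative flat-ΛCDM parameters (H0 = 70 km/s/Mpc, Ωm = 0.3, ΩΛ = 0.7 — my assumptions, not anything from the thread), that estimates the comoving distance to that horizon by numerical integration.

```python
import math

# Assumed illustrative cosmology (not from the thread): flat LambdaCDM.
H0 = 70.0               # Hubble constant, km/s/Mpc
OMEGA_M = 0.3           # matter density parameter
OMEGA_L = 0.7           # dark-energy density parameter
C = 299792.458          # speed of light, km/s
MPC_TO_GLY = 3.2616e-3  # 1 Mpc in billions of light-years

def event_horizon_gly(steps: int = 10000) -> float:
    """Comoving distance to the cosmic event horizon, in Gly.

    d_EH = c * integral_{a=1}^{inf} da / (a^2 * H(a)).
    Substituting u = 1/a turns this into
    (c/H0) * integral_0^1 du / sqrt(Omega_L + Omega_m * u^3),
    evaluated here with the trapezoid rule.
    """
    total = 0.0
    for i in range(steps):
        u0, u1 = i / steps, (i + 1) / steps
        g0 = 1.0 / math.sqrt(OMEGA_L + OMEGA_M * u0 ** 3)
        g1 = 1.0 / math.sqrt(OMEGA_L + OMEGA_M * u1 ** 3)
        total += 0.5 * (g0 + g1) / steps
    hubble_dist_mpc = C / H0
    return hubble_dist_mpc * total * MPC_TO_GLY

print(f"Event horizon ~ {event_horizon_gly():.1f} Gly")
```

With these parameters the integral gives roughly 16 Gly: anything carried to a comoving distance beyond that can never signal us again, which is exactly why the plan is physically coherent and utterly impractical compared to thermite.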
To bring this thread back onto the LW Highway...
It looks like you fell into a failure mode of defending what would otherwise have been a throwaway statement—probably to preserve the appearance of consistency, the desire not to be wrong in public, etc. (Do we have a list of these somewhere? I couldn’t find examples in the LW wiki.) A better response to “I hope that was a joke...” than “You are mistaken” would be “Yeah. It was hyperbole for effect.” or something along those lines.
A better initial comment from me would have been to phrase it as an actual question, because I thought you might have had a genuine misunderstanding about light cones and world lines. Instead, it came off as hostile, which wasn't what I intended.
I don’t think that wedrifid made those remarks to save face or the like, since wedrifid is the one who proposed both thermite and the light cone option. The light cone option was clearly humorous, and wedrifid then explained how it would work (for some value of work). If I am reading this correctly, there was no serious intent behind that proposal except to emphasize that wedrifid sees destruction as the only viable response.
Thank you, Joshua. I was going to let myself have too much fun with my reply, so it is good that you beat me to it. I’ll allow myself to add two responses, however.
It looks like you fell into a failure mode of defending what would otherwise have been a throwaway statement—probably to preserve the appearance of consistency, the desire not to be wrong in public, etc. (Do we have a list of these somewhere?
The relevant failure mode here is “other optimising”.
A better response to “I hope that was a joke...” than “You are mistaken” would be “Yeah. It was hyperbole for effect.” or something along those lines.
No, no, no. That would be wrong, inasmuch as it accepts a false claim about physics! Direct contradiction is exactly what I want to convey. This is wrong on a far more basic level than the belief that we could control, or survive, an unfriendly AGI. There are even respected experts (who believe their expertise is relevant) who share that particular delusion; Robin Hanson, for example.