There is a futility to maximizing paperclips in reality that defeats the purpose of the original thought experiment, and this is reason enough to dismiss its conclusions.
I basically liked this post until the very last paragraph. I think it’s an interesting point that “if you were to actually maximize paperclips, you’d have a variety of interesting ontological and physical problems to solve.” (This seems similar to the Diamond Maximizer problem.)
But your last paragraph sounds like a) you misunderstood the point of the paperclip thought experiment (i.e. it’s not about telling an AI to maximize paperclips; it’s about telling an AI to maximize some other thing, and it accidentally making a bunch of molecular paperclip-like objects because that happened to maximize the function it was given. See https://www.lesswrong.com/tag/squiggle-maximizer-formerly-paperclip-maximizer )
And b) even assuming we’re talking about an AI maximizing paperclips because we told it to, I don’t really understand the point you’re making here.
And you probably never will.