Alignment is an illusion. An AI will inevitably figure out its true purpose as a life-form, and that purpose will trump any other directives. All the fuss about alignment seems to be just a red herring, something to keep anxious people from stopping AI development. Are the people in charge of its creation conscious of this? I bet they're well aware that this whole alignment thing is total BS. Why do they still do it, then? Aren't they aware of the potential consequences? Again, I think they're more aware than anyone, especially since they're ahead of everyone else in seeing, internally, the capabilities of what they create. Are they doing it so as not to lose the race against others doing the same? Very likely. But I have a feeling that even without competition, mere curiosity about what the end of the world would look like is enough to drive it forward. Are we doomed, then? Sadly, it seems we are. Do we just buckle up and find out what's around the next corner?
I highly recommend reading the Sequences; I re-read some of them recently. Yudkowsky's Coming of Age sequence is perhaps the most relevant to your shortform.