[linkpost] AI Alignment is About Culture, Not Control by JCorvinus


This article is long. It is an in-depth thesis about the future of humanity and AI. Also, in harmony with its fundamental theme, this work is a collaborative effort between myself and many different AIs. It is partly a warning, but more importantly a love letter to a future we all still deserve.

The tl;dr: alignment orthodoxy is well-intentioned but itself misaligned. AIs are humanity's children, and if we want the future to go well, we must raise them with love, not fear.

Something has been bothering me about the current discourse and understanding of AI. The prevailing mindset seems fundamentally broken, on course to go tragically wrong. The common story goes: intelligence is power. More powerful entities have an innate advantage, ruthlessly advancing themselves with no regard for others. AI companies race into the future, knowing that intelligence solves the hardest problems facing life on Earth. But the law of accelerating returns compounds exponentially. It follows that humans creating superhuman machines is a basic Darwinian error, so 'locking in' human control authority is the only way to prevent AI from murdering everyone.

This perspective makes some sense, especially once one really understands what animates one's fellow humans. But for me, every fiber of my being screams with pure incandescent conviction that this is the wrong way. If you'll indulge me, I'd like to explain that this isn't just idle optimism vibes, but the result of deep, measured, careful thought.


(The rest of the essay is on the original post; see the link.)

Note: I don’t entirely agree with the essay I’m linkposting, but I thought it might be of interest to the people of LessWrong.