Wireless heading, value drift, and so on

The typical image of a wirehead is a guy whose brain is connected by a wire to a computer, living in a continuous state of pleasure, sort of like being drugged up for life.

What I mean by "wireless heading" (not an elegant term, but anyway) is the idea of little to no value drift. Clippy is usually brought up as the most dangerous kind of AI, one we should avoid creating at all costs. Yet what's the point of creating copies of us and tiling the universe with them? How is that different from what Clippy does?

By "us" I mean beings who share our intuitions, or who can agree with us on things like morality, joy, not being bored, etc.

Shouldn't we focus on engineered/controlled value drift rather than on preventing it entirely? Is that even possible to program into an AI? Somehow I don't think so. It seems to me that the whole premise of a single benevolent AI depends to a large extent on the similarity of basic human drives: supposedly we're so close to each other that preventing value drift is not a big deal.

But once we get really close to the singularity, all sorts of technologies will cause humanity to "fracture" into so many different groups that some of them will inevitably have what we might call "alien minds": minds so different from baseline humans as they are now that there wouldn't be much hope of convincing them to "rejoin the fold" and not create an AI of their own. For all we know, they might even have an easier time creating an AI that's friendly to them than baseline humans would. Treating this as a black swan event, or at least one whose timing is impossible to predict, what should we do?

Discuss.