If we are able to wirehead in an effective manner, it might be morally obligatory to force people into wireheading to maximize utility.
Not interested in this kind of “moral obligation”. If you want to be a hedonistic utilitarian, use your own capacity and consent-based cooperation for it.
I’m confused about OpenAI’s agenda.
Ostensibly, their funding is aimed at reducing the risk of AI dystopia. Correct? But how does this research prevent AI dystopia? It seems more likely to speed up its arrival, as would any general AI research not specifically aimed at safety.
If we have an optimization goal like “Let’s not get kept alive against our will and tortured in the most horrible way for millions of years on end”, then it seems to me that this funding is actually harmful rather than helpful, because it increases the probability that AI dystopia arrives while we are still alive.