Looks like the tide is shifting from the strong “engineering” stance (We will design it friendly.) through the “philosophical” approach (There are good reasons to be friendly.)… towards the inevitable resignation (Please, be friendly.)
These “friendly AI” debates are not dissimilar to medieval monks violently arguing about the number of angels on the head of a pin (or about their “friendliness”; there are fallen “singletons” too). They also moved from a strong start (Our GOD rules.) through the philosophical stage (There are good reasons for God.) to today’s resignation (Please, do not forget our god, or… we’ll have no jobs.)
after the first few lines I wanted to comment that seeing almost religious fervor combined with self-styled CRITICAL anything reminds me of all sorts of “critical theorists”, also quite “religiously” inflamed… but I waited till the end and got a nice confirmation from that “AI rights” line… looking forward to seeing happy paperclip maximizers pursuing their happiness, which is their holy right (and the subsequent #medeletedtoo)
otherwise, no objections to Popper and induction, nor to the suggestion that AGIs will most probably think like we do (and yes, “friendly” AI is not really a rigorous scientific term, but rather a journalistic or even “propagandistic” one)
also, it’s quite likely that, at least in the short-term horizon, humANIMALs are a more serious threat than AIs (a deadly combination of “natural stupidity” and DeepAnimal brain parts, wielding all the powers given to them by the Memetic Supercivilization of Intelligence, which currently lives on a humanimal substrate, though in <1% of it)
but this “impossibility of uploading” is a tricky thing: who knows what can or cannot be “transferred”, and to what extent the new entity will resemble the original one, not to mention its subsequent diverging evolution (in any case, this may spell the end of CR if its disciples forbid uploading for themselves… while others happily upload to this megacheap and gigaperformant universal substrate)
and btw, it’s nice to postulate that “AI cannot recursively improve itself” while many research and applied narrow AIs are doing exactly that right at this moment (though probably not “consciously”)
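to make the point concrete, here is a minimal toy sketch of what “unconscious self-improvement” can look like in a narrow system: a program that repeatedly tweaks its own parameters and keeps any change that measurably improves performance. The names `performance` and `self_improve` and the toy objective are mine, purely illustrative, not from any particular AI system:

```python
import random

def performance(params):
    # Toy objective standing in for "measured task performance".
    # Peak performance is at params == 3.0.
    return -(params - 3.0) ** 2

def self_improve(params, steps=1000, step_size=0.1, seed=0):
    """Simple hill climbing: the system proposes a random tweak to its
    own parameter and keeps it only if measured performance improves.
    No understanding or "consciousness" involved, just feedback."""
    rng = random.Random(seed)
    best = performance(params)
    for _ in range(steps):
        candidate = params + rng.uniform(-step_size, step_size)
        score = performance(candidate)
        if score > best:
            params, best = candidate, score
    return params

print(self_improve(0.0))  # drifts toward the optimum near 3.0
```

obviously a caricature of real self-tuning systems (AutoML, neural architecture search, etc.), but it shows the loop structure: measure, mutate, keep what works.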
sorry for my heavily non-rigorous, irrational, and nonscientific answers; see you in the uploaded, self-improving Brave New World