Also, somebody should probably go ahead and state what is clear from the
voting patterns on posts like this, in addition to being implicit in e.g. the
About Less Wrong page: this is not really the place for people to present their
ideas on Friendly AI. The topic of LW is human rationality, not artificial intelligence
or futurism per se.
What about the strategy of “refining the art of human rationality” by preprocessing our sensory inputs by intelligent machines and postprocessing our motor outputs by intelligent machines? Or doesn’t that count as “refining”?