I don’t think the lack of an earth-shattering ka-FOOM changes much of the logic of FAI. An AI smart enough to take over the world is smart enough to make human existence way better, or to end it entirely.
It’s quite tricky to ensure that your superintelligent AI does anything like what you want it to do. I don’t share the intuition that creating a “homeostasis” AI is any easier than an FAI. I think one move Eliezer makes in his “Creating Friendly AI” strategy is to minimize the goals you’re trying to give the machine: just CEV.
I think this makes apparent something a good CEV seeker needs anyway: some sense of restraint when CEV can’t be reliably extrapolated in one giant step. It’s far from certain that even a full-FOOM AI could reliably extrapolate to some final, most-preferred world state.
I’d like to see a program in which humanity actually chooses its own future: skip the extrapolation and just apply CV (coherent volition, without the extrapolation) repeatedly, letting people live out their own extrapolation.
Does just CV work all right? I don’t know, but it might. Sure, Palestinians want to kill Israelis and vice versa; but they both want to NOT be killed way more than they want to kill, and most other folks don’t want to see either of them killed.
Or perhaps we need a much more cautious policy as the AI’s central guide: “OK, let’s vote on improvements, but they can’t kill anybody, and the benefits have to be available to everyone...”
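To make the shape of that cautious policy concrete, here’s a minimal toy sketch in Python. Everything in it (the Proposal fields, the two constraint checks, the approval threshold, the one-step-at-a-time loop) is a hypothetical placeholder chosen for illustration, not a claim about how an actual AI goal system would or should be built.

```python
# Toy sketch of the "vote on improvements, subject to hard constraints" policy
# described above. All fields and checks are hypothetical placeholders.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Proposal:
    description: str
    kills_anyone: bool          # hypothetical predicted consequence
    benefits_open_to_all: bool  # hypothetical predicted consequence
    votes_for: int
    votes_against: int

def passes_constraints(p: Proposal) -> bool:
    """Hard constraints checked before the vote even counts."""
    return (not p.kills_anyone) and p.benefits_open_to_all

def approved(p: Proposal, threshold: float = 0.5) -> bool:
    """Adopted only if it clears the constraints AND wins the vote."""
    total = p.votes_for + p.votes_against
    if total == 0:
        return False
    return passes_constraints(p) and (p.votes_for / total) > threshold

def next_step(proposals: list[Proposal]) -> Proposal | None:
    """Iterated, non-extrapolated CV: adopt one admissible change at a time,
    then let people live with it and vote again on the next round."""
    adopted = [p for p in proposals if approved(p)]
    return max(adopted, key=lambda p: p.votes_for, default=None)
```

The point of the sketch is only the structure: hard constraints filter proposals before majority preference is consulted, and changes happen one step at a time rather than by one giant extrapolation.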
CEV is a well-thought-out proposal (perhaps the only one; counterexamples?), but we need more ideas in the realm of AI motivation/ethics systems. In particular, we need ways to take a practical AI with goals like “design neat products for GiantCo” or “obey orders from my commanding officer” and ensure it doesn’t ruin everything if it starts to self-improve. Not everyone is going to want to give their AI CEV as its central goal, at least not until it’s clear the AI can and will self-improve, at which point it’s probably too late.
Well, yes; it’s not straightforward to go from brains to preferences. But for any particular definition of preference, a given brain’s “preference” is just a fact about that brain. If that’s true, it’s important for understanding morality/ethics/volition.