It’s just that I don’t believe you folks really are this greedy on mankind’s behalf, or that you assume such linear utility functions. If we could provide food, shelter, and reasonable protection of human beings from other human beings for everyone a decade earlier, that, in my book, outweighs all the difference between immense riches and even more immense riches sometime later. (Edit: if the difference ever materializes; it may be that at any moment in time we are still ahead.)
On top of that, if you fear WBEs self-improving, don’t we lose the ability to become WBEs, and to become smarter, under the rule of a friendly AI? You have some perfect oracle in your model of the AI, and it concludes that this is okay; but I have no model of a perfect oracle in AI, and it is abundantly clear that an AI of any power can’t predict the outcome of allowing WBE self-improvement, especially under ethical constraints that forbid boxed emulation (and even if it could, there is the immense amount of computational resources the FAI would consume doing so). Once again, the typically selective avenue of thought: you don’t apply each argument to both FAI and AI, which you would need to do to make a valid comparison. I know you have already thought a lot about this issue (though I don’t think you thought straight; this is not formal mathematics, where inferences do not diverge from sense with the number of steps taken, but fuzzy verbal reasoning, where they unavoidably do). And here you jump right to the interpretation of what I think that is most favourable to you.