I think a lot of people can’t think at the right level of abstraction for understanding Yudkowsky. Some things are overdetermined by the high-level structure of reality. Start with physical reality as we best understand it, then derive what is possible eventually, then what is possible soon, then the incentives, then the categories of what will happen. This completely top-down way of drawing conclusions is perhaps tricky to get right, but it gives broad predictions far into the future. So far, no facts about AI development have contradicted Yudkowsky and Bostrom’s arguments built on this basis. At no point do these arguments rely on anything one has personally seen; they rely on scientific principles taken on trust, and on careful argument hashed out through debate. I believe these arguments and put a high level of faith in conclusions drawn this way, but some people just don’t get it. I don’t know why Will MacAskill appears to be among them.
It is clear on this basis that the local incentives of global society point in the direction of increasing technological development and automation. In the limit of that direction is a machine economy which comes at the cost of human existence.
To avoid that outcome, there needs to be some kind of complete, enduring global coordination. It needs to prevent anyone from ever creating an artificial agent powerful enough to successfully replicate even after we try to stop it.
Understanding an argument and agreeing with it are different things. So you might be right that there is some legible reason behind the majority of misunderstandings, but it doesn’t follow that understanding the argument (overcoming that reason for misunderstanding) implies agreement. Some reasons for disagreement have nothing to do with misunderstanding the intended meaning.