I thoroughly enjoyed it and think it was really well done. I can’t perfectly judge how accessible it would be to those unfamiliar with x-risk mitigation and AI, but I think it was pretty good in that respect and did a good job of justifying the value alignment problem without seeming threatening.
I like how he made sure to position the people working on the value alignment problem as separate from those actually developing the potentially-awesome-but-potentially-world-ending AI, so that the audience has no reason not to support what he’s doing. I just hope the implicit framing of superintelligent AI as an inevitability, rather than a possibility, isn’t so large an inferential leap that it takes people out of reality-mode and into fantasy-mode.