I really enjoyed reading the full essay. I think there are many interesting points in it, and I’m grateful the author took the time to write and share it. I agree that a lot of decisions about how to deploy and develop AI are being made by people whose skillsets or perspectives are narrower than is ideal.
However, I understand the main thrust of the piece to be about how people can, while making locally sensible decisions, end up in disaster, with the presumption that that’s where things are headed. While I thought that main point was very well elaborated, I’d like to talk about the presumption. First off, I can’t help but get the impression that the author’s community is engaged in some of what he’s accusing others of: he discusses how others are “possessed” by capabilities (and I don’t entirely disagree; how could one not be captivated by all that AI is capable of today?), but could it be that his group is “possessed” by doom? It reminds me of von Neumann’s remark about Oppenheimer: “Sometimes someone confesses a sin in order to take credit for it.”
Getting more specific, I wish I could hear more of his opinion about the safety meeting he mentioned. I found it interesting that he brings it up as a seemingly negative experience, and I’d love to know why he sees it that way. Maybe I’m missing something, but as far as I know nothing bad has happened on net, which makes me think the product people in that meeting were right.