To the extent that you’re saying “I’d like to have more conversations about why creating powerful agentic systems might not go well by default; for others this seems like a given, and I just don’t see it”, I applaud you and hope you get to talk about this a whole bunch with smart people in a mutually respectful environment. However, I do not believe analogizing the positions of those who disagree with you to those of 19th-century Luddites (particularly when thousands of pages of publicly available writings, with which you are familiar, exist) is the best way to invite those conversations.
Quoting the first page of a book as though it contained a detailed roadmap of the logical flow of the central (60,000-word) argument (which you apparently treat as equivalent to a rigorous historical account of how the authors came to believe what they believe), when the page claims to do nothing of the sort, simply does not parse. If you read the book (which I recommend, given your declared interests here), or modeled the pre-existing knowledge of the median reader of the book’s website, you would not think “anything remotely like current techniques” meant “we are worried exclusively about deep learning, for deep-learning-exclusive reasons; trust us because we know so much about deep learning.”
If you can find evidence of Eliezer, Nate, or someone similar saying “The core reason I am concerned about AI safety is [something very specific about deep learning]; otherwise I would not be concerned”, I will take your claims about MIRI’s past messaging very seriously. As it stands, I have been shown no evidence that supports this claim.
Based on what you’ve said so far, you seem to think that all of the cruxes (or at least the most important ones) must be either purely intuitive or purely technical. If they’re purely intuitive, you can dismiss them as the kind of reactionary thinking someone from the 19th century might have come up with. If they’re purely technical, you’re well-positioned to propose clever technical solutions (or else to discredit your interlocutor on the basis of their credentials).
Reality’s simply messier than that. You likely have both intuitive and technical cruxes, as well as cruxes with irreducible intuitive and technical components (that is, what you see when you survey the technical evidence is shaped by your priors and your motivations, as is true for anyone, and as was true for you when interpreting that book excerpt).
I think you’re surrounded by smart people who would be excited to pour time into talking with you about this, provided you don’t open that discussion with a straw man of their position.
I do not believe analogizing the positions of those who disagree with you to those of 19th-century Luddites (particularly when thousands of pages of publicly available writings, with which you are familiar, exist) is the best way to invite those conversations.
To clarify, I am not analogizing the positions of those who disagree with me to those of 19th-century Luddites. That was not my intention, nor was it my argument.
I think we’re talking past each other here, so I will respectfully drop this discussion.