In this section (including the footnote) I suggested that
there’s a category of engineered artifacts that includes planes and bridges and GOFAI and the Linux kernel;
there’s another category of engineered artifacts that includes plant cultivars, most pharmaceutical drugs, and trained ML models,
with the difference being whether questions of the form “why is the artifact exhibiting thus-and-such behavior” are straightforwardly answerable (for the first category) or not (for the second category).
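To make the distinction concrete, here is a toy sketch (my own illustration, with made-up rules; nothing from the post). For the GOFAI-style artifact, “why did it flag this?” is answered by pointing at the rule that fired; for the trained artifact, the honest answer bottoms out in “because training left the weights like this.”

```python
# A toy contrast between the two categories (illustrative only).

# First category (GOFAI-style): behavior traces back to explicit, inspectable rules.
RULES = {
    "contains 'free money'": lambda text: "free money" in text.lower(),
    "too many exclamation marks": lambda text: text.count("!") > 3,
}

def gofai_classify(text):
    fired = [name for name, rule in RULES.items() if rule(text)]
    # The "why" comes for free: the list of rules that fired IS the explanation.
    return ("spam" if fired else "ham"), fired

# Second category (trained): even this tiny perceptron answers "why" only with
# "because the learned weights are what they are"; real models have billions of them.
def train_perceptron(examples, epochs=10):
    weights = {}
    for _ in range(epochs):
        for words, label in examples:           # label: +1 spam, -1 ham
            score = sum(weights.get(w, 0.0) for w in words)
            prediction = 1 if score > 0 else -1
            if prediction != label:             # mistake-driven weight updates
                for w in words:
                    weights[w] = weights.get(w, 0.0) + label
    return weights                              # the artifact: numbers, not reasons

examples = [({"free", "money"}, +1), ({"meeting", "notes"}, -1)]
print(gofai_classify("FREE MONEY!!!!"))   # ('spam', [...both rule names...])
print(train_perceptron(examples))         # e.g. {'free': 1.0, 'money': 1.0}
```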
If you were to go around telling the public “we have no idea how trained ML models work, nor how plant cultivars work, nor how most pharmaceutical drugs work” … umm, I understand there’s an important technical idea that you’re trying to communicate here, but I’m just not sure about that wording. It seems at least a little bit misleading, right? I understand that there’s not much space for nuance in public communication, etc. etc. But still. I dunno. ¯\_(ツ)_/¯
There have, in fact, been numerous objections to genetically engineered plants, and by implication to everything in the second category. You might not realize how wary the public is (and was) of engineered biology, precisely on the grounds that nobody understood its exact internal workings. The reply that more or less convinced people (though it clearly didn’t calm every fear about new biotech) wasn’t that we understood it in any deep sense. It was that humanity had been genetically engineering plants via cultivation for literal millennia, so empirical facts allowed us to rule out many potential dangers.
Oh sorry, my intention was to refer to non-GMO plant cultivars. There are real issues with non-GMO plant cultivars, such as becoming less nutritious or occasionally turning out toxic, but to my knowledge the general public has never gotten riled up about any aspect of non-GMO plant breeding, for better or worse. Like you said, we’ve been doing that for millennia. (This comment is not secretly arguing some point about AI, just chatting.)
Hm. Solid point regarding what the counter-narrative to this would look like, I guess. Something to prepare for in advance, and to shape the initial message to make harder to mount.
Basic point: not all things in the second category are created equal. AIs turning out to belong to the second category is just one fact about them: a fact that makes them less safe, but not one that makes them exactly as safe as everything else in that category. This information should prompt a directional update, not anchoring onto that category’s baseline safety level.
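To gesture at the shape of that update with toy numbers (entirely made up, purely illustrative):

```python
# Hypothetical numbers, only to illustrate directional updating vs. anchoring.
p_danger_assuming_category_one = 0.05  # made-up prior under "it's like a bridge"
opacity_multiplier = 3.0               # made-up factor for "we can't answer 'why' questions"

# Directional update: the category-two fact pushes YOUR estimate up from YOUR prior.
p_danger_updated = min(1.0, p_danger_assuming_category_one * opacity_multiplier)  # 0.15

# Anchoring (the mistake): importing the safety record of other category-two
# artifacts, as if cultivars' track record transferred to AI.
p_danger_anchored = 0.01               # made-up "cultivar baseline"; not valid for AI
```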
As I’d mentioned in the post, I think a lot of baseline safety-feelings towards AIs are based on the background assumption that they do belong to the first category. So removing this assumption wouldn’t be just some irrelevant datum to a lot of people; it would significantly update their judgement on the whole matter.
Computer viruses belong to the first category, while biological weapons and gain-of-function research belong to the second.
For now.
See Urschleim in Silicon: Return-Oriented Program Evolution with ROPER (2018).
Incidentally, this is among my favorite theses, with a beautiful elucidation of ‘weird machines’ in chapter two. Recommended reading if you’re at all interested in computers or computation.
I think the distinction is that even for plant cultivars and pharmaceuticals, we can straightforwardly circumscribe the potential danger: a pharmaceutical will not endanger people unless they take it, and a new plant cultivar will not resist our attempts to control it beyond the usual ways plants behave. That’s not necessarily the case with an AI that’s smarter than us.