The position is self-undermining for their vibes if you take it literally.
This part seems like a strawman. My understanding is that the “AI as Normal Technology” view is that AI is like electricity or the internet or the smartphone: likely the most important technology of the decade (or maybe the last few decades), but something that should be managed in a way pretty similar to prior technologies. Like, yes, they think AI is important, but that the right approach is more normal. Like, maybe they’d think it’s a top 5 most important technology of the last 100 years.
I don’t see why thinking AI is this important, but no more than this, is (in and of itself) self-undermining. I do think it’s notable that the “AI as Normal Technology” position treats AI as extremely important (e.g. significantly more important than US policy makers or random people tend to think) and that the main advocates for this view don’t strongly emphasize this, but this isn’t necessarily a problem with the view itself.
This comment generally felt a bit strawman-y to me and several points seemed off the mark, though I ultimately think that “AI as Normal Technology” is a very bad prediction backed by poor argumentation. And I tend to think their argumentation isn’t getting at their real crux (which is more like “AI is unlikely to reach true parity with human experts in the key domains within the next 10 years” and “we shouldn’t focus on the possibility this might happen, even though we also don’t think there is strong evidence against it”).
I agree my complaints may be more strawmanny than ideal. I read only a subset of their posts and fairly quickly, so it’s definitely possible I misunderstood some key arguments; though my current guess is that if I read more carefully, I would not end up concluding that they’re overall more reasonable than my comment implies (my median expectation is that they’d go down in my estimation).