I think that these are genuinely hard questions to answer in a scientific way. My own speculation is that using AI to solve problems is a skill of its own, as is recognizing which problems it is currently not good for. Some use of LLMs teaches these skills, which is useful.
I think a potential failure mode for AI might be when people systematically choose to work on lower-impact problems that AI can be used to solve, rather than higher-impact problems that AI is less useful for but that can be solved in other ways. Of course, AI can also increase people's ambitions by unlocking the ability to pursue higher-impact goals they would not otherwise have been able to achieve. Whether AI increases or decreases human ambition on net seems like a key question.
In my world, I see limited use of AI except as a complement to traditional internet search, a coding assistant for competent programmers, a sort of Grammarly on steroids, an OK-at-best tutor that's cheap and always available on any topic, and a way to get meaningless paperwork done faster. These use cases all seem basically ambition-enhancing to me. That's the reason I asked John why he's worried about this version of AI. My experience is that once I gained some familiarity with the limitations of AI, it's been a straightforwardly useful tool, with none of the serious downsides I have experienced from social media and smartphones.
The issues I've seen have to do with using AI to deepfake political policy proposals, homework, blog posts, and job applications. These are genuine and serious problems, but they mainly add a tremendous amount of noise to collective discourse rather than enabling the kind of self-sabotage that smartphones and social media do. So I'm wondering whether John is more concerned about those social issues or about some sort of self-sabotage capacity from AI that I'm not seeing. Using AI to do your homework is obviously self-sabotage, but given the context I'm assuming that's not what John's talking about.