Yes, I do think that. They don't actively diminish thought; after all, they're tools you decide to use. But when you use one to handle a problem, you lose the thinking, and the growth, you could have had solving it yourself. It could be argued, however, that if you are already experienced at solving such problems, there isn't much to lose, and you gain time to pursue other issues.
As for why I think this way: people already skip learning skills because ChatGPT can do them instead. As lesswronguser123 said, "A lot of my friends will most likely never learn coding properly and rely solely on ChatGPT", and it isn't just his friends who use it this way. Such people, at the very least, lose the opportunity to develop a programming mindset, which is useful beyond programming.
Beyond people not learning skills, I also believe there is a lot of potential to delegate almost all of your thinking to ChatGPT. For example, I could have used it to write this response, decide what to eat for breakfast, tell me what to do with my future, and so on. It can tell you what to do for almost every day-to-day decision. Some use it to a lesser extent, some to a greater, but you do think less if you use it this way.
Does it redistribute thinking to another topic? I believe that depends on the person: some use it to free up time for a more complex problem, others to free up time for entertainment.
I think these are genuinely hard questions to answer in a scientific way. My own speculation is that using AI to solve problems is a skill of its own, as is recognizing which problems it is currently not good for. Some use of LLMs teaches these skills, which is useful.
I think a potential failure mode is people systematically choosing to work on lower-impact problems that AI can solve, rather than higher-impact problems that AI is less useful for but that can be solved in other ways. Of course, AI can also increase people's ambitions by unlocking higher-impact goals they could not otherwise have pursued. Whether AI increases or decreases human ambition on net seems like a key question.
In my world, I see limited use of AI except as a complement to traditional internet search, a coding assistant for competent programmers, a sort of Grammarly on steroids, an OK-at-best tutor that's cheap and always available on any topic, and a way to get meaningless paperwork done faster. These use cases all seem basically ambition-enhancing to me. That's the reason I asked John why he's worried about this version of AI. My experience is that once I gained some familiarity with AI's limitations, it has been a straightforwardly useful tool, with none of the serious downsides I have experienced from social media and smartphones.
The issues I've seen mainly involve using AI to deepfake political policy proposals, homework, blog posts, and job applications. These are genuine and serious problems, but they mostly amount to adding a tremendous amount of noise to collective discourse, rather than the self-sabotage enabled by smartphones and social media. So I'm wondering whether John is more concerned about those social issues or about some capacity for self-sabotage from AI that I'm not seeing. Using AI to do your homework is obviously self-sabotage, but given the context I'm assuming that's not what John is talking about.