it’ll be even harder if I know the other person is responding to an AI-rewritten version of my comment, referring to an AI-summarized version of my profile, running AI hypotheticals on how I would react
I think all of these are better than the likely alternatives though, which are that
I fail to understand someone’s comment or the reasoning/motivations behind their words, and most likely just move on (instead of asking them to clarify)
I have little idea what their background knowledge/beliefs are when replying to them
I fail to consider some people’s perspectives on some issue
It also seems like I change my mind (or at least become somewhat more sympathetic) more easily when arguing with an AI-representation of someone’s perspective, maybe due to less perceived incentive to prove that I was right all along.
This seems like one-shot reasoning though. If you extend it to more people, the end result is a world where everyone treats understanding people as a chore to be outsourced to AI. To me this is somewhere I don’t want to go; I think a large part of my values are chores that I don’t want to outsource. (And in fact this attitude of mine began quite a few steps before AI, somewhere around smartphones.)
Hmm, I find it hard to understand or appreciate this attitude. I can’t think of any chores that I intrinsically don’t want to outsource, only concerns that I may not be able to trust the results. What are some other examples of chores you do and don’t want to outsource? Do you have any pattern or explanation of where you draw the line? Do you think people who don’t mind outsourcing all their chores are wrong in some way?
There’s no “line” per se. The intuition goes something like this. If my value system is only about receiving stuff from the universe, then the logical endpoint is a kind of blob that just receives stuff and doesn’t even need a brain. But if my value system is about doing stuff myself, then the logical endpoint is Leonardo da Vinci. To me that’s obviously better. So there are quite a lot of skills—like doing math, playing musical instruments, navigating without a map, or understanding people as in your example—that I want to do myself even if there are machines that could do them for me cheaper and better.
If my value system is only about receiving stuff from the universe, then the logical endpoint is a kind of blob that just receives stuff and doesn’t even need a brain.
Unless one of the things you want to receive from the universe is to be like Leonardo da Vinci, or be able to do everything effortlessly and with extreme competence. Why “do chores” now if you can get to that endpoint either way, or perhaps even more likely get there if you don’t “do chores”, since that lets you save on opportunity costs and better deploy your comparative advantage? (I can understand if you enjoy the time spent doing these activities, but by calling them “chores” you seem to be implying that you don’t?)
Well, there’s no point in asking the AI to make me good at things if I’m the kind of person who will just keep asking the AI to do more things for me! That path just leads to the consumer blob again. The only alternative is if I like doing things myself, and in that case why not start now. After all, Leonardo himself wasn’t motivated by the wish to become a polymath, he just liked doing things and did them. Even when they’re a bit difficult (“chores”).
Anyway, that was the theoretical argument, but the practical argument is that it’s not what’s being offered now. We started talking about outsourcing the task of understanding people to AI, right? That doesn’t seem like a step toward Leonardo to me! It would make me stop using a pretty important part of my mind. Moreover, it’s being offered by corporations that would love to make me dependent, and that have a bit of a history of getting people addicted to stuff.
Well, there’s no point in asking the AI to make me good at things if I’m the kind of person who will just keep asking the AI to do more things for me!
But I’m only asking the AI to do things for me because they’re too effortful or costly. If the AI made me good at these things with no extra effort or cost (versus asking the AI to do it) then why wouldn’t I do them myself? For example I’m pretty sure I’d love the experience of playing like a concert pianist, and would ask for this ability, if doing so involved minimal effort and cost.
On the practical side, I agree that atrophy and being addicted/exploited are risks/costs worth keeping in mind, but I’ve generally made tradeoffs more in the direction of using shortcuts to minimize “doing chores” (e.g., buying a GPS for my car as soon as they came out, giving up learning an instrument very early) and haven’t regretted it so far.
(This thread is getting a bit long, and we might not be convincing each other very much, so I hope it’s ok if I only reply with points I consider interesting—not just push-pull.)
With the concert pianist thing I think there’s a bit of a type error going on. The important skill for a musician isn’t having fast fingers, it’s having something to say. Same as: “I’d like to be able to write like a professional writer”—does that mean anything? You either have things you want to write, in the way that you want to write them, or there’s no point being a writer at all, much less asking an AI to make you one. With music or painting it’s the same. There’s some amount of technique required, but you need to have something to say, otherwise there’s no point doing it.
So with that in mind, maybe music isn’t the best example in your case. Let’s take an area where you have something to say, like philosophy. Would you be willing to outsource that?
Let’s take an area where you have something to say, like philosophy. Would you be willing to outsource that?
Outsourcing philosophy is the main thing I’ve been trying to do, or trying to figure out how to safely do, for decades at this point. I’ve written about it in various places, including this post and my pinned tweet on X. Quoting from the latter:
Among my first reactions upon hearing “artificial superintelligence” were “I can finally get answers to my favorite philosophical problems” followed by “How do I make sure the ASI actually answers them correctly?”
Aside from wanting to outsource philosophy to ASI, I’d also love to have more humans who could answer these questions for me. I think about this a fair bit and have written some things down, but don’t have any magic bullets.
(I currently think the best bet for eventually getting what I want is to encourage an AI pause along with genetic enhancements for human intelligence, have the enhanced humans solve metaphilosophy and other aspects of AI safety, then outsource the rest of philosophy to ASI, or have the enhanced humans decide what to do at that point.)
BTW I thought this would be a good test of how competent current AIs are at understanding someone’s perspective, so I asked a bunch of them how Wei Dai would answer your question, and all of them got it wrong on the first try, except Claude Sonnet 4.5, which got it right on the first try but wrong on the second try. It seems like having my public content in their training data isn’t enough, and finding relevant info from the web and understanding nuance are still challenging for them. (GPT-5 essentially said I’d answer no because I wouldn’t trust current AIs enough, which is really missing the point despite having this whole thread as context.)
Yeah, I wouldn’t have predicted this response either. Maybe it’s a case of something we talked about long ago—that if a person’s “true values” are partly defined by how the person themselves would choose to extrapolate them, then different people can end up on very divergent trajectories. Like, it seems I’m slightly more attached to some aspects of human experience that you don’t care much about, and that affects the endpoint a lot.