I think you’re right in general.
However, in specific unusual situations, silence might not work… like if you’re talking to potential investors (or philanthropists) and they ask, “How come you think you’re good enough to do this [thing that you want us to partially fund]?”
If I understand correctly, Eliezer decided at a young age to work on a public good whose value would be difficult (or evil) to reserve only to those who helped pay to bring it about, and which was unintelligible to voters, congress critters, the vast majority of philanthropists, and even to most of the high prestige experts in the relevant technical fields.
Having tracked much of his online oeuvre for approaching two decades, I’d say that arguably his biggest life accomplishment has been the construction of an entire subcultural ecosystem wherein the thing he aspired to spend his life on (i.e. building Friendly AGI) is basically validated as worth donating to.
There is still the question of whether the existence of such a culture is necessary or sufficient to actually be safe from “unaligned AGI” or “grey goo” or various other scary things (because at some point the rubber will meet the road) but the existence of the culture is probably a positive factor on net.
The existence of this culture has caused a lot of cultural echoes within the broader English-speaking world, and the plurality of this global outcome, traced back through causal dominoes that have been falling since 1999 or so, can probably be laid at Eliezer’s feet, though he may not want to claim it all. Gleick should write a book about him, because he is pretty clearly the Drexler of transhuman AI. (Admittedly, Vernor Vinge, Nick Bostrom, Anna Salamon, Seth Baum, and Peter Thiel would probably deserve chapters in the book.)
Thus Eliezer’s entire public life has sorta been one giant pitch to the small minority of philanthropists who will “get it” and the halo of people who are close to his ideal target audience in edit distance or in the social graph. I think a key reason it happened this way is that, economically speaking, for people working on “public credence goods that non-geniuses disbelieve” it kinda has to be funded this way. For such people, validation is not only digestible, validation is pretty much the only thing they can hope to eat.
In particular, I think the key distinction is between “I demand you justify yourself to me” and “I would appreciate if you could help satisfy my curiosity”. Even if the person is a potential investor it’s best to decline to jump through hoops and wait for them to shift to genuine curiosity.
If someone asks “how come you think you’re good enough to do this?”, I generally interpret this as “You seem to be implying that I should see you as high status. If you are going to demand I see you as high status then I counter-demand that you back it up. If you don’t back up your active bids for status, I will conclude that you’re a faker and declare you to be low status”. The correct response to this is to not try to control your status in their mind in the first place. In response to this question, I’d probably go with something like “I might not be. I don’t know.” and emphasize that this is a very real possibility to me. Showing agreement and real weight to the idea that you might not deserve the status claim they see implied is something that you can’t do if you’re trying to make a grab for the status, so it tends to defuse those concerns and make it harder to continue to frame you as trying to status grab. At the same time, it is not engaging in false modesty, and you don’t lose anything, because you aren’t pretending that you know you’re incapable of it.
There’s a whole lot of this that goes on below conscious awareness, just in how people carry themselves, and just working on your underlying frames can do a lot to prevent this kind of thing from ever becoming an issue. Still, there will always be cases where someone is status-insecure enough to keep trying to insist on framing you as a status grabber even after you say “I might not be”. Eventually it gets pretty hard for them to keep it up, though, if every time you respond with credible evidence that this isn’t what you’re doing. Sooner or later they pretty much have to give up on that framing and accept that you’re willing to let them see you however they see you and afford you whatever status they feel you deserve.
If they’re bothering to try to reject your status bids and they showed up in the first place, their engagement is usually high enough to fuel some genuine curiosity about how you can be status-secure while holding open some very strange possibilities, and when you see that shift you actually have an unprejudiced ear to hear your answer to “what makes you think you might be able to do this?”. It likely won’t make any sense to them anyway if you think very differently from them, but it’ll at least create the opening for them to start noticing things and weighing evidence, and they can’t really rule it out the way they otherwise would have.
How would you differentiate this from someone just asking for additional evidence because they think you’ve made a false statement? E.g. if Alice tells Bob the earth is flat, it’s reasonable for him to ask for additional evidence, and doing so doesn’t imply he’s playing status games. But Alice could equally reasonably reply that Bob is only disagreeing because he thinks she isn’t high status enough to make cosmological claims.
Good question.
I generally wouldn’t ask questions like “is his disagreement explained by status alone or by facts alone?”. I generally ask questions more like “if he saw the person saying these things as higher or lower ‘status’, how much would this change his perception of the facts?” (and others, but this is the part of the picture I think is most important to illuminate here). If a Fields Medalist looks at your proof and says “you’re wrong”, you’re going to respond differently than if a random homeless guy said it, because when a Fields Medalist says it you’re more likely to believe that your proof is flawed (and rightly so!). Presumably there’s no one you hold in high enough regard that if they were to say “the earth is flat” you’d conclude “it’s more likely that I’m wrong about the earth being round and all of the things that tie into that than it is that this person is wrong, so as weird as it is, the earth is probably flat”, but even there, status concerns change how you respond.
Coincidentally, just as I started drafting my response to this I got interrupted to go out to dinner and on the way was told about Newman’s energy machine and how it produced more energy than it required, how Big Oil was involved in shutting it down, and the like. This certainly counts as “something I think is false” in the same way Bob thinks “the earth is flat” is false, but how, specifically, does that justify asking for evidence? The case against perpetual motion machines is very solid and this is not what a potentially successful challenge would look like (to put it lightly), so it’s not like I need to ask for evidence to make sure I shouldn’t be working on perpetual motion machines or something. Since I can’t pretend I’d be doing it for my personal learning, what could motivate me to ask?
I could ask for evidence because of a sense of [“duty”](http://xkcd.com/386/), but it was clear to me that he wasn’t just gonna say “Huh, I guess my evidence is actually incredibly weak. Thanks!”, so it’s not like he was actually going to stop being wrong in the time/effort allotted. I could ask for evidence to make it clear to the “audience” that he has no good evidence, but there was no one there that was at risk of believing in perpetual motion machines.
Why should I ask him for evidence, if not for reasons having to do with wanting him to afford more respect to the things I think, less to what he thinks, and to punish him by making him feel stupid if he tries to resist?