I think what might be a problem is that such criticisms haven’t been collected into a single place where they can draw attention and stir up drama, as Holden’s post has.
I put them in Discussion because I bring them up for the purpose of discussion, not to form an overall judgement of SIAI or to convince people to stop donating to SIAI. I’m rarely sure that my overall beliefs are right and SI people’s are wrong, especially on core issues that I know SI people have spent a lot of time thinking about, so mostly I try to bring up ideas, arguments, and possible scenarios that I suspect they may not have considered. (This is one major area where I differ from Holden: I have greater respect for SI people’s rationality, at least their epistemic rationality. And I don’t know why Holden is so confident about some of his own original ideas, like his solution to Pascal’s Mugging and his Tool-AI ideas. (Well, I guess I do; it’s probably just typical human overconfidence.))
Having said that, I reserve the right to collect all my criticisms together and make a post in Main in the future if I decide that serves my purposes, although I suspect that without the influence of GiveWell behind me it won’t stir up nearly as much drama as Holden’s post. :)
ETA: Also, I had expected that SI people monitored LW discussions, not just for critiques, but also for new ideas in general (like the decision theory results that cousin_it, Nesov, and others occasionally post). This episode makes me think I may have overestimated how much attention they pay. It would be good if Luke or Eliezer could comment on this.
Also, I had expected that SI people monitored LW discussions, not just for critiques, but also for new ideas in general
I read most such (apparently-relevant from post titles) discussions, and Anna reads a minority. I think Eliezer reads very few. I’m not very sure about Luke.
Also, I had expected that SI people monitored LW discussions, not just for critiques, but also for new ideas in general (like the decision theory results that cousin_it, Nesov, and others occasionally post).
I’m somewhat confident (from directly asking him a related question, and from many related observations over the last two years) that Eliezer mostly doesn’t, or is very good at pretending that he doesn’t. He’s also not good at reading, so even if he sees something, he’s only somewhat likely to understand it unless he already thinks it’s worth going out of his way to understand it. If you want to influence Eliezer, it’s best to address him specifically, state your arguments clearly, and explicitly disclaim that you’re not making any of the stupid arguments that your arguments could be pattern-matched to.
Also I know that Anna is often too busy to read LessWrong.
I read most such (apparently-relevant from post titles) discussions, and Anna reads a minority. I think Eliezer reads very few. I’m not very sure about Luke.
Do you forward relevant posts to other SI people?
Ones that seem novel and valuable, either by personal discussion or email.
Yes, I read most LW posts that seem to be relevant to my concerns, based on post titles. I also skim the comments on those posts.