I disagree. I see it as a bad thing, but a minor one rather than a major one.
From a first-order consequentialist perspective, I strongly suspect that Wave treats people quite well and that this policy is silencing little, if anything.
Looking at the nth-order effects of this policy, or from a more “virtues as heuristics” perspective, I think it probably has some small negative consequences, like marginally normalizing an unfair and unhealthy norm, and normalizing the idea of doing sketchy things in the name of the greater good. But I’m pretty confident that overall, the negative consequences here aren’t large.
Furthermore, I think that Working With Monsters is important. Well, there’s some threshold, and I’m not sure where it is. For what it’s worth, I’m extremely confident that Nonlinear has crossed that threshold by a large margin. But in general I feel the threshold should be on the high side: it’s just too hard to coordinate and get anything done if you get hung up on these sorts of things, especially if you have shorter timelines. That said, I strongly suspect that Wave is way below the threshold and that it’d make sense to continue being strong “allies” with them.
What are you basing this on?[1]
I’m a former employee of Wave, so I want to make it clear that this question is not driven by private information. I would have asked it in response to that sentence no matter what the proper noun was. I have been going on about “it’s impossible to make a utilitarian argument for lying[2] because truth is necessary to calculate utils” for months.
Except when you are actively at war with someone and are considering other usually-banned actions like murder and property destruction.
Hm. Something along these lines I think:
A prior that most organizations don’t have moderately-sized (or larger) issues that really need silencing. This is vaguely informed by my own experiences working for various companies, chatting with friends and acquaintances about their experiences, etc.
A prior that rationalists and rationalist-adjacent people are a good deal above average in terms of how well they treat people.
I’ve read a bunch of benkuhn’s writing and Dan Luu’s writing. From this writing, I’m very confident that both of them are really awesome people. And they’re associated with Wave. And I remember Ben writing Wave-specific things that made me feel good about Wave. I see all of this as, I dunno, weak-to-moderate evidence of Wave being “good”.
I see now that lincolnquirk is a cofounder of Wave. I don’t remember anything specific about him, but the name rings a bell of “I have above-average opinions of you compared to other rationalists”. And I have pretty good opinions about the average rationalist.
How does this differ from what you’d expect to see if an organization had substantial downsides, but suppressed negative information?
I think what I’d expect to see in terms of stories of people being mistreated would be roughly the same. Because if they are mistreating people, evidence of that would likely be suppressed.
So I think where I’m coming from is moreso that various things, IMO, point towards the prior probability[1] of mistreatment being low in the first place.
(This is a fun opportunity to work on some Bayesian reasoning. Not to be insensitive about the context that it’s in. Please let me know if you/anyone has comments or advice. Maybe I’m missing something here.)
In the sense of: before opening your eyes and looking at what stories about Wave are out there, what would I expect the probability of there being bad things to be? Or something like that.
As an analogy, suppose you told me that Alice is a manager at Widget Corp. Then you tell me that she is a rationalist. Then you show me her blog, I read it, and I get good vibes. At this point, we can ask what I think the probability is of her mistreating employees. Given what I know, I’d say it’s very low. From there, you can say, “Ok, now go out and google stuff about Alice and Widget Corp. How do the results of googling shift your beliefs?” I think they probably wouldn’t shift my beliefs much, since regardless of whether she does bad stuff, if the information is being suppressed, I’m unlikely to observe it. But I can still think that the probability of bad stuff is low, despite the suppression.
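The Alice analogy can be made concrete with a toy Bayesian update. The point is that when negative information would be suppressed either way, observing “no negative stories” has a likelihood ratio close to 1, so the posterior stays near the prior. All numbers here are made-up illustrations, not estimates about any real person or organization:

```python
# Toy Bayesian update: why "I found no negative stories" barely moves the
# posterior when suppression is effective. All numbers are illustrative.

prior_bad = 0.05  # assumed prior that Alice mistreats employees, given good vibes

# Probability of finding no negative stories online, under each hypothesis.
# If suppression works well, "no stories" is likely even when things are bad.
p_no_stories_given_bad = 0.90
p_no_stories_given_good = 0.99

# Bayes' rule: P(bad | no stories)
numerator = p_no_stories_given_bad * prior_bad
denominator = numerator + p_no_stories_given_good * (1 - prior_bad)
posterior_bad = numerator / denominator

print(round(posterior_bad, 4))  # -> 0.0457, barely above the 0.05 prior
```

Because the likelihood ratio (0.90 / 0.99 ≈ 0.91) is close to 1, the search result is nearly uninformative, and the conclusion rests almost entirely on the prior, which is exactly the claim being made above.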
I apologize for derailing the N(D|D)A discussion, but it’s kind of crazy to me that you think that Nonlinear (based on the content of this post?) has crossed a line, by a large margin, such that you wouldn’t work with them. Why not? That post you linked is about working with murderers, not working with business owners who seemingly took advantage of their employees for a few months, or who made a trigger-happy legal threat!
Compared to (for example) any random YC company with no reputation to speak of, I didn’t see anything in this post that made it look like working with them would either be more likely to be regrettable for you, or more likely to be harmful to others, so what’s the problem?
That is a very fair question to ask. However, it’s not something that I’m interested in diving into. Sorry.
I will say that Scientific Evidence, Legal Evidence, Rational Evidence comes to mind. A lot of the evidence we have probably wouldn’t be admissible as legal evidence, and perhaps some not even as scientific evidence. But IMO, there is in fact a very large amount of Bayesian evidence that Nonlinear has crossed the line (hard to articulate where exactly the line is) by a very large margin.
Faster Than Science also comes to mind.
The Sin of Underconfidence also comes to mind.
As does the idea of being anchored to common sense, and resistant to reason as memetic immune disorder. Like if you described this story to a bunch of friends at a bar, I think the obvious, intuitive, “normie” conclusion would be that Nonlinear crossed the line by a wide margin (a handful of normie friends I mentioned this to felt this way).
I’ll also point out that gut instincts can certainly count as Bayesian evidence, and I’m non-trivially incorporating mine here.
If there were a way to bet on it, I’d be eager to. If anyone wants to, I’d probably be down to bet up to a few hundred dollars. I’d trust a lot of random people here (above 100 karma, let’s say) to approach the bet in an honorable way, and I’m not concerned about the possibility that I end up feeling unhappy with how things turn out (worst case it’s a few hundred bucks, oh well).