What are you basing this on?[1]
I’m a former employee of Wave, so I want to make it clear that this question is not driven by private information. I would have asked it in response to that sentence no matter what the proper noun was. I have been going on about “it’s impossible to make a utilitarian argument for lying[2], because truth is necessary to calculate utils” for months.
Except when you are actively at war with someone and are considering other usually-banned actions like murder and property destruction.
Hm. Something along these lines, I think:
A prior that most organizations don’t have moderately-sized (or larger) issues that really need to be silenced. This is vaguely informed by my own experiences working for various companies, chatting with friends and acquaintances about their experiences, etc.
A prior that rationalists and rationalist-adjacent people are a good deal above average in terms of how well they treat people.
I’ve read a bunch of benkuhn’s and Dan Luu’s writing. From it, I’m very confident that both of them are really awesome people. And they’re associated with Wave. And I remember Ben writing Wave-specific things that made me feel good about Wave. I see all of this as, I dunno, weak-to-moderate evidence of Wave being “good”.
I see now that lincolnquirk is a cofounder of Wave. I don’t remember anything specific about him, but the name rings a bell of “I have above-average opinions of you compared to other rationalists”. And I have pretty good opinions about the average rationalist.
How does this differ from what you’d expect to see if an organization had substantial downsides, but suppressed negative information?
I think what I’d expect to see in terms of stories of people being mistreated would be roughly the same. Because if they are mistreating people, evidence of that would likely be suppressed.
So where I’m coming from is more that various things, IMO, point toward the prior probability[1] of mistreatment being low in the first place.
(This is a fun opportunity to work on some Bayesian reasoning; I don’t mean to be insensitive about the context it’s in. Please let me know if you or anyone else has comments or advice. Maybe I’m missing something here.)
In the sense of: before opening your eyes and looking at what stories about Wave are out there, what would I expect the probability of there being bad things to be? Or something like that.
As an analogy, suppose you told me that Alice is a manager at Widget Corp. Then you tell me that she is a rationalist. Then you show me her blog, I read it, and I get good vibes. We can ask at this point what I think the probability of her mistreating employees and such is. And given what I know, I’d say that it’s very low. From there, you can say, “Ok, now go out and google stuff about Alice and Widget Corp. How do the results of googling shift your beliefs?” I think they probably wouldn’t shift my beliefs much, since regardless of whether she does bad stuff, if the information is being suppressed, I’m unlikely to observe it. But I can still think that the probability of bad stuff is low, despite the suppression.
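The suppression point can be made concrete with a toy Bayes update. All the numbers below are made up purely for illustration (they aren’t estimates about Wave, Alice, or anyone else): when suppression makes negative stories unlikely to surface even if things are bad, observing no stories barely moves the posterior off the prior.

```python
def posterior(prior_bad, p_obs_given_bad, p_obs_given_good, observed):
    """Update P(bad) after checking whether negative stories are observed.

    prior_bad        -- prior probability the organization is bad
    p_obs_given_bad  -- P(see negative stories | bad)
    p_obs_given_good -- P(see negative stories | good), i.e. baseline noise
    observed         -- whether negative stories were actually seen
    """
    if observed:
        like_bad, like_good = p_obs_given_bad, p_obs_given_good
    else:
        like_bad, like_good = 1 - p_obs_given_bad, 1 - p_obs_given_good
    num = prior_bad * like_bad
    return num / (num + (1 - prior_bad) * like_good)

prior = 0.05  # low prior that Alice mistreats employees (illustrative)

# With suppression, negative stories rarely surface even when things are bad:
p_obs_bad_suppressed = 0.10  # P(see stories | bad, suppression active)
p_obs_good = 0.02            # P(see stories | good) -- baseline noise

# Googling and finding nothing barely moves the estimate:
p = posterior(prior, p_obs_bad_suppressed, p_obs_good, observed=False)
print(round(p, 4))  # ~0.0461, vs. the 0.05 prior
```

The likelihood ratio for “no stories found” is close to 1 under suppression (0.90 vs. 0.98 here), so the posterior stays near the prior — which is exactly why the prior does most of the work in the argument above.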