Karma: 824

# A study on cults and non-cults—answer questions about a group and get a cult score

19 Jun 2024 14:30 UTC
−1 points
(www.guidedtrack.com)

# What should the EA community learn from the FTX / SBF disaster? An in-depth discussion with Will MacAskill on the Clearer Thinking podcast

16 Apr 2024 13:11 UTC
20 points
(podcast.clearerthinking.org)
• Thanks for your comment. Some thoughts:

“But a lot of your pro-DAE evidence seems to me to fail this test. E.g. ok, he lied to the customers and to the Congress; why is this substantial evidence of DAE in particular?”

Because E is evidence in favor of a hypothesis H if:

P(E given H is true) > P(E given H is false)

And the strength of the evidence is determined by the ratio:

Bayes factor = P(E given H is true)/P(E given H is false)

In my view, there isn’t really any reasonable mathematical definition of evidence other than the Bayes factor (or transformations of the Bayes factor).

Applied to this specific case:

Probability(Lying to Congress given DAE) > Probability(Lying to Congress given not DAE)

And the reason that inequality is true is because people with DAE are more likely to lie than people without DAE (all else equal).
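The inequality and ratio above can be sketched numerically. The probabilities below are hypothetical, purely for illustration; they are not estimates from this discussion.

```python
# Hypothetical likelihoods (illustrative assumptions, not real estimates):
p_e_given_h = 0.60      # P(E | H): P(lying to Congress | DAE)
p_e_given_not_h = 0.05  # P(E | not H): P(lying to Congress | no DAE)

# E is evidence for H whenever P(E | H) > P(E | not H);
# the strength of that evidence is the Bayes factor.
bayes_factor = p_e_given_h / p_e_given_not_h
print(round(bayes_factor, 2))  # 12.0 -- under these numbers, E is 12x
                               # more likely if H is true, so E favors H
```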

“Everything under this seems to fail the rain test, at least; very many people have this willingness [to lie and deceive others] most of them don’t have DAE (simply based on the prevalence you mention). Is this particular “style” of dishonesty characteristic of DAE?”

The question of whether E is evidence for H is not the same as the question “Is H true most of the time when E?” That’s just a different question, and in my view, not the correct question to ask when evaluating evidence. The question to ask to evaluate evidence is whether the evidence is more likely if the hypothesis is true than if it’s not true.

And yes, lying is indeed characteristic of DAE.
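The distinction between the two questions can also be made concrete. With a low assumed prevalence (the numbers are again hypothetical), most people exhibiting E still don't have H, even though E is strong evidence for H:

```python
# All three numbers are illustrative assumptions, not real estimates.
prior = 0.03            # assumed prevalence of DAE
p_e_given_h = 0.60      # P(E | H)
p_e_given_not_h = 0.05  # P(E | not H)

# Bayes' theorem:
# P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|not H)P(not H)]
posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior)
)
print(round(posterior, 3))  # 0.271 -- "is H true most of the time when E?"
                            # is no, even though the Bayes factor (12x)
                            # says E is substantial evidence for H
```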

• I’m glad to see that Nonlinear’s evidence is now public, since Ben’s post did not seem to be a thorough investigation. As I said to Ben before he posted his original post, I knew of evidence that strongly contradicted his post, and I encouraged him to temporarily pause the release of his post so he could review the evidence carefully, but he would not delay. [cross posted this comment on EA forum]

• At the top it says it’s a link post and links to the full post; I thought that would make it clear that it’s a link post, not a full post.

It’s difficult to keep three versions in sync as I fix typos and correct mistakes, which is why I prefer to not have three separate full versions.

• The reason I talk about DAE and not NPD is that they are different conditions. While investigating this, I took seriously the possibility that NPD was the cause, but I didn’t find enough evidence for that explanation, whereas I found a lot of evidence for DAE. If you think I’m wrong and see significant evidence for NPD, I’d be interested to see that evidence.

Not to say that DAE and NPD have nothing to do with each other, but they aren’t the same.

I would never say to someone who was abused by someone with NPD that they are merely experiencing the result of DAE.

To clarify, DAE refers to two very specific things: a person lacking the emotion of guilt, and/or a person lacking the experience of empathy.

NPD in the DSM-5, as I understand it, involves the presence of at least 5 of the following 9 criteria:

- A grandiose sense of self-importance
- A preoccupation with fantasies of unlimited success, power, brilliance, beauty, or ideal love
- A belief that he or she is special and unique and can only be understood by, or should associate with, other special or high-status people or institutions
- A need for excessive admiration
- A sense of entitlement
- Interpersonally exploitive behavior
- A lack of empathy
- Envy of others or a belief that others are envious of him or her
- A demonstration of arrogant and haughty behaviors or attitudes

So a lack of empathy (from DAE) is one potential feature of NPD out of 9. Lack of guilt is not on the list at all.

• Thanks for letting me know

• There were clear ways in which he was really bad at things, but also clear ways in which he was really good at some things. The FTX exchange is not something easy to build, and it’s much harder still to make it into a successful exchange like he did. It seems pretty clear he was really skilled at some things, despite his big weaknesses, so I don’t think this can be dismissed as him just being bad at stuff. Also, him being bad at stuff doesn’t explain the highly unethical actions that he appears to have taken.

• I do go into that—see the full version on my blog.

• It’s more specific than sociopathy. Also, terms like sociopath/psychopath are problematic because people have a lot of associations with those terms, not all of them accurate, so I thought it would be better to be more precise about what I mean and to avoid terms that people have connotations around.

# Who is Sam Bankman-Fried (SBF) really, and how could he have done what he did? - three theories and a lot of evidence

11 Nov 2023 1:04 UTC
36 points
(www.spencergreenberg.com)
• You’re using a different word “utility” than I am here. There are at least three definitions of that word. I’m using the one from hedonic utilitarianism (since that’s what most EAs identify as), not the one from decision theory (e.g., “expected utility maximization” as a decision theory), and not the one from economics (rational agents maximizing “utility”).

# Should Effective Altruists be Valuists instead of utilitarians?

25 Sep 2023 14:03 UTC
1 point
• If we want to look at general principles rather than specific cases, if the original post had not contained a bunch of serious misinformation (according to evidence that I have access to) then I would have been much more sympathetic to not delaying.

But the combination of serious misinformation + being unwilling to delay a short period to get the rest of the evidence I find to be a very bad combination.

I also don’t think the retaliation point is a very good one, as refusing to delay doesn’t actually prevent retaliation.

I also don’t find the lost productivity point particularly strong, given that this was a major investigation already involving something like 150 hours of work. In that context, another 20 hours carefully reviewing evidence seems minimal (if it’s worth ~150 hours to investigate, it’s presumably worth 170 to ensure it’s accurate).

Guarding against reality distortion fields is an interesting point I hadn’t thought of until Oliver brought it up. However, it doesn’t seem (correct me if I’m wrong) that Ben felt swayed away from posting after talking to nonlinear for 3 hours—if that’s true then it doesn’t seem like much of a concern here. I also think pre-committing to a release date helps a bit with that.

• Hi all, I wanted to chime in because I have had conversations relevant to this post with just about all involved parties at various points. I’ve spoken to “Alice” (both while she worked at nonlinear and afterward), Kat (throughout the period when the events in the post were alleged to have happened and afterward), Emerson, Drew, and (recently) the author Ben, as well as, to a much lesser extent, “Chloe” (when she worked at nonlinear). I am (to my knowledge) on friendly terms with everyone mentioned (by name or pseudonym) in this post. I wish well for everyone involved. I also want the truth to be known, whatever the truth is.

I was sent a nearly final draft of this post yesterday (Wednesday), once by Ben and once by another person mentioned in the post.

I want to say that I find this post extremely strange for the following reasons:

(1) The nearly final draft of this post that I was given yesterday had factual inaccuracies that (in my opinion and based on my understanding of the facts) are very serious despite ~150 hours being spent on this investigation. This makes it harder for me to take at face value the parts of the post that I have no knowledge of. Why am I, an outsider on this whole thing, finding serious errors in the final hours before publication? That’s not to say everything in the post is inaccurate, just that I was disturbed to see serious inaccuracies, and I have no idea why nobody caught these (I really don’t feel like I should be the one to correct mistakes, given my lack of involvement, but it feels important to me to comment here since I know there were inaccuracies in the piece, so here we are).

(2) Nonlinear reached out to me and told me they have proof that a bunch of claims in the post are completely false. They also said that in the past day or so (upon becoming aware of the contents of the post), they asked Ben to delay his publication of this post by one week so that they could gather their evidence and show it to Ben before he publishes it (to avoid having him publish false information). However, he refused to do so.

This really confuses me. Clearly, Ben spent a huge amount of time on this post (which has presumably involved weeks or months of research), so why not wait one additional week for Nonlinear to provide what they say is proof that his post contains substantial misinformation? Of course, if the evidence provided by nonlinear is weak, he should treat it as such, but if it is strong, it should also be treated as such. I struggle to wrap my head around the decision not to look at that evidence. I am also confused why Ben, despite spending a huge amount of time on this research, apparently didn’t seek out this evidence from Nonlinear long ago.

To clarify: I think it’s very important in situations like this not to let the group being criticized have a way to delay publication indefinitely. If I were in Ben’s shoes, I believe what I would have done is say something like, “You have exactly one week to provide proof of any false claims in this post (and I’ll remove any claim you can prove is false), and then I’m publishing the post no matter what.” This is very similar to the policy we use for our Transparent Replications project (where we replicate psychology results of publications in top journals), and we have found it to work well. We give the original authors a specific window of time during which they can point out any errors we may have made (which is at least a week). This helps make sure our replications are accurate, fair, and correct, and yet the teams being replicated have no say over whether the replications are released (they always are released regardless of whether we get a response).

It seems to me that basic norms of good epistemics require that, on important topics, you look at all the evidence that can be easily acquired.

I also think that if you publish misinformation, you can’t just undo it by updating the post later or issuing a correction. Sadly, that’s not the way human minds/social information works. In other words, misinformation can’t be jammed back into the bottle once it is released. I have seen numerous cases where misinformation is released only later to be retracted, in which the misinformation got way more attention than the retraction, and most people came away only with the misinformation. This seems to me to provide a strong additional reason why a small delay in the publication date appears well worth it (to me, as an outsider) to help avoid putting out a post with potentially substantial misinformation. I hope that the lesswrong/EA communities will look at all the evidence once it is released, which presumably will be in the next week or so, in order to come to a fair and accurate conclusion (based on all the evidence, whatever that accurate final conclusion turns out to be) and do better than these other cases I’ve witnessed where misinformation won the day.

Of course, I don’t know Ben’s reason for jumping to publish immediately, so I can’t evaluate his reasons directly.

Disclaimer: I am friends with multiple people connected to this post. As a reminder, I wish well for everyone involved, and I wish for the truth to be known, whatever that truth happens to be. I have acted (informally) as an advisor to nonlinear (without pay) - all that means, though, is that every so often, team members there will reach out to me to ask for my advice on things.

Note: I’ve updated this comment a few times to try to make my position clearer, to add some additional context, and to fix grammatical mistakes.

# Announcing the Clearer Thinking micro-grants program for 2023

7 Aug 2023 15:21 UTC
14 points
(www.clearerthinking.org)
• The way you define values in your comment:

“From the AI “engineering” perspective, values/valued states are “rewards” that the agent adds themselves in order to train (in RL style) their reasoning/planning network (i.e., generative model) to produce behaviours that are adaptive but also that they like and find interesting (aesthetics). This RL-style training happens during conscious reflection.”

is just something different than what I’m talking about in my post when I use the phrase “intrinsic values.”

From what I can tell, you seem to be arguing:

[paraphrasing] “In this one line of work, we define values this way”, and then jumping from there to “therefore, you are misunderstanding values,” when actually I think you’re just using the phrase to mean something different than I’m using it to mean.

# Valuism—an approach to life for you to consider

19 Jul 2023 15:23 UTC
17 points