You make multiple valid points, similar to those I made in my post, but I do think our stances differ in a few ways.
I think that you are certainly correct that psychosis, or a similar type of mental illness / disorder, is a plausible explanatory hypothesis for Annie making the claims that she has.
However, though I recognize that the simplicity of a hypothesis is a boon to its plausibility, I do not share your belief that we have been unknowingly subsumed by the “MeToo world order,” which has damaged our rationalism and obstructed our ability to recognize this as obviously the simplest hypothesis. (Though perhaps that is an overly dramatic or inaccurate representation of your assertion.)
While I agree that this post may describe behavior representative of a person suffering from psychosis or a similar mental illness, I see the hypothesis space as primarily dual. Mental illness / misrepresentation-of-reality hypotheses form one primary subspace, but there is another primary subspace in which the behavior detailed in this post is exactly what we would expect from a person who has actually gone through the things Annie claims she has.
I do appreciate your inclusion of quantitative rates; I think your analysis benefits from it.
Your points are valid, and you also make a good point about the importance of additional context.
I think I may have communicated poorly, given that I largely agree with your reply here.
The clearest and most general framing of my motives is this:
1. My overarching, most fundamental desire is for humanity to have a positive AI future.
2. Because of this, I want to do my best to determine the validity of claims such as Annie’s, which assert that the CEO of the world’s leading artificial intelligence company / research org / lab / whatever you want to call it may actually be a person of highly questionable morals. The whole reason we got OpenAI in the first place is, apparently, that Elon freaked out when Larry Page called him a ‘specist’ back in 2013. (I will not comment on whether I think this was ultimately a good thing.) I very much want the person leading the development of, and attempts at alignment of, superintelligence to be a good person.
I made this post because of (2), not because I thought this forum was the right place to worry about the mental health of Annie Altman. While I am obviously concerned for Annie herself, independent of my superintelligence / Sam Altman / OpenAI concerns, the reason I am posting “about Annie” here on LessWrong is the potential ramifications of what she is saying about Sam Altman. This isn’t an “Annie Altman post”; it’s a “Sam Altman post” in which Annie Altman is the conduit.
Hopefully this framing is more reasonable. And thank you for the compliment—I am trying my best to conduct myself rationally :)