I think I’m particularly triggered by all this because of a conversation I had last year with someone who takes AI takeover risk very seriously and could double AI safety philanthropy if they wanted to. I was arguing they should start funding AI safety, but the conversation was a total misfire because they conflated “AI safety” with “stop AI development”: their view was that stopping AI development will never happen, and they were actively annoyed at hearing what they considered such a dumb idea. My guess was that EY’s TIME article was a big factor there.
Mandatory check: was this billionaire a sociopath who made their money unethically or illegally (perhaps through crypto), like the last time you persuaded someone in this position to put tons of their philanthropy into AI safety?
(Perhaps you can show that they weren’t, but given your atrocious track record, these datapoints shouldn’t really be taken seriously without double-checking.)
(As a suggestion, you could DM the name to me or anyone in this thread and have them report back their impression of whether the person is a crook or obviously unethical, without releasing the identity widely.)