P(doom) = 60%, 4 years to AGI.
I give full permission for anyone to post part or all of any of my comments/posts to other platforms, with attribution.
Currently upskilling for future work on technical AI alignment research. In the meantime, I want to do part-time work translating AI developments in China for a Western audience. Would really appreciate funding for either.
Will analyze papers and do bioinformatics research for money. DM me if interested.
Let’s just say that weirdness in China is very different from weirdness in the West. AI safety isn’t even a weird concept here. It’s something people talk about, briefly think over, then mostly forget, like Peter Thiel’s new book. People are generally receptive to it. What AI safety needs to gain traction in the Chinese idea sphere is to rapidly dissociate itself from really, really weird ideas like EA. EA is like trying to shove a square peg into the round hole of Chinese psychology. It’s a really bad sign that the AI safety toehold in China is clustered around EA.
Rationality is pretty weird too, and is honestly just extra baggage. Why add it to the conversation?
We don’t need rationality or EA to get Chinese people to care about AI safety. Trying to import the Western EA–AI safety–Rationality memeplex wholesale is both unnecessary and detrimental.