If anyone wants to have a voice chat with me about a topic that I’m interested in (see my recent post/comment history to get a sense), please contact me via PM.
My main “claims to fame”:
Created the first general-purpose open source cryptography programming library (Crypto++, 1995), motivated by AI risk and what’s now called “defensive acceleration”. (A brief usage sketch follows this list.)
Published one of the first descriptions of a cryptocurrency based on a distributed public ledger (b-money, 1998), predating Bitcoin.
Proposed UDT, combining the ideas of updatelessness, policy selection, and evaluating consequences using logical conditionals. (A rough formalization is sketched after this list.)
First to argue for pausing AI development based on the technical difficulty of ensuring AI x-safety (SL4 2004, LW 2011).
Identified current and future philosophical difficulties as core AI x-safety bottlenecks, potentially insurmountable by human researchers, and advocated for research into metaphilosophy and AI philosophical competence as possible solutions.
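To give a flavor of Crypto++’s “general purpose” design, here is a minimal sketch of encrypting a string with AES in CBC mode using the library’s Source → Filter → Sink pipeline idiom. This is my own illustrative example, not code from the library’s documentation; header paths and exact names may vary across library versions.

```cpp
// Minimal AES-CBC encryption sketch with Crypto++ (assumes a standard
// "cryptopp" include prefix; adjust paths for your installation).
#include <iostream>
#include <string>

#include <cryptopp/aes.h>
#include <cryptopp/filters.h>
#include <cryptopp/hex.h>
#include <cryptopp/modes.h>
#include <cryptopp/osrng.h>
#include <cryptopp/secblock.h>

int main() {
    using namespace CryptoPP;

    // Generate a random key and IV.
    AutoSeededRandomPool rng;
    SecByteBlock key(AES::DEFAULT_KEYLENGTH);
    SecByteBlock iv(AES::BLOCKSIZE);
    rng.GenerateBlock(key, key.size());
    rng.GenerateBlock(iv, iv.size());

    std::string plaintext = "Attack at dawn.";
    std::string ciphertext, encoded;

    // Encrypt via the pipeline idiom: the StringSource pumps the
    // plaintext through the cipher filter into the string sink.
    CBC_Mode<AES>::Encryption enc;
    enc.SetKeyWithIV(key, key.size(), iv);
    StringSource ss1(plaintext, true,
        new StreamTransformationFilter(enc,
            new StringSink(ciphertext)));

    // Hex-encode the raw ciphertext for display.
    StringSource ss2(ciphertext, true,
        new HexEncoder(new StringSink(encoded)));
    std::cout << "ciphertext: " << encoded << std::endl;
    return 0;
}
```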
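And a rough formalization of the UDT combination described above (my own paraphrase, not a canonical statement of the theory): the agent commits, from its prior and without updating on observations, to the input-output policy whose logical consequences look best:

```latex
\pi^* \;\in\; \operatorname*{arg\,max}_{\pi\,:\,\mathcal{O} \to \mathcal{A}}
\;\; \mathbb{E}\!\left[\, U \;\middle|\; \ulcorner S() = \pi \urcorner \,\right]
```

Here $S$ is the agent’s own decision algorithm, the condition $\ulcorner S() = \pi \urcorner$ is a logical conditional on what $S$ outputs (not a Bayesian update or a causal intervention), and the expectation is taken under the prior rather than the posterior, which is the “updateless” part.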
Under Some Thoughts on Singularity Strategies (the first link in my OP), I commented:
I did not pursue the IA-first argument myself much after that, as it didn’t seem to be my comparative advantage at the time, and it seemed like @JustinShovelain’s effort was picking up steam. I’m not sure what happened afterwards, but it would be rather surprising if it didn’t have something to do with Eliezer’s insistence on, and optimism about, directly building FAI at the time (which is largely incompatible with “IA first”), though I don’t have any direct evidence of this. I wasn’t part of any physical rationalist communities, and don’t recall any online discussions of Justin’s document after this.
ETA: The same comment quoted a passage from Eliezer saying that he had considered and rejected “IA first”, which probably also directly influenced many people who deferred to him on AI x-risk strategy.