If anyone wants to have a voice chat with me about a topic that I’m interested in (see my recent post/comment history to get a sense), please contact me via PM.
My main “claims to fame”:
Created the first general-purpose open-source cryptography programming library (Crypto++, 1995), motivated by AI risk and what’s now called “defensive acceleration”.
Published one of the first descriptions of a cryptocurrency based on a distributed public ledger (b-money, 1998), predating Bitcoin.
Proposed UDT (Updateless Decision Theory), combining the ideas of updatelessness, policy selection, and evaluating consequences using logical conditionals (a toy sketch follows this list).
First to argue for pausing AI development based on the technical difficulty of ensuring AI x-safety (SL4 2004, LW 2011).
Identified current and future philosophical difficulties as core AI x-safety bottlenecks, potentially insurmountable by human researchers, and advocated for research into metaphilosophy and AI philosophical competence as possible solutions.
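For readers unfamiliar with UDT, here is a minimal, hypothetical Python sketch of just the updatelessness and policy-selection ideas: the agent picks a complete observation-to-action policy by its prior expected utility, rather than first updating on an observation and then choosing an action. The world model, payoffs, and all names below are illustrative assumptions of mine, and real UDT evaluates logical conditionals about its own decision algorithm rather than a simple lookup table.

```python
# Toy, hypothetical sketch of UDT-style policy selection (not a published
# formalization). Logical conditionals are simplified away: utility
# functions here just score the whole policy directly.
from itertools import product

OBSERVATIONS = ["heads", "tails"]
ACTIONS = ["A", "B"]

def make_prior():
    # A prior over toy "worlds". Note that each world's payoff can depend
    # on what the policy does on *every* observation, not only the one
    # actually received -- this is what makes updateless reasoning matter.
    def u_world1(policy):
        # Rewards a policy that commits to "A" on both observations.
        return 1.0 if policy["heads"] == "A" and policy["tails"] == "A" else 0.0
    def u_world2(policy):
        # Rewards answering "B" to "tails".
        return 0.5 if policy["tails"] == "B" else 0.0
    return [(0.6, u_world1), (0.4, u_world2)]

def expected_utility(policy, prior):
    # Prior expected utility of a complete policy, with no updating.
    return sum(p * u(policy) for p, u in prior)

def select_policy(prior):
    # Enumerate every mapping observation -> action and return the one
    # with the highest prior expected utility.
    best_eu, best_policy = float("-inf"), None
    for choices in product(ACTIONS, repeat=len(OBSERVATIONS)):
        policy = dict(zip(OBSERVATIONS, choices))
        eu = expected_utility(policy, prior)
        if eu > best_eu:
            best_eu, best_policy = eu, policy
    return best_policy, best_eu

if __name__ == "__main__":
    print(select_policy(make_prior()))  # -> ({'heads': 'A', 'tails': 'A'}, 0.6)
```

In this toy setup the selected policy plays “A” on both observations, even though an agent that updated on seeing “tails” would locally prefer “B”; selecting the whole policy up front is what the “updateless” part refers to.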
Is this why the smartest humans (e.g., John von Neumann, Terence Tao) go into math, where verification is definitely easier than generation, instead of fields like philosophy and long-horizon strategy, where plans and outputs are much harder for others to judge? (JvN did do some philosophy and strategy, but surprisingly little relative to his abilities and interests, and I note that his philosophical work, in decision theory, was heavily math-flavored.)