In trying to reply to this comment I identified four “waves” of AI safety, and lists of the central people in each wave. Since this is socially complicated I’ll only share the full list of the first wave here, and please note that this is all based on fuzzy intuitions gained via gossip and other unreliable sources.
The first wave I’ll call the “founders”; I think of them as the people who set up the early institutions and memeplexes of AI safety before around 2015. My list:
Eliezer Yudkowsky
Michael Vassar
Anna Salamon
Carl Shulman
Scott Alexander
Holden Karnofsky
Nick Bostrom
Robin Hanson
Wei Dai
Shane Legg
Geoff Anders
The second wave I’ll call the “old guard”; those were the people who joined or supported the founders before around 2015. A few central examples include Paul Christiano, Chris Olah, Andrew Critch and Oliver Habryka.
Around 2014/2015 AI safety became significantly more professionalized and growth-oriented. Bostrom published Superintelligence, the Puerto Rico conference happened, OpenAI was founded, DeepMind started a safety team (though I don’t recall exactly when), and EA started seriously pushing people towards AI safety. I’ll call the people who entered the field from then until around 2020 “safety scalers” (though I’m open to better names). A few central examples include Miles Brundage, Beth Barnes, John Wentworth, Rohin Shah, Dan Hendrycks and myself.
And then there’s the “newcomers” who joined in the last 5-ish years. I have a worse mental map of these people, but some who I respect are Leo Gao, Sahil, Marius Hobbhahn and Jesse Hoogland.
In this comment I expressed concern that my generation (by which I mean the “safety scalers”) have kinda given up on solving alignment. But another higher-level concern is: are people from these last two waves the kinds of people who would have been capable of founding AI safety in the first place? And if not, where are those people now? Of course there’s some difference between the skills required for founding a field and those for pushing it forward, but to a surprising extent I keep finding that the people I have the most insightful conversations with are the ones who were around from the very beginning. E.g. I think Vassar is the single person doing the best thinking about the lessons we can learn from the failures of AI safety over the last decade (though he’s hard to interface with); Yudkowsky is still the single person most able to push the Overton window towards taking alignment seriously (even though in principle many other people could have written (less doomy versions of) his Time op-ed or his recent book); Scott is still the single best blogger in the space; and so on.
Relatedly, when I talk to someone who’s exceptionally thoughtful about politics (and particularly the psychological aspects of politics), a disturbingly large proportion of the time it turns out that they worked at (or were somehow associated with) Leverage. This is really weird to me. Maybe I just have Leverage-aligned tastes/networks, but even so, it’s a very striking effect. (Also, how come there’s no young Moldbug?)
Assuming that I’m gesturing at something real, what are some possible explanations?
There was a unique historical period during which blogging culture was coming online, during which a bunch of ideas and people could come together. This is hard for anyone to replicate now, and so they can’t “level up” in the same way.
This is just what it’s like to be “inside” a paradigm in general. Founding it seems like a really impressive achievement that nobody can match by pushing it forward incrementally; and the founders seem brilliant because they can operate the paradigm better than anyone else. Eventually the issues with this paradigm will pile up enough that someone else can found a new paradigm.
The “takeover” of AI safety by EA changed the kinds of people who were attracted to it. The kinds of people who could found a movement have gone elsewhere (but where?)
The world produces fewer of the kinds of people who are capable of founding movements like this now (but why?)
This is all only a rough gesture at the phenomenon, and you should be wary that I’m just being pessimistic rather than identifying something important. Also it’s a hard topic to talk about clearly because it’s loaded with a bunch of social baggage. But I do feel pretty confused and want to figure this stuff out.
My guess would be that nowadays many people who could bring a fresh perspective, or simply high-caliber original thinking, either get selected out or drowned out, or are pushed by social and financial incentives to align their thinking with more “mainstream” views.
Given that Vernor Vinge wrote The Coming Technological Singularity: How to Survive in the Post-Human Era in 1993, which single-handedly established much of the memeplex, including the still ongoing AI-first vs IA-first debate, another interesting question is why nobody founded the AI safety field until around 2000.
For me, I’m not sure when I read this essay, but I did read Vinge’s A Fire Upon the Deep in 1994 as a college freshman, which made me worried about a future AI takeover, but (as I wrote previously) I thought there would be plenty of smarter people working in AI safety, so I went into applied cryptography instead (as a form of d/acc). Eliezer, after reading Vinge as a teen, didn’t immediately heed the implicit or explicit safety warnings, and instead wanted to accelerate the arrival of the Singularity as much as possible. It took him until around 2000 to pivot to safety. Nick Bostrom I think was concerned from the beginning or very early, but he was a PhD student when he got interested, and I guess it took him a while to work through the academic system until he could found FHI in 2005.
Maybe the real question is why didn’t anyone else, i.e., someone with established credentials and social capital, found the field. Why did the task fall to a bunch of kids/students? The fact that nobody did it earlier does seem to suggest that it takes a very rare confluence of factors/circumstances for someone to do it.
(Another tangential puzzle is why Vinge himself didn’t get involved, as he was a professor of computer science in addition to being a science fiction writer. AFAIK he stayed completely off the early mailing lists as well as OB/LW, and had no contact with anyone in AI safety.)
Why did the task fall to a bunch of kids/students?
I’m not surprised by this, my sense is that it’s usually young people and outsiders who pioneer new fields. Older people are just so much more shaped by existing paradigms, and also have so much more to lose, that it outweighs the benefits of their expertise and resources.
Also 1993 to 2000 doesn’t seem like that large a gap to me. Though I guess the thing I’m pointing at could also be summarized as “why hasn’t someone created a new paradigm of AI safety in the last decade?” And one answer is that Paul and Chris and a few others created a half-paradigm of “ML safety”, but it hasn’t yet managed to show impressive enough results to fully take over. However, it did win on a memetic level amongst EAs in particular.
The task at hand might then be understood as synthesizing the original “AI safety” with “ML safety”. Or, to put it a bit more poetically, it’s synthesizing the rationalist approach to aligning AGI with the empiricist approach to aligning AGI.
I’m not surprised by this, my sense is that it’s usually young people and outsiders who pioneer new fields. Older people are just so much more shaped by existing paradigms, and also have so much more to lose, that it outweighs the benefits of their expertise and resources.
All of the fields that come to my mind (cryptography, theory of computation, algorithmic information theory, decision theory, game theory) were founded by much more established researchers. (But on reflection these all differ from AI safety by being fairly narrow and technical/mathematical, at least at their founding.) Which fields are you thinking of, that were founded by younger people and outsiders?
Perplexity AI Pro (with GPT-5.1-Thinking)’s answer to “Who were the founders of academic cryptography research as a field and what were their jobs at the time?”
There isn’t a single universally agreed-on “founder” of academic cryptography. Instead, a small group of researchers in the 1940s–1970s are usually credited with turning cryptography into an open, university-based research field.
No single founder
Histories of the subject generally describe a progression: Claude Shannon’s mathematical theory of secrecy in the 1940s, followed by the public‑key revolution of the 1970s and early 1980s that created today’s academic cryptography community. Shannon’s work was foundational, but it did not yet create an academic field in the modern sense; that came later with Whitfield Diffie, Martin Hellman, Ralph Merkle, and the inventors of RSA, whose work is often described as pioneering “modern” cryptography and has been recognized by ACM Turing Awards for cryptography pioneers.
Early mathematical groundwork
Claude Shannon is widely regarded as the founder of mathematical cryptography; in the 1940s he worked at Bell Labs as a researcher, where he developed the information‑theoretic framework for secrecy systems that later influenced public‑key cryptography. At roughly the same time and into the 1960s, cryptography research also existed in industry—most notably at IBM, where Horst Feistel headed an internal cryptography research group that designed ciphers such as Lucifer, which evolved into the Data Encryption Standard (DES), but this work was largely not yet an open academic discipline.
Founders of modern academic cryptography
Most accounts of “academic cryptography as a field” point first to the group around Stanford in the 1970s, whose work on public‑key ideas made cryptography a mainstream research topic in universities. In that period, the key people and their roles were approximately:
Whitfield Diffie – A researcher working with Martin Hellman at Stanford when they introduced public‑key cryptography in their 1976 work; he had come from earlier industry and research positions and was not yet a long‑tenured professor at that time.
Martin Hellman – A faculty member (electrical engineering professor) at Stanford University, supervising cryptography research and collaborating with Diffie on the new public‑key paradigm.
Ralph Merkle – A graduate student whose “Merkle puzzles” were developed first as an undergraduate project and then as part of his PhD work under Hellman at Stanford, making him one of the student‑level founders of modern academic cryptography.
These three are often singled out as the core founders of modern, open, university‑based cryptography research because their work shifted cryptography from a mostly classified or industrial activity into a widely studied academic topic.
RSA and later theoretical founders
Immediately after the Diffie–Hellman–Merkle work, Ronald Rivest, Adi Shamir, and Leonard Adleman introduced RSA while they were all at MIT, with Rivest and Shamir as faculty members and Adleman as a researcher/graduate student; this solidified public‑key cryptography as a central academic topic and led to sustained university research groups in cryptography. Their roles at MIT and the impact of RSA meant that, alongside the Stanford group, they are commonly treated as co‑founders of modern academic cryptography.
In the 1980s, researchers such as Manuel Blum, Shafi Goldwasser, and Silvio Micali (then faculty and graduate students at Berkeley and MIT) developed the complexity‑theoretic foundations of cryptography—zero‑knowledge proofs, formal security definitions, and provable security—which are why they are frequently listed among the leading theoreticians and foundational figures of the field in modern cryptographer lists.
“What about theory of computation, algorithmic information theory, decision theory, game theory? You can be more concise with these.”
Each of these fields has several widely recognized “founders,” mostly mathematicians, logicians, and economists working in universities or research institutes. Below is a concise mapping from field → key founders → what they were professionally doing at the time.
Theory of computation
Modern theory of computation is usually traced to Alonzo Church and Alan Turing in the 1930s, along with related work by Kurt Gödel. Church was a logician and professor of mathematics at Princeton University when he introduced the lambda calculus and the Church–Turing thesis–style view of effective calculability. Turing was a young mathematician at Cambridge University when he published his 1936 paper defining Turing machines and formalizing computation, later becoming widely regarded as a founder of theoretical computer science.
Algorithmic information theory
Algorithmic information theory was founded by Ray Solomonoff, an American mathematician who, working largely as an independent researcher on algorithmic probability and inductive inference, first presented the core ideas at a Caltech conference and in a 1960 report on a “General Theory of Inductive Inference.” It was then independently and almost simultaneously redeveloped by Andrey Kolmogorov, a leading Soviet mathematician, and Gregory Chaitin, a young mathematician working in industrial research, both of whom formalized similar notions of algorithmic complexity.
Game theory
Modern game theory is generally credited to John von Neumann and Oskar Morgenstern. Von Neumann, a Hungarian mathematician working in pure and applied mathematics (including at Princeton and later the Institute for Advanced Study), introduced the foundational minimax theorem for zero‑sum games and then co‑authored Theory of Games and Economic Behavior in 1944 with Morgenstern, who was at the time an economist and professor (later at Princeton), thereby establishing game theory as a major branch of economics and social science.
Decision theory
Normative decision theory in economics is typically traced to the von Neumann–Morgenstern expected‑utility framework, developed in the same 1944 book that founded game theory. In that work, von Neumann (mathematician) and Morgenstern (economist) provided an axiomatic treatment of rational choice under uncertainty, which is widely treated as the foundational formulation of modern decision theory in economics and statistics.
Though I guess the thing I’m pointing at could also be summarized as “why hasn’t someone created a new paradigm of AI safety in the last decade?”
Creating a new paradigm within an existing field seems different enough from creating a new field that the important factors might differ a lot. Also, by asking this question it seems like you’re assuming that someone should have created a new paradigm of AI safety in the last decade, which a lot of people would presumably disagree with (because they either think the existing paradigms are good enough, or this is just too hard technically). (Basically I’m suggesting it may be hard to interest people in this question, until someone has created such a paradigm, and then you can go back and say “why didn’t someone do this earlier?”)
Chaitin was quite young when he (co-)invented AIT.
Basically I’d bet that capable people are still around; it’s just that the circumstances don’t allow them to rise to the top, for whatever reason.