https://twitter.com/DavidSKrueger
https://www.davidscottkrueger.com/
https://therealartificialintelligence.substack.com/p/the-real-ai-deploys-itself
David Scott Krueger
Reflections on InkHaven
On today’s panel with Bernie Sanders
The AI x-risk lawsuit waiting to happen
On the political feasibility of stopping AI
AI might surprise itself by going rogue
Diary of a “Doomer”: 12+ years arguing about AI risk (part 3: the LLM era)
In a word: InkHaven.
But seriously, I’m still working full-time on Evitable.com and so am trying to churn out my daily blog posts FAST. There are topics I know I have something to say about, and I try to get them down in words in ~1-2 hours tops. In this case, the motivation is something like: “It’s annoying when people make behaviorist arguments about how AIs are more aligned/trustworthy than people”.
Reasons not to trust AI
Diary of a “Doomer”: 12+ years arguing about AI risk (part 2)
Well, I did say “Naively”… but yes I agree the analysis was too naive, and I will edit the post. You make a good point that it can be improved by considering that harms from AI (especially large-scale ones like x-risk) are overdetermined when there are multiple developers. The naive analysis is more accurate when the risk is smaller.
As a side note, if the risk from a single project is so large, then the first project is probably disincentivized at the individual level (would you really want to take an 80% risk of extinction?), and it’s a “pure” coordination problem, like a stag hunt, rather than an incentive problem (like a prisoner’s dilemma).
Another way the “naive” calculation can be wrong (which is the main one I had in mind) is if the risks of different projects are correlated, which they are, e.g. because they are all using similar technology.
I’m not going after particular people’s justifications for their work; I’m going after the institutionalization of “marginal risk” as a relevant concept and the way it justifies unacceptable risk-taking.
What happens after we stop AI?
Marginal Risk is BS
My Last 7 Blog Posts: a weekly round-up
Stop AI Now
I think it’s helpful to disaggregate things sometimes, and e.g. look at what trends might underlie this general trend we observe.
Stop AI
Idea Economics
greater wealth hasn’t changed the picture tremendously
I don’t think I made that claim anywhere in my piece.
This is a good point, and I’m not aware of research asking this question as phrased.