Tyler Cowen’s challenge to develop an ‘actual mathematical model’ for AI X-Risk
On episode #893 of Russ Roberts's EconTalk podcast, guest Tyler Cowen challenges Eliezer Yudkowsky and the LessWrong/EA alignment communities to develop a mathematical model for AI X-Risk.
Will Tyler Cowen agree that an ‘actual mathematical model’ for AI X-Risk has been developed by October 15, 2023?
(This market resolves to "YES" if Tyler Cowen publicly acknowledges, by October 15, 2023, that an actual mathematical model of AI X-Risk has been developed.)
Two excerpts from the conversation:
...But, I mean, here would be my initial response to Eliezer. I’ve been inviting people who share his view simply to join the discourse. So, they have the sense, ‘Oh, we’ve been writing up these concerns for 20 years and no one listens to us.’ My view is quite different. I put out a call and asked a lot of people I know, well-informed people, ‘Is there any actual mathematical model of this process of how the world is supposed to end?’
So, if you look, say, at COVID or climate change fears, in both cases, there are many models you can look at, including—and then models with data. I’m not saying you have to like those models. But the point is: there’s something you look at and then you make up your mind whether or not you like those models; and then they’re tested against data...
...So, when it comes to AGI and existential risk, it turns out as best I can ascertain, in the 20 years or so we’ve been talking about this seriously, there isn’t a single model done. Period. Flat out.
So, I don’t think any idea should be dismissed. I’ve just been inviting those individuals to actually join the discourse of science. ‘Show us your models. Let us see their assumptions and let’s talk about those.’...