If spending $0.1 billion on AI risk is too much money and promising talent, then surely spending $800 billion on military defence is a godawful waste of promising talent. Would you agree that the US should cut its military budget by 99%, down to $8 billion?
I’m not following your thinking. To answer your question: probably no? I suspect that the US (and a lot of the world) would do quite badly if the US military’s budget were cut down by 99%.
But I get the impression you’re doing this as a kind of logical gotcha…? And I don’t see the connection.
Maybe you think I’m saying that we’re putting too much energy into AI safety research? That’s not what I’m saying at all. It’s irrelevant to what I’m saying.
What I’m saying is more like, if the path to getting more funding for AI safety research goes through mainly frightening people into it, then I worry about it creating bad incentives and really unwholesome effects, and thereby maybe not resulting in much good AI risk mitigation.
I think I’m missing your point though. Could you clarify?
Suppose there were a prediction market for “when will most new cars be self-driving,” or some other future event.
Suppose in 2025, the median prediction is that it’ll happen in 2027. Suppose in 2028, the median prediction is that it’ll happen in 2030.
Will that be enough empirical evidence for you to conclude that the crowd is repeatedly predicting short timelines which never materialize? And would you bet your life (or your life savings) against the 2030 prediction?
I like this analogy. The logic is similar. I’m wanting to ask something like “Hey, if we keep betting on most new cars being self-driving, and we keep being wrong, when can we pause and collectively reconsider how we’re doing this? How about thus-and-such time given thus-and-such circumstances; does that work?”
You’re referring to those weird, coercive EA charities which try to guilt people into supporting their cause, right? If you are, I see what you mean. People should avoid them and perhaps warn others against them.
However, I feel this isn’t very clear in your post. You were saying:
In particular, it strikes me that the AI risk community orbiting Less Wrong has had basically the same strategy running for about two decades. A bunch of the tactics have changed, but the general effort occurs to me as the same.
The gist is to frighten people into action. Usually into some combo of (a) donating money and (b) finding ways of helping to frighten more people the same way. But sometimes into (c) finding or becoming promising talent and funneling them into AI alignment research.
This sounded like you were talking about the typical organization, not the most coercive ones. The typical organization does not try to terrify its readers. It does not say you will die and your family will die. It makes the same kind of sober-minded argument that the military makes: that reducing this risk is very cost-effective and urgent.
A lot of charities appeal at least a little to urgency and guilt. Look at this video by the Against Malaria Foundation, which is considered a very reputable charity (IIRC almost every dollar goes to buying nets, with no executive salaries).
Is this unwholesome? It does make you feel bad.
But human empathy and conscience are, by their very nature, designed to make us feel bad, not good or wholesome. If we are not willing to feel bad in order to get things done, should we try to cure ourselves of empathy and conscience?
Everything needs to be taken in moderation. Yes, charities working on AI risk should avoid sensationalism and terror! But that doesn’t mean the typical dollar spent and hour worked on AI safety comes from “being frightened into it,” while dollars spent and hours worked on military defence somehow don’t.
The average promising talent who joins the military experiences far more fear than the average promising talent who works in AI safety. Even those who never face active combat are still forced to think about the possibility all day, and other unwholesome things happen in the military. This is an argument that we should invest in their well-being, not an argument that we should dramatically tone down the priority or urgency of military defence.
I think if people repeatedly delay their predictions, it’s reasonable to suspect the predictions are too early. But AI 2027 being delayed to 2028+ is only one data point, and past predictions have moved closer rather than further.
On the other hand, just now I was looking at a video clip about self-driving seeming so close in 2013 and then not happening: https://www.reddit.com/r/singularity/comments/1lfzmbc/andrej_karpathy_says_selfdriving_felt_imminent/
So there’s great uncertainty.
I hope you can continue to argue for more positivity and optimism, but don’t paint concern (“fear”) and urgency as the bad guy :). During hard times, optimism can coexist with concern and urgency.
PS: Even if I disagree, it’s meaningful that you believe people are making a mistake, that you’re making an effort to help them out even if they don’t all appreciate you, and that you’re not giving up :).
Out of curiosity, I looked at your post “Here’s the exit.”
Although I didn’t agree with it, seeing all the people harshly criticizing you really made me feel your frustration. Yes, this is the LessWrong echo chamber. The internet forum hivemind.
I feel very sorry for being sarcastic and so on; you’ve obviously received too much of that from others.
Reading their accusations, I started to feel like I’m on your side :/ and my biased brain decided to remember all the examples of what you’re talking about.
I remember all these discussions where people didn’t just objectively believe that P(doom) was high; they were sort of rude and antagonistic about it. I remember some people villainizing AI researchers similarly to how militant vegans villainize meat eaters. It doesn’t even work: you don’t convince meat eaters by telling them they’re monsters; you’re supposed to be empathetic.
I really wish some of those people would chill out more, and I think your post is very good.
(Strangely, I can’t find these examples right now. All I found was this comment, which wasn’t that bad, and the replies to your “Here’s the exit” post.)