Fine, and also I’m not saying what to do about it (shame or polarize or whatever), but PRIOR to that, we have to STOP PRETENDING IT’S JUST A VIEW. It’s a conflictual stance that they are taking. It’s like saying that the statisticians arguing against “smoking causes cancer” “have a nuanced view”.
I’m not pretending it’s just a view. The immense importance of this issue is another reason to avoid polarization. Look at how the climate change issue worked out with polarization involved.
The arguments for caution are very strong. Proponents of caution are at an advantage in a discussion. We’re also a minority, so we’re at a disadvantage in a fight. So it seems important to not help it move from being a discussion to a fight.
climate change issue worked out with polarization involved
The climate change issue has pretty widespread international agreement and in most countries is considered a bipartisan issue. The capture of climate change by polarising forces has not really affected intervention outcomes (other problems of implementation are, imo, far greater).
I don’t want to derail the AI safety organising conversation, but I see this climate change comparison come up a lot. It strikes me as a pretty low-quality argument and it’s not clear a) whether the central claim is even true and b) whether it is transferable to organising in AI safety.
The flipside of the polarisation issue is the “false balance” issue, and that reference to smoking by TsviBT seems to be what this discussion is pointing at.
Admittedly, most of the reason we are able to solve climate change easily even though polarization happened is that the problem turned out to be far easier to solve than feared (if we don’t care much about animal welfare, which is the case for ~all humans) without much government intervention.
I actually think this has a reasonable likelihood of happening for AI as well. But conditional on there being no alignment solution cheap enough to be adopted without large government support (if one is doable at all), polarization matters far more here, so climate change is actually a useful case study for worlds where alignment is hard.
The climate change issue didn’t become polarized in other countries, and that’s good. It did get polarized here in the US, and that’s bad. It has roadblocked even having discussions about solutions behind the increasingly ridiculous (but also increasingly prevalent) “question” of whether human-caused climate change is even real. People in the US questioned the reality of anthropogenic climate change MORE even as the evidence for it mounted, because the issue had become polarized, so it was more about identity than facts and logic. See my AI scares and changing public beliefs for one graph of this maddening degradation of clarity.
So why create polarization on this issue?
The false balance issue is separate. One might suppose that creating polarization leads to false balance arguments, because once there are two sides, fairness seems to demand giving both of them equal weight. If there is just a range of opinions, false balance is harder to argue for.
I don’t know what you mean by “the central claim” here.
I also don’t want to derail to actually discussing climate change; I just used it as one example in which polarization was pretty clearly really bad for solving a problem.
Sorry, it was perhaps unfair of me to pick on you for making the same sort of freehand argument that many others have made; maybe I should write a top-level post about it.
To clarify: the ideas that “climate change is not being solved because of polarisation” and that “AI safety would suffer from being like climate action [due to the previous]” are twin claims, and neither is obvious. These arguments seem reasonable on the surface, but they hinge on a lot of internal American politics and don’t engage with the breadth of drivers of climate action. To some extent they expose the claim that AI safety is an international movement as lip service, because they seek to explain the solution of an international problem solely within the framework of US politics. I also feel the polarisation of climate change is itself sensationalised.
But I think what you’ve said here is more interesting:
One might suppose that creating polarization leads to false balance arguments, because once there are two sides, fairness seems to demand giving both of them equal weight. If there is just a range of opinions, false balance is harder to argue for.
It seems like you believe the opposite of polarisation is plurality (all arguments seen as equally valid), whereas I would see the opposite of polarisation as consensus (one argument is seen as valid). Both are in contrast to polarisation (different groups see different arguments as valid). “Valid” here means something more like “respectable” than “100% accurate”. But indeed, it’s not obvious to me that the chain of causality is polarisation → desire for false balance, rather than desire for false balance → polarisation. (I’ll also gesture, handwavily, at the idea that this desire for false balance comes from conflicting goals a la conflict theory.)
So it seems important to not help it move from being a discussion to a fight.
It seems like part of the practical implication of whatever you mean by this is to say:
Calling people kind of stupid for holding the position they do (which Tegmark’s framing definitely does)
Like, Tegmark’s post is pretty neutral, unless I’m missing something. So it sounds like you’re saying to not describe there being two camps at all. Is that roughly what you’re saying? I’m saying that in your abstract analysis of the situation, you should stop preventing yourself from understanding that there are two camps.
I’m just repeating Raemon’s sentiment and elaborating on some reasons to be concerned about this. I agree with him that simply not using the “which side are you on” framing in the title would have much the same upside with much less polarization downside.
The fact that there are people advocating for two incompatible strategies does not mean that there are two groups in other important senses. One could look at it, and I do, as a bunch of confused humans in a complex domain, none of whom have a very good grip on the real situation, and who fall on different sides of this policy issue, but could be persuaded to change their minds on it.
The title “Which side of the AI safety community are you in?” reifies the existence of two groups at odds, with some sort of group identity, and there doesn’t seem to be much benefit to making the call for signatures that way.
So yes, I’m objecting to using the “two groups” framing at all, let alone in the title and as the central theme. Motivating people by stirring up resentment against an outgroup is a strategy as old as time. It works in the short term. But it has big long-term costs: now you have a conflict between groups instead of a bunch of people with a variety of opinions.