Within my worldview, an important aspect of the “Change Overton Window” plan is that humanity will need to do some pretty nuanced things. It’s not enough to get humanity to do one discrete action that you burn through the epistemic commons to achieve. We need an epistemic-clarity-win that’s stable at the level of a few dozen world/company leaders.
This seems pretty much correct to me (though the sought-after epistemic-clarity-win may turn out to be a broader target with less requisite clarity than many of us suspect). I think it is important to be aware of what kinds of trades we are making, and there is no sense in selling the car for gas money. I also think there are some things worth selling/burning that aren’t needed in order to unlock a “good ending.”
There are real strategic tensions to navigate. I think engaging in some mild motte-and-bailey in order to use a broad coalition of AI-worriers as a battering ram against x-risk is not obviously a bad idea. Or more honestly: I think that is a component of the wisest available path for AI Safety advocacy.
For example, I think it is a good idea to put the phrases “[all these people you like] agree that superintelligence shouldn’t be built” and “a rogue superintelligence might kill us all” next to each other on TV, and it is usually not a good idea to spend any audience attention on clarifying that the people in that list don’t all agree with the latter phrase. However, falsely stating outright that they do agree about something they don’t is probably a mistake on all fronts.
I would much rather walk on broad, flat ground than on a tightrope, but the slope appears to be slippery on both sides, so onwards I walk.
The moment we find ourselves in is an exceptional one in many ways. That doesn’t mean all our hard-earned wisdom can lightly be cast aside. In fact, we are going to have to rely on its inertia to keep our balance. But it does mean that we will have to do an unusual amount of work to evaluate each action on its own strategic merits and its specific likely effects, even if it belongs to a class of actions that are typically frowned upon.
Don’t burn down too much. Stay sane. Leave yourself room to retreat. Leave yourself room to get lucky. Leave yourself room to win. With that, my call is to not cling too tightly to outward performances of epistemic hygiene if they ever stand in the way of reaching the people you need to reach.
I’m interpreting your comment as fitting into a kind of “game theory for rationalists who want to effect change in the real world.” This is complex and situational, and I’m skeptical I can say much that is interesting, actionable, and generalizable. Still, the following statements seem true to me, but I haven’t vetted them:
A strategy of aiming for high epistemic standards has many advantages:
It is a durable way to build and maintain trust.
It is resilient to changing circumstances and knowledge.
Intellectual honesty is an endowment that should be developed, not degraded, because it is necessary for wise decisions, and we will have to make many such decisions to survive and thrive.
However, this might:
limit one’s audience and reach.
not be sufficient.
not be effective against other strategies in certain environments or communication channels.