I’m sure some of people’s neglect of these threat models comes from the reasons you mention. But my intuition is that most of it comes from “these are vaguer threat models that seem very up in the air, while other ones seem more obviously real and more shovel-ready” (this is similar to your “Flinch”, but I think more conscious and endorsed).
Thus, I think the best way to converge on whether these threat models are real/likely/actionable is to work through as-detailed-as-possible example trajectories. Someone objects that the state will handle it? Let’s actually think through what the state might look like in 5 years! Someone objects that democracy will prevent it? Let’s actually think through the consequences of cheap cognitive labor for democracy!

This is analogous to what pessimists about single-single alignment have gone through: they had some abstract arguments, people didn’t buy them, so they started working through them in more detail and providing example failures. I buy some parts of them, but not others. And if you did the same for this threat model, I’m uncertain how much I’d buy!
Of course, the paper might have been your way of doing that. I enjoyed it, but I would still have preferred more fully detailed examples on top of the abstract arguments. You do use examples (both past and hypothetical), but they are more like “small, local examples that embody one of the abstract arguments” than “an ambitious (if incomplete) and partly arbitrary picture of how these abstract arguments might actually pan out in practice”. I would like to know the messy details of how you envision these abstract arguments coming into contact with reality. This is why I liked TASRA, and indeed what I was most looking forward to was an expanded, updated, and more detailed version of TASRA.