I’m curious about the viewpoint of the other party in these conversations. If they’re not aware of, interested in, or likely to be thinking about the disruptive effects of AI, I would usually just omit mentioning it. You know you’re conditioning on that caveat, and their thinking conditions on it too, just without them realizing it.
If the other party is more AI-aware, and they know you are as well, you can probably keep it simple with something like, “assuming enough normality for this to matter.”
Generally it’s the former, or someone who is faintly AI-aware but not interested in delving into the consequences. However, I’d like to represent my true opinions, which involve significant AI-driven disruption, hence the need for a caveat.