If I told someone ‘I bet Stockfish could beat you at chess,’ I think it is very unlikely they would demand that I provide the exact sequence of moves it would play.
I think the key differences are that (1) the adversarial nature of chess is a given (a company merger could, or should, be collaborative), and (2) people know it is possible to ‘win’ at chess; forcing a stalemate is not easy. In noughts and crosses, getting a draw is pretty easy: no matter how smart the computer is, I can at least force a tie. For all I (or most people) know, company mergers that turn adversarial might look more like noughts and crosses than chess.
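As an aside, the ‘I can at least tie’ claim about noughts and crosses is easy to check mechanically. Here is a minimal sketch in plain Python (no external dependencies; the function names are just illustrative) that computes the game-theoretic value of the game from the empty board by exhaustive minimax. It comes out 0, i.e. a draw, however smart either side is:

```python
from functools import lru_cache

# The eight winning lines on a 3x3 board, squares indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game-theoretic value: +1 if X forces a win, -1 if O does, 0 if a draw."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if "." not in board:
        return 0
    nxt = "O" if player == "X" else "X"
    vals = [value(board[:i] + player + board[i + 1:], nxt)
            for i, sq in enumerate(board) if sq == "."]
    return max(vals) if player == "X" else min(vals)

print(value("." * 9, "X"))  # prints 0: perfect play always ends in a draw
```

The same exhaustive approach is hopeless for chess, which is part of why it is not obvious a priori which game a given real-world conflict resembles.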
So I think what people actually want is not so much a sketch of how they will lose, but more a sketch of the facts that (1) it is an adversarial situation and (2) it is very likely someone will lose (not a tie). At that point you are already pretty worried (a 50% chance of losing) even if you think your enemy is no stronger than you.
The analogy comes apart at the seams. It’s true Stockfish will beat you in a symmetric game, but suppose instead we had an asymmetric game, say one played with odds.
Someone asks who will win. Another replies, ‘Stockfish will win, because Stockfish is smarter.’ They respond, ‘That doesn’t make the answer any clearer; can you explain how Stockfish would win from this position despite these asymmetries?’ And indeed chess is such that engines can win from some positions and not others, and it’s not always obvious a priori which are which. The world is much more complicated than that.
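For what it’s worth, the ‘not obvious a priori’ point can be probed empirically rather than argued from first principles. Here is a minimal sketch, assuming the python-chess library (`pip install chess`) and a local Stockfish binary on your PATH (both assumptions, not part of the original comment): it sets up a knight-odds position and asks the engine how winnable it judges the position to be.

```python
import chess
import chess.engine

# Start from the normal initial position, then give White knight odds
# by removing the queen's knight from b1.
board = chess.Board()
board.remove_piece_at(chess.B1)

# "stockfish" here assumes the binary is on your PATH; adjust as needed.
engine = chess.engine.SimpleEngine.popen_uci("stockfish")
info = engine.analyse(board, chess.engine.Limit(time=1.0))

# The score is reported from White's point of view; a large negative
# number means the engine already considers the handicap decisive.
print("Evaluation (White's POV):", info["score"].white())
engine.quit()
```

Running this for different handicaps gives a rough empirical map of which asymmetric starting positions an engine can still win from, which is exactly the kind of specificity the hypothetical questioner is asking for.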
I say this not asking for clarification; I think it’s fairly obvious that a sufficiently smart system wins in the real world. I also think it’s fine to hold on to heuristic uncertainties, like Elizabeth mentions. But I do think it’s pretty unhelpful to claim certainty and then balk at giving specifics that actually address the systems as they exist in reality.
If I told someone ‘I bet Stockfish could beat you at chess,’ I think it is very unlikely they would demand that I provide the exact sequence of moves it would play.
I think this is in large part because it’s a lower-stakes claim: they don’t lose anything by saying you’re right.
Emotionally, I think this is closer to telling a gambler that they’re extremely unlikely to ever win the lottery. They have to actually make a large change to their life and feel like they’re giving up on something.