Listening to people demand more specifics from If Anyone Builds It, Everyone Dies gives me a similar feeling to when a friend’s start-up was considering a merger.
Friend got a bad feeling about this because the other company clearly had different goals, was more sophisticated than them, and had an opportunistic vibe. Friend didn’t know how, specifically, the other company would screw them, but that was part of the point: their company wasn’t sophisticated enough to defend itself from the other one.
Friend fought a miserable battle with their coworkers over this. They were called Chicken Little because they couldn’t explain their threat model, until another employee stepped in with a story of how they’d been outmaneuvered at a previous company in exactly the way Friend feared but couldn’t describe. Suddenly, co-workers came around on the issue. They ultimately decided against the merger.
“They’ll be so much smarter I can’t describe how they’ll beat us” can feel like a shitty argument because it’s hard to disprove, but sometimes it’s true. The debate has to be about whether a specific They will actually be that smart.
If I told someone ‘I bet Stockfish could beat you at chess’, I think it is very unlikely they would demand that I provide the exact sequence of moves it would play.
I think the key differences are that (1) the adversarial nature of chess is a given (a company merger could, or should, be collaborative), and (2) people know it is possible to ‘win’ chess: forcing even a draw against a stronger player is not easy. In noughts and crosses, getting a draw is pretty easy; it doesn’t matter how smart the computer is, I can at least tie. For all I (or most people) know, company mergers that become adversarial might look more like noughts and crosses than chess.
So I think what people actually want is not so much a sketch of how they will lose, but a sketch of the facts that (1) it is an adversarial situation and (2) it is very likely someone will lose (not a tie). At that point you are already pretty worried (a 50% chance of losing) even if you think your enemy is no stronger than you.
The analogy comes apart at the seams. It’s true Stockfish will beat you in a symmetric game, but let’s say we had an asymmetric game, say with odds.
Someone asks who will win. Someone replies, ‘Stockfish will win because Stockfish is smarter.’ They respond, ‘this doesn’t make the answer seem any clearer; can you explain how Stockfish would win from this position despite these asymmetries?’ And indeed chess is such that engines can win from some positions and not others, and it’s not always obvious a priori which are which. The world is much more complicated than that.
I say this not asking for clarification; I think it’s fairly obvious that a sufficiently smart system wins in the real world. I also think it’s fine to hold on to heuristic uncertainties, like Elizabeth mentions. But I do think it’s pretty unhelpful to claim certainty and then balk at giving specifics that actually address the systems as they exist in reality.
If I told someone ‘I bet Stockfish could beat you at chess’, I think it is very unlikely they would demand that I provide the exact sequence of moves it would play
I think this is in large part because it’s a lower stakes claim—they don’t lose anything by saying you’re right.
Emotionally, I think this is closer to telling a gambler that they’re extremely unlikely to ever win the lottery. They have to actually make a large change to their life and feel like they’re giving up on something.
I mean, in general, it’s a lot easier to tell plausible-seeming stories of things going really poorly than actually high-likelihood stories of things going poorly. So the anecdata of it actually happening is worth a lot.
Sure, but the fact that that’s a really reasonable algorithm would not have saved the co-workers from the consequences of merging with the probably-predatory company, in the world where the company didn’t happen to have an employee with the perfect anecdote.
Yeah, but there would be a lot of worlds where the merger was totally fine and beneficial, where it fell through because people had unfounded fears.

Or maybe there wouldn’t be a lot of worlds where the merger was totally fine and beneficial, because if you don’t have enough discernment to tell founded from unfounded fears, you’ll fall into adverse selection and probably get screwed over. (Some domains are like that; I don’t know if this one is.)