It certainly seems that a mastery of tank warfare would have helped a lot. But the British experience with tanks shows that there was a huge amount of resistance within the military to new forms of warfare. Britain only had tanks because Winston Churchill made it his priority to support them.
New weapon systems are not impressive at first. The old ways are typically a local optimum. So the real question here is how to leave that local optimum!
That’s a good point. I will clarify. I mean [a] - you win, the enemy surrenders.
I’m struggling to see why fun books would make any difference. Germany didn’t lose because it ran out of light reading material.
As for troop morale and so on, I don’t think that was a decisive element, since by the time it started to matter, defeat was already overdetermined.
In other words, I think Germany would have lost WWI even with infinite morale.
If it pays out in advance it isn’t insurance.
A contract that relies on a probability to calculate payments is also a serious theoretical headache. If you are a Bayesian, there’s no objective probability to use, since probabilities are subjective: they exist only relative to a state of partial ignorance about the world. If you are a frequentist, there’s no dataset of past extinctions to draw a frequency from.
There’s another issue.
As the threat of extinction grows both larger and nearer in time, it can easily be the case that there’s no possible payment that people ought rationally to accept.
Finally, different people have different risk tolerances: some will gladly take a large risk of death for an upfront payment, while others wouldn’t take it even for infinity money.
E.g. right now I would take a 16% chance of death for a $1M payment, but if I had $50M net worth I wouldn’t take a 16% risk of death even if infinity money was being offered.
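A toy way to see why no payment can suffice for the wealthy person, under an assumed bounded utility of wealth (the functional form, the $10M scale, and the numbers are my illustration, not anything from the thread):

```python
import math

# Illustrative assumption: bounded utility of wealth, with death pegged at 0.
SCALE = 10_000_000  # hypothetical "$10M satiation scale"

def u(wealth: float) -> float:
    """Bounded utility: approaches 1 as wealth grows, so gains saturate."""
    return 1 - math.exp(-wealth / SCALE)

def eu_take_gamble(wealth: float, payment: float, p_death: float = 0.16) -> float:
    """Expected utility of accepting a p_death risk for an upfront payment."""
    return p_death * 0.0 + (1 - p_death) * u(wealth + payment)

# Modest net worth ($100k): taking 16% risk for $1M beats the status quo.
print(eu_take_gamble(100_000, 1_000_000) > u(100_000))  # True

# $50M net worth: even an infinite payment caps expected utility at
# 0.84 * 1 = 0.84, below the status-quo utility of ~0.993.
print((1 - 0.16) * 1.0 < u(50_000_000))  # True
```

With bounded utility, the best any payment can do is push the survival branch to the utility ceiling, and 84% of the ceiling can still fall below what the rich person already has — which is exactly the "not even for infinity money" case.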
Since these x-risk companies must compensate everyone at once, even a single rich person in the world could make them uninsurable.
I think you get a nerdy novel-reading society and a loss of WWI, for the same reasons it was lost in our timeline.
But I don’t see an actionable plan for winning here?
Sure you can bring decision theory knowledge. All I’m disallowing is something like bringing back exact plans for a nuke.
Well, it turned out that attacking on the Western Front in WWI was basically impossible. The front barely moved over four years, and that was with far more opposing soldiers over a much wider front.
So the best strategy for Germany would have been to dig in really deep and just wait for France to exhaust itself.
At least that’s my take as something of an amateur.
But the British could have entered the war anyway. After all, British war goals were to maintain the balance of power in Europe, and they didn’t want France and Russia to fall and Germany to become too strong.
OK, but if I were roleplaying the German side, I might choose to still start WWI but just not attack through Belgium: hold the Western Front against France and attack Russia.
True. I may in fact have been somewhat underconfident here.
I think violence helps unaligned AI more than it helps aligned AI.
If the research all goes underground it will slow it down but it will also make it basically guaranteed that there’s a competitive, uncoordinated transition to superintelligence.
Well, Altman is back in charge now… I don’t think I’m being overconfident.
It seems that I was mostly right on the specifics: there was a lot of resistance to getting rid of Altman, and he is back (for now).
I didn’t make anything up. Altman is now back in charge BTW.
Well the new CEO is blowing kisses to him on Twitter
Well the board are in negotiations to have him back
“A source close to Altman says the board had agreed in principle to resign and to allow Altman and Brockman to return, but has since waffled — missing a key 5PM PT deadline by which many OpenAI staffers were set to resign. If Altman decides to leave and start a new company, those staffers would assuredly go with him.”