Doesn’t any model contain the possibility, however slight, of seeing the unexpected? Sure, this didn’t fit your model perfectly, and as I read the story and placed myself in your supposed mental state while trying to understand the situation, I felt a great deal of the same surprise. But jumping to the conclusion that someone was simply fabricating is itself an explanation, and it deserves to be weighed against the other explanations for this deviation from your model.
Your model states that, under pretty much all circumstances, an ambulance is going to pick up a patient. That matches my knowledge as well. But what happens if the friend didn’t report to you that, once the ambulance arrived, he called it off and refused to be transported? Or perhaps, at the same time his chest pains were being judged not so severe, the ambulance got another call reporting that a massive car pileup required its immediate presence.
Your strength as a rationalist must lie not in rejecting things unlikely under your model but in assigning appropriate levels of concern to them. Perhaps the best response is something along the lines of “Sounds like a pretty strange occurrence. Are you sure your friend told you everything?” Now we’re starting to judge our level of confidence in the new information itself.
Which is, honestly, a pretty difficult model to shake as well. So much of the information you build your world from comes from other people that I think it fairly reasonable to trust them with some amount of abandon.
I like to think Einstein’s confidence came instead from his belief that Relativity suitably accounted for the KL divergence between the experiments of 1905 and the physics theory of 1905. He was not necessarily in full possession of the evidence required to narrow the hypothesis space down to Relativity (a bit of a misformulation, I feel, since that space still contains a number of other theories equally powerful as, or more powerful than, Physics + Relativity). Instead, he possessed enough evidence that, in his own mental Metropolis jumping, he stumbled across Relativity (possibly the next convenient point on the climb from the prior of Physics toward the posterior that included the new evidence of the time) and sat there.
His comment just reflected a belief that new experiments were unlikely yet to contain information beyond what he had already used. In some sense, their resolution was not yet fine enough to pinpoint anything more precise than Relativity.
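The “mental Metropolis jumping” picture can be made literal with a toy sketch. Everything here is invented for illustration (the one-dimensional hypothesis space, the Gaussian log-posterior, the step size); the point is only that a random-walk Metropolis sampler proposes nearby hypotheses, prefers jumps toward higher posterior probability, and tends to settle in a high-posterior region, much like stumbling across Relativity and sitting there.

```python
import math
import random

def log_posterior(h):
    # Toy log-posterior over a 1D "hypothesis space": one Gaussian
    # bump centered at 3.0, an invented stand-in for the
    # high-probability region (Physics + Relativity).
    return -0.5 * (h - 3.0) ** 2

def metropolis(start, steps, step_size=0.5, seed=0):
    """Random-walk Metropolis: propose a nearby hypothesis, accept it
    with probability min(1, posterior ratio), otherwise stay put."""
    rng = random.Random(seed)
    h = start
    samples = []
    for _ in range(steps):
        proposal = h + rng.gauss(0.0, step_size)
        log_ratio = log_posterior(proposal) - log_posterior(h)
        if math.log(rng.random()) < log_ratio:
            h = proposal  # jump to the (usually) more plausible point
        samples.append(h)
    return samples

samples = metropolis(start=-5.0, steps=5000)
# Discard early "burn-in" steps; the chain should hover near the mode at 3.0.
late = samples[1000:]
mean = sum(late) / len(late)
```

The chain starts far from the mode, wanders toward it, and then fluctuates around it, which is the sense in which the sampler “sits” at a convenient high-posterior point without ever having enumerated the whole hypothesis space.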
Not to knock Einstein, of course. Just because you have new evidence drawing you toward a different posterior hypothesis doesn’t mean the update is going to be easy. That’s perhaps where the philosophy of Bayes runs into the computational limitations of today.