There’s a lot of room for debate about whether these predictions were resolved correctly:
e.g. Heinlein in 1949:
Space travel we will have, not fifty years from now, but much sooner. It’s breathing down our necks.
This is marked as incorrect because the marker assumed it meant mass space travel, but I wouldn’t interpret it that way unless there’s some relevant context I’m missing; keep in mind that this was written in 1949, eight years before Sputnik.[1]
On the other hand:
All aircraft will be controlled by a giant radar net run on a continent-wide basis by a multiple electronic “brain.”
This is marked as correct, apparently on the strength of autopilot and the “USAF Airborne Command Post”? But I would interpret it as predicting active control of the planes by a centralized computer, and would mark it incorrect.[2]
Edited to add: there were a bunch I could have mentioned, but I want to remark on this one, where my interpretation was especially different from the marker’s:
Interplanetary travel is waiting at your front door — C.O.D. It’s yours when you pay for it.
This is also from 1949. The marker interprets this as a prediction of “Commercial interplanetary travel”. I see it instead as a conditional prediction of interplanetary travel (not necessarily commercial), given the willingness to fund it: that is, a prediction that the necessary technology would be available, not necessarily that it would be funded. If that’s the right interpretation, it seems correct to me. Again, I could be completely wrong depending on the context.[3]
[1] Edited to add: I realized I actually have a copy of Heinlein’s “Expanded Universe”, which includes “Where To?” along with his 1965 and 1980 follow-up comments. In context, this statement comes right in the middle of a discussion of hospitals for old people on the moon, which considerably shifts the interpretation towards it being intended to refer to mass space travel, though if Heinlein were still here he could argue it literally meant any space travel.
[2] In context, it’s not 100% clear that he meant a single computer, though I still think so. But he definitely meant full automation outside of emergency or unusual situations; from his 1980 follow-up: “But that totally automated traffic control system ought to be built. … all routine (99.9%+) takeoffs and landings should be made by computer.”
[3] And now, seeing the context, I stand by this interpretation: it’s a standalone comment in the original, but Heinlein’s 1965 follow-up includes “and now we are paying for it and the cost is high”, confirming that government space travel counted in his view. However, given that he asserted we were already paying for it, while interplanetary travel (which I interpret as meaning human space travel) still has not occurred, this might actually cut against counting it as a correct prediction.
I’m not convinced by the argument that AI science systems are necessarily dangerous.
It’s generically* the case that an AI trying to achieve some real-world future effect is dangerous. In that linked post, Nate Soares used chess as an example, which I objected to in a comment: an AI that is optimizing within a chess game isn’t thereby dangerous, as long as the optimization stays within the game. E.g., an AI might reliably choose strong chess moves but still not exhibit real-world Omohundro drives (e.g., it wouldn’t resist being turned off).
I think scientific research is, in this regard, more analogous to chess than to trying to achieve a real-world effect (even if the research has real-world side effects): you can, in principle, optimize for reliably outputting scientific insights without the AI selecting its outputs based on their real-world effects. The outputs are selected for properties aligned with “scientific value”, but that assessment doesn’t necessarily have to take into account how an insight will be used, or any other effect on the future of the world. (You might need to be careful not to evaluate in a way that winds up optimizing for real-world effects, though.)
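To make the distinction concrete, here’s a toy sketch (all names are hypothetical, invented purely for illustration; this isn’t any actual system’s objective). The point is what the objective is a function *of*: the first two evaluations only read domain-internal state, while the third reads predicted world state.

```python
# Toy sketch of the distinction (hypothetical names, not any real system).
# What matters is what state the objective is a function of.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def score_chess_position(piece_counts: dict) -> float:
    """Depends only on in-game state (here, a crude material count).
    An optimizer maximizing this can play strong chess without the
    objective ever referencing anything outside the board."""
    return sum(PIECE_VALUES.get(piece, 0) * n
               for piece, n in piece_counts.items())

def score_insight(novelty: float, rigor: float, generality: float) -> float:
    """Depends only on domain-internal properties of a scientific output
    ("scientific value"), not on how the insight will be used."""
    return novelty + rigor + generality

def score_real_world_plan(predicted_world: dict) -> float:
    """Depends on predicted *world* state: "did the rocket get built?",
    "am I still running?". Optimizing an objective shaped like this is
    what invites Omohundro drives such as shutdown avoidance."""
    return (10.0 * predicted_world.get("rocket_built", 0.0)
            + 1.0 * predicted_world.get("still_running", 0.0))
```

On this framing, the danger in the last objective isn’t its subject matter; it’s that the evaluation ranges over future world states, so instrumentally useful subgoals (like remaining on) score well under it.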
Note: an AI that can “build a fusion rocket” is generically dangerous. But an AI that can merely design a fusion rocket, if that design is based on general principles rather than tightly tuned to produce some exact real-world effect, is likely not dangerous.
*generically dangerous: I use this to mean that an AI with these properties is going to be dangerous unless some unlikely-by-default (and possibly very difficult) safety precautions are taken.