>This is why Egan is avoiding stories of transcendence—out of fear that these will become comforting or enervating falsehoods for the superfans who take it literally.
I don’t think that is quite true. He has been pretty explicit in interviews about his views on this: he was dissatisfied with writing in the ’80s that he saw as “churning out very lame noir plots that utterly squandered the philosophical implications of the technology,” and he regularly expresses dissatisfaction with re-treading concepts he feels he has already explored. It seems pretty evident he views transcendence similarly: less interesting to him as a philosophical concept to explore, and a trope he has already engaged with in a lot of his works (and one many others have engaged with too).
>It’s even why he is apriori against the idea of an LLM-powered singularity happening in the real world.
I think it is better to assume he is honest about his reasons for being critical of AI. He doesn’t find the idea of AI in general implausible, but he views many claims as seemingly silly on their face and unevidenced. He has expressed sympathy for the view that human minds are not inherently unique (including by citing Tegmark, incidentally) and could be emulated by artificial machines (some of his works deal with this directly!), but he explicitly doesn’t see on its face how running human language through a series of regression models would lead to human extinction or create an emergent entity with human intelligence (lacking any evidence of such a thing). That is a perfectly reasonable view, and I would agree with his caution in favor of understanding technology and its advancements within empirical frameworks that we can actually evidence.
The mainstream among scientists was different from the mainstream in political communities. Scientists didn’t share the same expectations (which isn’t to say they never do, or that they didn’t have expectations of their own).
>my question is whether this threshold is higher or delay is greater when the scale at which the theory operates is enormous or tiny.
As I said: the larger the implications of a theory, and the more interwoven and difficult to test those implications are, the harder it is to rule out alternative explanations.
>there was no way that light could have a speed because it would need to be too fast
The speed of light has been measured to be in the ballpark of 300 million m/s since the late 17th century, and there were a number of rather clever ways of estimating it well before Einstein. There wasn’t a consensus from the time of ancient Greece through the 17th century, but since ancient Greece it has been imagined that light might consist of some emission that takes non-zero time to propagate.
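For a sense of how simple those 17th-century estimates were: Rømer noticed that eclipses of Jupiter’s moon Io appeared late when Earth was on the far side of its orbit, because the light had extra distance to cross. A back-of-the-envelope sketch of that arithmetic (using modern values for the Earth–Sun distance and the accumulated delay, not Rømer’s original figures):

```python
# Rømer-style estimate of the speed of light: the accumulated delay in Io's
# eclipse timings equals the time light takes to cross Earth's orbital diameter.
# Illustrative modern numbers, not Rømer's 17th-century data.

AU_METERS = 1.496e11               # mean Earth-Sun distance, in meters
orbital_diameter = 2 * AU_METERS   # extra path length between near and far side

observed_delay_s = 998             # ~16.6 minutes of accumulated delay (modern value)

c_estimate = orbital_diameter / observed_delay_s
print(f"estimated speed of light: {c_estimate:.2e} m/s")
```

Rømer’s own numbers were cruder (he had a poorer value for the astronomical unit and a longer delay), so his estimate came out around 220,000 km/s — but even that was the right order of magnitude, two centuries before Einstein.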