This requires some level of cover-up (the developers likely know), and after a few years' lead, a superintelligent cover-up (otherwise other labs would convergently reinvent it from the state of the research literature). A superintelligent cover-up could just as well place us in the 1800s; it's not fundamentally more complicated or less useful than staging persistently fruitless AGI research in the 2100s, as long as it isn't revealed immediately after the AGI gains sovereignty.
One reason I can think of for a persistent cover-up is that the world is part of a CEV simulation, eventually yielding more data about humanity's preferences as the civilization grows from a particular configuration/history (one not necessarily close to what was real, or informed of that fact). In this scenario the AGI was built in a different history, so its development can't be dated within the local history (the time of AGI development is located sideways, not in the past).
This is why I don't find the idea that an AGI/ASI is already here plausible. The AI industry is just much too open for that to really happen.