If AGI is human-equivalent for the purposes of developing a civilization, then a collective of AGIs is at least as capable as humanity, and with AI advantages on top it's much more capable than a single AGI instance or any single human. This is why "ASI" has lately often been used synonymously with "AGI" (via the individual vs. collective conflation). Such use of "ASI" might free up "AGI" for something closer to its original meaning, which didn't carry the implication of human-equivalence. But this setup leaves the qualitatively-more-capable-than-humanity bucket without a label, and that label is important for gesturing at AI danger.
I think the other extreme for the meaning of "ASI", being qualitatively much stronger than humanity, can be made more specific by having "ASI" refer to the level of capabilities that follows a software-only singularity (under the assumption that it does advance capabilities a lot). This way, it's neither literal technological maturity of hitting the limits of physical law, nor merely a collective of jagged-human-level AGI instances wielding their AI advantages. Maybe "RSI" is a more stable label for this, as in the Superintelligence Strategy framing where "intelligence recursion" is the central destabilization bogeyman, rather than any given level of capabilities on its own.
Maybe there are modes of engagement that should be avoided, and many ideas/worldviews themselves are not worth engaging with (though neglectedness in your own personal understanding is a reason to seek them out). But as long as you have allocated time to something, even largely as a result of external circumstances, doing a superficial and half-hearted job of it is a waste. It certainly shouldn’t be the intent from the outset, as in the quote I was replying to.