Oh, on the actual topic, it would never occur to me that fetching an explicitly posted link would involve checking robots.txt. Even back in the day, it was only honored by a few indexers, as an advisory signal to keep certain pages out of indexes and to minimize scraping (and even then it wasn't well followed); it was never meant to prevent linking or actual use. https://en.wikipedia.org/wiki/Robots_exclusion_standard points out that the Internet Archive has ignored it since 2017.
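For context, "checking robots.txt" on the fetcher's side would look roughly like this; a minimal sketch using Python's standard-library urllib.robotparser (the user-agent string and example URL are invented):

```python
# Sketch: what a link-preview fetcher *would* do if it chose to honor
# robots.txt. Standard library only; UA string and URL are placeholders.
from urllib.parse import urlsplit, urlunsplit
from urllib.robotparser import RobotFileParser

USER_AGENT = "ExampleLinkPreviewBot/1.0"  # hypothetical user agent

def allowed_to_fetch(url: str) -> bool:
    """Return True if the site's robots.txt permits fetching `url`."""
    parts = urlsplit(url)
    robots_url = urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))
    rp = RobotFileParser()
    rp.set_url(robots_url)
    try:
        rp.read()  # fetches and parses the site's robots.txt
    except OSError:
        return True  # robots.txt unreachable: treat as no restrictions
    return rp.can_fetch(USER_AGENT, url)

if __name__ == "__main__":
    print(allowed_to_fetch("https://example.com/some/article"))
```

And even that check is purely advisory: can_fetch only reports what the file says, and nothing stops a client from fetching anyway, which is exactly the sense in which robots.txt was never more than a request.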
I think the choice not to snapshot or centrally cache link content was more a bandwidth decision: less data transferred between Mastodon servers, at the cost of more data flowing from big CDN sites to the many individual Mastodon servers.