I often like to have Claude summarize longer LessWrong posts for me if I’m unsure whether I want to commit to reading the entire thing. Lately, however, I’ve noticed that probably 75+% of the time, it fails to fetch the page because of rate limits. Maybe LW would just be overloaded by fetches from AIs, so it must limit them? Is there any solution to this on my end besides e.g. saving the page as a PDF and uploading it manually?
A web standard for micropayments to cover hosting costs, so that AI companies don’t have to be rate-limited, is probably the correct solution.
I’m not sure how much it would cost AI companies if they had to compensate the internet for the obscene amount of traffic they generate. It’s probably a large number in absolute terms, but maybe not a large proportion of training costs.
I’ve noticed the same thing happening for papers on arXiv. There’s probably some way to set up an MCP server so Claude can access sites outside of Anthropic’s servers, but right now it’s easier to upload the PDF manually.
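For anyone wanting to try the MCP route: Claude Desktop can be pointed at the reference fetch server from the modelcontextprotocol/servers project. A minimal sketch of the config (this assumes you have `uv` installed and follows that project's documented `mcpServers` format; the `"fetch"` key is just a local label):

```json
{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}
```

This goes in `claude_desktop_config.json`; after restarting the app, Claude can fetch pages through your own machine rather than Anthropic's servers, which sidesteps the datacenter-IP rate limits (though a site could still block you individually).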