Some quick points:
1. I think there's an interesting question here, and I'm happy to see it discussed.
2. “This would, obviously, be a system capable of writing things that we deem worth reading.” → To me, LLMs already produce a great deal of content worth reading. I chat with LLMs all the time, and where the two compete, I often prefer an LLM response to a LessWrong summary. I also use LLMs to come up with ideas, edit text, get feedback, and handle many other parts of writing.
3. Regarding (2), my guess is that “LessWrong Blog Posts” might effectively come to mean “things we can’t easily get from LLMs”, in which case it’s a very high bar for LLMs!
4. There’s a question on Manifold: “When will AIs produce movies as well as humans?” I think you really need to specify a particular kind of movie here. As AIs improve, humans will use AI tools to make better and better movies, so “completely AI movies” will face a higher and higher bar. Instead of asking “When will AI blog posts be as good as human blog posts?”, I’d ask “When will AI blog posts be as good as human blog posts from [2020]?” or similar: hold the level of AI constant on one side of the comparison.
5. We recently held the $300 Fermi challenge, where the results were largely generated with AIs. I think some of the top entries could make good blog posts.
6. As @habryka wrote recently, many readers will simply stop reading something if it seems written by an LLM. I expect this trend to persist, making it harder for useful LLM-generated content to be appreciated.