It took me a minute to find the post where I saw this, since I (incorrectly) assumed it was everywhere. I was browsing Ethical Design Patterns by AnnaSalamon and noticed a few of these summaries:
*Historically people worried about extinction risk from artificial intelligence have not seriously considered deliberately slowing down AI progress as a solution. Katja Grace argues this strategy should be considered more seriously, and that common objections to it are incorrect or exaggerated.*
*The field of AI alignment is growing rapidly, attracting more resources and mindshare each year. As it grows, more people will be incentivized to misleadingly portray themselves or their projects as more alignment-friendly than they are. Adam proposes “safetywashing” as the term for this* [note: this one cutting off is what made me suspect it was automatic]
*You might feel like AI risk is an “emergency” that demands drastic changes to your life. But is this actually the best way to respond? Anna Salamon explores what kinds of changes actually make sense in different types of emergencies, and what that might mean for how to approach existential risk.*
Ah, yeah, I think we shouldn’t show the spotlight item summary on hover. It seems confusing, and speaking about the article and author in the third person feels abrupt.
I’m honestly not really happy with describing the author in the third person in the spotlight either; I think we should just try to find a different way of accomplishing the goal there (which, I think, is to avoid “I” speak, which also feels jarring in the summaries).
I said something similar elsewhere. I agree that “I” speak would be really bad (it puts words into the author’s mouth, and in this case would mislead the reader about the post’s writing style and quality), but I also think switching out the post for a summary is pretty jarring to begin with.
Since for years every post has been peekable as its first couple of paragraphs, showing an unlabeled summary instead is always a jarring bait-and-switch.
I don’t have especially strong feelings about it beyond the initial confusion. The confusion could probably be fixed just by adding something like “Featured post summary:”, which would explain why I’m reading a summary when I expected the first paragraph of the essay.
Okay, yeah, those are all posts that won Best of LessWrong. We generate around 8 AI descriptions, and then a LW teammate goes through, picks the best one as a starting point, and fine-tunes it to create the spotlights you see at the top of the page. (Sometimes this involves mostly rewriting it; sometimes we end up mostly sticking with the existing one.)
Ah, so it really was a completely unlabeled AI summary of the post? An amusing twist.
Even with what appears to be a fair amount of fine-tuning, it still reads like unlabeled AI text, which may be why I found it so jarring. Perhaps a label would help, then?
(Though honestly, it’s pretty weird not to see the first paragraph, so even if the AI angle doesn’t strike you as important, some kind of differentiating label would be REALLY HELPFUL when the expected POV is altered after the fact.)
(Re the parenthetical: I strongly suspect you know this, but switching the POV to first person would be much, much worse and would lead me to skip posts that I’d assume read like AI text.)