Once again: a long post with no abstract and a cryptic title.
IMO, if the author does not know enough to start with an abstract, then they should not expect many readers to go further.
Abstract added, thanks for the pointer. I did consider breaking the article in two to reduce size, but decided against it. An abstract would have been a good idea and in fact this is a (heavily) modified version of an academic paper that had an abstract, but somehow it didn’t occur to me. Go figure.
To everyone: I feel I have been way too engrossed in the material to be able to competently write a summary that would be helpful to a first time reader, so suggestions on improving the abstract would be greatly appreciated.
Yay! Congrats!
Not sure about “tl;dr”, though!
Isn’t that what I say when I skip your non-abstracted article...? ;-)
Yeah, it would be, but I’ve also seen it used to the effect of ‘abstract’. I’ve already added almost all features of an academic paper except for affiliation and references, I just can’t bring myself to start the damn thing with “Abstract:”
“tl;dr” seems very casual to me. If your readers are casual and you want them to treat your article casually, that may be appropriate.
Incidentally, if acronyming like that, it should read: “Optimization By Proxy (OBP)”
You can probably skip writing the word “Abstract”—if your first paragraph is isolated, in italics, and obviously starts out with a summary.
YES! Readers shouldn’t have to search back to the beginning the first time they see an acronym to figure out what it stands for. The abbreviation should always be explained the first time it is used.
Somehow I found the tl;dr impenetrable, but the actual article eminently readable. Is this deliberate?
It’s the best tl;dr I could muster. Probably because I’m too close to the content and have lost sight of what it’s like to see it for the first time. If someone can help conjure up a better one, I’d gladly replace it.
Splitting an article in two is also common in academia; the same strategy on LW might result in more karma, if that’s the sort of thing one finds worthwhile...
At this point, I’m more interested in adding any sort of value rather than optimizing for karma. I’ve done a lot of that on HN (and a little bit in academia), but LW is harder than that :)
I did go further, and was glad I did because I found the article very interesting. But I would have also liked an abstract and a conclusion paragraph. I do think the topic connects to rationality, but I would have liked to see more of the author’s thoughts on how it does. Also, the analogy between Google and “paperclipping” could have used a bit more explanation.
Thanks for the kind words. I consider the last long paragraph (“what we therefore see”...) to be a form of conclusion even if not marked so. It used to be longer but then I figured that it was just sermonizing about the well-known issues around FAI and decided to stick only to the original contributions. Again, I fear my long gestation of this particular topic may have warped my judgement. Will try to clearly mark the conclusion and add a little more meat around the connections I see to FAI and rationality.
I really enjoy your contributions to LW, and some of your stuff I have read off-site, by the way.
In fairness, the title does mention paperclips, which is a lot more mention of important stuff than most other articles here.
Then again, it’s pretty obvious Google has not turned the Web into paperclips, so why ask the question?
Alexandros was referring to a metaphorical paperclipping. I’m surprised you are not more aware of humanity’s use of a paperclip maximizer as a metaphor for everything that could go wrong with AGI. A top-level post by you about why a paperclip maximizer would cooperate with humanity indefinitely would help.
Thanks for this report on your surprisal, on what would help, and on the assumptions behind discussion of paperclip maximizers.
In the interests of accessibility to more of the general public, I think it’s a good idea to have references on the first instance of an acronym or idiomatic concept (especially if it’s a main point of the article).
I’m a casual reader of LessWrong. Inside articles, I usually find some term or abbreviation that refers to a concept I have no idea about. In this article, I came across FAI (which apparently means Friendly AI?) and paperclipping. They’re not completely unGoogleable, but not explaining these terms, even parenthetically, seems self-defeating. It works against spreading the ideas in the article to people who don’t already share your views.
Thanks for the tip, it’s done.