I think I’ll save the world first, then worry about a girlfriend.
Plus, the available dating pool should be that much larger with that accomplishment on my resume.
A formula is worth a thousand pictures.
—Edsger Dijkstra
Provide separate discussion areas (subreddits?) for geographic subcommunities.
Google Groups and Meetup.com currently serve this purpose for some, but that is not an elegant solution: it sprawls LW content beyond the main site, requires learning different interfaces, and puts us at the mercy of outside companies. On-site karma would also encourage more discussion within these groups.
This one’s for you, Clippy:
The specialist makes no small mistakes while moving toward the grand fallacy.
—Marshall McLuhan
Provide optional notification of nested comment replies to the parent comment’s author (beyond the initial reply).
Currently, if someone replies to one of my comments, I receive a notice. However, if someone replies to that reply, and so on, I don’t. These grandchild replies are often still relevant and of interest to me, so having the option of being notified of them would be nice.
(Alternatively, this suggestion would also solve the problem, though that solution would require an additional step from the author.)
Pickup at the right end of the bell curve looks like this:
“If I were to ask you out, would your answer to that question be the same as the answer to this one?”
(Disclaimer: I didn’t make it up. I saw it somewhere else on this site, long time ago.)
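For the logically inclined, here’s a minimal sketch (in Python, names mine) of why every truthful answer to that question commits the answerer to a “yes”:

```python
# Let "same" be the truthful answer to the asked question:
#   "Would your answer to the date question be the same as
#    your answer to this one?"
# We derive the implied answer to the unasked date question.

def implied_date_answer(same: bool) -> bool:
    if same:
        # "Yes, the answers are the same" -> the date answer equals this
        # "yes", so it is "yes".
        return True
    else:
        # "No, the answers differ" -> the date answer differs from this
        # "no", so it is "yes" again.
        return True

# Both possible truthful replies force a "yes" to the date question.
print(all(implied_date_answer(a) for a in (True, False)))  # -> True
```

Either branch lands on “yes”, which is the whole trick: the question is self-referential in a way that makes “no” self-contradictory.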
I have observed similar behavior in others, except I called it ‘blackboxing’, for lack of a better word. I think this might actually be a slightly better term than ‘learned blankness’, so I hereby submit it for consideration. It’s borrowed from the software-engineering idea of a black-box abstraction.
People tend to create conceptual black boxes around certain processes, which they are remarkably reluctant to look within and explore, even when something does go wrong. This is what seems to have happened with the dishwasher incident. The dishwasher was treated as a black box. Its input was dirty dishes, its output was clean ones. When it malfunctioned, it was hard to see it as anything else. The black box was broken.
Of course, engineers and programmers often go out of their way to design highly opaque black boxes, so it’s not surprising that we fall victim to this behavior. This is often said to be done in the name of simplicity (the ‘user’ is treated as an inept, lazy moron), but I think an additional, more surreptitious reason is to keep profit margins high. Throwing out a broken dishwasher and buying a new one is far more profitable to the manufacturer than making it easy for users to pick it apart and fix it themselves.
The open source movement is one of the few prominent exceptions to this that I know of.
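To make the software-engineering analogy concrete, here is a minimal, hypothetical sketch of a black-box abstraction: the user sees only inputs and outputs, while the internal state is deliberately hidden behind the interface.

```python
class Dishwasher:
    """A black box: dirty dishes in, clean dishes out."""

    def __init__(self):
        # Hidden internal state -- the part users are reluctant
        # to look inside, even when something goes wrong.
        self._pump_ok = True

    def run(self, dishes):
        if not self._pump_ok:
            # From the outside, all the user can conclude is
            # "the box is broken".
            raise RuntimeError("dishwasher malfunction")
        return [d.replace("dirty", "clean") for d in dishes]

print(Dishwasher().run(["dirty plate", "dirty cup"]))
# -> ['clean plate', 'clean cup']
```

When the internals fail, the opaque error is the only visible symptom, which is exactly the “broken black box” experience from the dishwasher incident.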
And with that in mind, how would it have affected the sanity waterline if Tony had donated that $135 to an institution that’s pursuing the improvement of human rationality?
I don’t know if recycling the sequences to the front page is the solution, but you do have some valid points.
It would be nice if the LW platform had some kind of sequence “book club” functionality that let people form reading groups, based on how far along they are in the readings, and engage in fresh, active discussion.
On the other hand, the sequences will likely be distilled into book format in the near future, according to the SIAI website, so there might not be much incentive to do anything about them at this point.
On a similar note, but from a different author:
Employ your time in improving yourself by other men’s writings, so that you shall gain easily what others have labored hard for.
—Socrates
I’ve tried and failed to come up with any reasonable interpretation other than my own. Please frontstab me.
A domain-specific interpretation of the same concept:
“The real hero of programming is the one who writes negative code.”
—Douglas McIlroy
I suppose it might be a little ambiguous. Here’s my interpretation (I’m curious to hear others).
The practice of backstabbing usually refers to criticizing someone when they’re not present, while feigning friendship.
Thus, “frontstabbing” would be to criticize someone openly and honestly, which is often very hard to do. Even, or perhaps especially, among friends. But it seems to be something worth aspiring towards, if one is concerned with rationality and truth.
A domain-neutral interpretation of the same concept:
Entities should not be multiplied beyond necessity.
—William of Ockham
Ditto for Toronto.
We’re still in the early stages (only two meetups behind us), but things are looking good so far.
A Google search for “save the world” yields 11,000,000 results; a search for “harm the world” yields 242,000. The top results for the latter are also framed as cautionary tales, rather than normative instructions or communities devoted to accomplishing the malignant goal.
Saving the world is a very commonly expressed sentiment, which is why compiling a list of people who want to save the world seems a little redundant to me. A list about people who have saved the world might be a tad more useful.
As far as I know, only an infinitesimally small fraction of the world’s population consciously sets out to be evil, or to harm the world. It’s more a case of the road to hell being paved with good intentions. I’m pretty sure there have been many studies on this, though I’d have to dig them up again. Perhaps someone else can post them.
Neither the stated desire nor the action implies donating to charities. Even you have admitted as much in the past.
I thought your claim might be based on the replies to your HELP! I want to do good thread. In that case, I thought I should point out that no equivalent “HELP! I want to do bad” or “HELP! I want to be completely benign” threads were ever created.
One could easily verify your claim by making such posts and counting the replies. To be really accurate about it, one could also go through the respondents’ post history, to be sure they’re not just being contentious, but truly ill-intentioned.
Extending the survey to the population at large would be similarly straightforward. One could offer people on the street a one-question survey and, for those who agree to participate, alternate between “Do you want to save/improve the world?” and “Do you want to harm the world?”
(This might be a fun exercise for the Toronto LW group, now that I think about it. Both to find the answer out for ourselves, and to get people thinking about the subject. Because thinking often precedes action. Or at least it should… )
Ability to disable images in comments.
A kind of “favourite users” already exists, under the guise of “friends”. (Click on PREFERENCES, then click FRIENDS on the re-rendered navigation bar.)
But it sounds like what you’re suggesting is a more fine-grained personal ranking of posters. This could be useful, and it could be dangerous. It sounds like it could reinforce confirmation bias, for one.
This might be good for newbies on their first visit, but if retention is the ultimate goal, it would quickly become redundant for the regulars to click through a static front page to get to the new content.
The ABOUT link under the header already serves the purpose you suggest.
I think Donald Robert Perry Marquis said it more succinctly: