Wow, I’m blown away by Holden Karnofsky, based on this post alone. His writing is eloquent, non-confrontational and rational. It shows that he spent a lot of time constructing mental models of his audience and anticipating their reactions. Additionally, his intelligence/ego ratio appears to be through the roof. He must have learned a lot since the infamous astroturfing incident. This is the (type of) person SI desperately needs to hire.
Emotions out of the way, it looks like the tool/agent distinction is the main theoretical issue. Fortunately, it is much easier than the general FAI one. Specifically, to test the SI assertion that, paraphrasing Arthur C. Clarke,
Any sufficiently advanced tool is indistinguishable from an agent.
one ought to formulate and prove this as a theorem, and present it for review and improvement to the domain experts (the domain being math and theoretical computer science). If such a proof is constructed, it can then be further examined and potentially tightened, giving new insights into the mission of averting the existential risk from intelligence explosion.
If such a proof cannot be found, this will lend further weight to HK’s assertion that SI appears to be poorly qualified to address its core mission.
Start your post or comment with a summary when posting anything over 3-5 paragraphs.
I suspect it was the trivial inconvenience of setting it up that stopped most of those who were considering it.
There is no territory, it’s maps all the way down.
This post had an odd effect on me. I agreed with almost everything in it, as it matches my own logic and intuitions. Then I realized that I strongly disliked the logic in your anti-meat post, because it appeared so severely biased toward a predefined conclusion “eating meat is ethically bad”. So, given the common authorship, I must face the possibility that the quality of the two posts is not significantly different, and it’s my personal biases which make me think that it is. As a result, I am now slightly more inclined to consider the anti-meat arguments seriously and slightly less inclined to agree with the arguments from this post, even though the foggy future and the lack of feedback arguments make a lot of sense.
EDIT: Hmm, whatever shall I do with 1 Eliezer point and 1 Luke point...
Done. I’m glad there was nothing about Schrödinger this time around.
What happened to holding off proposing solutions?
Main is useless to me as is. As I mentioned a few times before, it should be replaced/supplemented by a “highly rated”/“greatest hits”/“best of LW” section, whether generated automatically based on post karma or updated manually once a day or so.
Also, instead of/in addition to subreddits, you can create a list of approved tags/keywords a poster is forced to select one or more from, which trigger notifications to these busy people, or to anyone else who subscribes to notifications.
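The tag-and-notification scheme above can be sketched in a few lines. This is a minimal illustration, not a proposal for the actual site code; the tag names and function names are all hypothetical.

```python
# Sketch of the idea: posters must pick from an approved tag list,
# and anyone subscribed to a tag gets notified when it is used.
# All names here (tags, subscribe, post) are illustrative only.

APPROVED_TAGS = {"meetup", "ai-safety", "rationality", "site-meta"}

subscriptions = {}  # tag -> set of subscribed users


def subscribe(user, tag):
    if tag not in APPROVED_TAGS:
        raise ValueError(f"unknown tag: {tag}")
    subscriptions.setdefault(tag, set()).add(user)


def post(author, title, tags):
    # A post must carry one or more approved tags.
    tags = set(tags)
    if not tags or not tags <= APPROVED_TAGS:
        raise ValueError("select one or more approved tags")
    # Union of subscribers across all tags = who gets notified.
    notified = set()
    for tag in tags:
        notified |= subscriptions.get(tag, set())
    return notified


subscribe("alice", "ai-safety")
subscribe("bob", "meetup")
print(post("carol", "Decision theory open problems", ["ai-safety"]))
```

The point of the forced selection is that the notification burden falls on subscribers who opted in, not on busy people scanning every post.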
I had been a small-time LW regular for about 3 years and witnessed its decline until I stopped commenting a couple of months back. It was frustrating to see Eliezer, Yvain, Luke and others leave for other social media platforms, and even more frustrating to watch them fragment their writings further between personal blogs, FB, Reddit and tumblr. Not because it’s a wrong thing to do, just because it’s harder to follow and the commenting system is usually even worse than here. Well, except for Reddit. Without a strong charismatic leader emerging and willing to add quality content and drive the changes, I don’t expect any site redesign to revive this rather zombified forum. Or maybe if the forum is redesigned one would emerge, who knows. Chicken and egg.
Or maybe it should be a rationality-related aggregator/hub, where all relevant links get posted and discussed. So that one could see at a glance that Scott A posted something on his blog, Eliezer on tumblr, Brienne on Facebook, gwern on his site and someone else on twitter or reddit. All on one page. There are various sites like that around. With the ability to comment locally, or go to the source and discuss it there. Maybe even add linkbacks to this site.
Just my 2c.
Just wanted to mention that an amazing number of arguments in this thread and in the linked piece consist of misidentified non-central fallacies (in Yvain’s labelling). None of the targets of the labels used (“racist”, “eugenics”, “feminist”, what have you) corresponds to a typical image evoked by using them.
There is one person whose writings seem of better quality to me than either yours or Eliezer’s, and that’s Yvain. What do you think his writing style is?
(To be clear, I enjoy what you and EY write, despite the style differences, especially when you are at your best.)
As Jack mentioned and as Eliezer repeatedly said, even if a certain question does not make sense, the meta-question “why do people think that it makes sense?” nearly always makes sense. So, to avoid going insane, you can approach your ethics courses as “what thought process makes people make certain statements about ethics and morality?”. Admittedly, this altered question belongs in cognitive science, rather than in ethics or philosophy, but your professors likely won’t notice the difference.
Having been through physics grad school (albeit not of Caltech caliber), I can confirm that lack of (real or false) modesty is a major red flag, and a telltale sign of a crank. Hawking does not refer to black-hole radiation as Hawking radiation, and Feynman did not call his diagrams Feynman diagrams, at least not in public. A thorough literature review in the introduction section of any worthwhile paper is a must, unless you are Einstein, or can reference your previous relevant paper where you dealt with it.
Since EY claims to be doing math, he should be posting at least a couple of papers a year on arxiv.org (cs.DM or similar), properly referenced and formatted to conform with the prevailing standard (probably LaTeXed), and submit them for conference proceedings and/or into peer-reviewed journals. Anything less would be less than rational.
“you bite one maths teacher and they never let you forget it, do they?”
It tried to point to all the horcruxes in Hogwarts at once, and crashed because of an unchecked stack overflow.
Before I started tutoring I believed that anyone can learn first year math and science if only they put in the time and effort. Before I went to grad school I believed that I can learn all the advanced math and theoretical physics topics I was interested in. Neither belief survived experimental testing.