If you shout from the middle of the crowd—probably so. If you enter—visibly exhilarated—just to shout “Fire!”, there is a risk.
I think you forgot to mention that you consider melatonin (for example) reliably non-harmful. Because if you did discover negative side effects (and, by the way, negative long-term side effects are hard to find...), cancelling melatonin would save you much more than just the price of the pills.
There is at least a non-negligible minority (or a silent majority?) of those who would retroactively call it an improvement if your wish were granted by some magic measure.
Even though I do think decoherence-based MWI is a better model than the Copenhagen non-interpretation, it doesn’t look like there are any new arguments for or against it on LW anyway.
But given that LW is run mostly by SingInst people, and they do believe in the possibility of FOOM, there is no reason for FAI to become off-topic on LW. Most of the time the topic is easy to recognize from the thread title, so it is easy for us to participate only in the discussions that interest us.
Also, it gives Facebook full access to personalized LessWrong-browsing patterns.
I agree with Holden, and additionally it looks like AGI discussions have most of the properties of mind-killing.
These discussions are about policy. They are about policy affecting the medium-to-far future. Such policies cannot be grounded in reliable scientific evidence. Bayesian inquiry heavily depends on priors, and there is nowhere near enough data to tip the balance.
As someone who practices programming and has studied CS, I find Hanson, the AI researchers, and Holden more convincing than Eliezer_Yudkowsky or lukeprog. But this is more prior-based than evidence-based. Nearly all that the arguments on either side do is bring some system to your priors. I cannot judge which side offers more odds-changing data, because the arguments from one side make far more sense to me and I cannot factor out my original prior dissonance with the other side.
The arguments about “optimization done better” don’t tell us anything about the position of the fundamental limits of each kind of optimization; with a fixed computronium type, it is not clear that any kind of head start would ensure that a single instance of AI would beat an instance based on 10x computronium older than one week (and partitioning the world’s computing power for a month requires just a few ships with conveniently dropped anchors—we have seen it before, on a somewhat smaller scale). The limits may be further out, but it is hard to be sure.
It may be that I fail to accept some parts of the arguments because my priors are too strongly tipped. But Holden, who read most of the Sequences without a strong prior opinion, wasn’t convinced either. This seems to support the theory that there are few mind-changing arguments.
Unfortunately, the Transhumanist Wiki has been returning an error for a long time, so I cannot link to the relatively recent “So You Want To Be A Seed AI Programmer” by Eliezer_Yudkowsky. If I had to name what I remember best from it, the thing that made me more ready to discount the SIAI-side priors, it would be the arguing with a fixed bottom line. I guess the WebArchive version ( http://web.archive.org/web/20101227203946/http://www.acceleratingfuture.com/wiki/So_You_Want_To_Be_A_Seed_AI_Programmer ) should be quite OK—or is it missing important edits? Actually, it is a lot of content that puts the Singularity arguments in a slightly different light; maybe it should either be publicly declared obsolete or saved at http://wiki.lesswrong.com/ for everyone who wants to read it.
I repeat once more that I consider most of the discussion to be caused by different priors and unshareable personal experiences. My personally agreeing with Holden can only give you the information that a person like me can (not necessarily will) have such priors. If you agree with me, you cannot use me to check your reasons; if you disagree with me, I cannot convince you and you cannot convince me—not at our current state of knowledge.
There can be harmful side effects, and that topic is not covered by the article; on the other hand, the pure evolutionary argument can be doubted because of the changed environment.
If I stimulate my brain, it is natural to assume my brain now requires more energy. So I probably need more glucose. In an evolutionarily relevant context, that would make me more likely to starve—after all, I would need more of the highly valued energy, and thinking clearly wouldn’t make a killed bull magically appear before me.
This is still true for most of the Earth’s population. It is not true for many LessWrong readers, though. There are some primarily mental jobs now (in some places of the world—the places where LessWrong readers come from). Keeping more things in your mind means being a better programmer, teacher, or scientific researcher. Being better at your profession often helps you avoid starvation. And getting the needed amount of calories—if you already know where to get all these vitamins and trace elements—is trivial in these parts of the world.
So, this modification was not a benefit earlier, and it was quite costly; both factors are significantly reduced in some parts of the modern world.
Of course, increased mental capability can lead to some personality traits that make it harder to reproduce; but that is again a question of side effects and not a self-evident thing. If you consider it harmful, you can try to spend effort on fighting these side effects—some people report significant success.
Well, submitting a quote request form as a “Yes Y. Yes born on Yes.Yes.Yes” would not lead to anything anyway, so why bother with extra steps?
Do you consider the Stupid Questions Open Thread a useful thing? Do you want new iterations to appear more regularly? How often?
Even though I didn’t ask anything in it, I enjoyed reading it and participating in the discussions, and I think it could reduce the “go to the Sequences” (as in “go to hell”) problem and sophistication inflation.
I would like it to recur with approximately the regularity of the usual Open Threads; maybe not on a calendar basis, but after a week of silence in the old one, or something like that.
The fact that the post itself is high-quality doesn’t imply that it has the optimal title.
Why does “Feeling Rational” have to have the r-word in its title? And “Rational Romantic Relationships” would not lose much by changing to “Designing Better Romantic Relationships”.
Actually, whatever license you use, your content will be copied around.
If you use a proprietary license after taking CC-BY core content, copying your content will be less legal and less immoral.
Nope.
ZF is consistent with many negations of strong choice. For example, ZF is consistent with the Lebesgue measurability of every subset of R. A well-ordering of R is enough to construct a non-measurable set.
So, if ZF could prove the existence of such a formula, ZF+measurability would prove a contradiction; but ZF+measurability is equiconsistent with ZF, so ZF would be inconsistent.
It is very hard to say anything about any well-ordering of R; they are monster constructions...
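To spell the argument out in symbols (this is only a restatement of the reasoning above, not a formal proof; LM abbreviates “every subset of R is Lebesgue measurable”, and the equiconsistency in the third step is the one claimed above):

% Sketch of the argument; phi is the hypothetical formula defining a well-ordering of R.
\begin{align*}
&\mathrm{ZF} \vdash \text{``$\varphi$ well-orders $\mathbb{R}$''}
  \;\Longrightarrow\; \mathrm{ZF} \vdash \neg\mathrm{LM}
  && \text{(a well-ordering yields a non-measurable set)}\\
&\mathrm{ZF} \vdash \neg\mathrm{LM}
  \;\Longrightarrow\; \mathrm{ZF}+\mathrm{LM}\ \text{is inconsistent}\\
&\mathrm{Con}(\mathrm{ZF}) \iff \mathrm{Con}(\mathrm{ZF}+\mathrm{LM})
  && \text{(the equiconsistency claimed above)}\\
&\text{hence}\ \mathrm{ZF} \vdash \text{``$\varphi$ well-orders $\mathbb{R}$''}
  \;\Longrightarrow\; \mathrm{ZF}\ \text{is inconsistent.}
\end{align*}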
There seems to be a significant number of people who browse with the anti-kibitzer and full-unhide.
If you want us to stop using such option combinations, maybe putting a warning into preferences would be a reasonable first step?
I’m applying what I proved in the whole previous paragraph, so it’s not as easy to explain as one single step of reasoning.
OK
Formalizing the above argument in Peano Arithmetic, and writing instPA(n,x) for the object-level encoding of the meta-level function, we can prove: “For all extended-language sentences ‘C’, if PA_K proves ‘C’, then for all n, PA(n+1) proves instPA(n,‘C’).”
Oh, this is the place that I should have pointed out. Sorry.
If I understand this correctly, “PA_K proves that if PA_K proves ‘C’, then PA(1) proves instPA(0, ‘C’)”. Also, PA_K should prove all the simple things from PA(2), such as PA(1) being consistent. Let us take ‘C’ to be plain “falsehood”. Then we get: “PA_K proves that PA(1) is not contradictory, and that if PA_K is contradictory, then PA(1) is contradictory”. For the benefit of the casual reader: this would imply that PA_K contains PA and proves its own consistency, which by Gödel’s second incompleteness theorem implies that PA_K is contradictory.
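The same chain of steps in symbols (this is just my reading of the quoted theorem with ‘C’ taken to be falsehood and n = 0, assuming instPA(0, falsehood) is itself a falsehood; Prov_T denotes the provability predicate of theory T and Con(T) its consistency statement):

% Restatement of the argument above; not a verdict on the original post's construction.
\begin{align*}
&\mathrm{PA}_K \vdash \mathrm{Prov}_{\mathrm{PA}_K}(\bot)
  \rightarrow \mathrm{Prov}_{\mathrm{PA}(1)}(\bot)
  && \text{(the quoted theorem with $C=\bot$, $n=0$)}\\
&\mathrm{PA}_K \vdash \neg\,\mathrm{Prov}_{\mathrm{PA}(1)}(\bot)
  && \text{(PA$_K$ proves the consistency of PA(1))}\\
&\text{hence}\ \mathrm{PA}_K \vdash \neg\,\mathrm{Prov}_{\mathrm{PA}_K}(\bot),
  \ \text{i.e.}\ \mathrm{PA}_K \vdash \mathrm{Con}(\mathrm{PA}_K)\\
&\text{so, by G\"odel's second incompleteness theorem, } \mathrm{PA}_K \text{ is inconsistent.}
\end{align*}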
This means I missed something very basic about what goes on here. Could you point out what exactly, please?
Maybe he submits papers and the conference program committees find them relevant and interesting enough?
After all, Yudkowsky has no credentials to speak of, either—what is SIAI? Weird charity?
I read his paper. The points he raises against the FAI concept and for rational cooperation look quite convincing. So do the pro-FAI points. It is hard to tell which are more convincing when both sides are relatively vague.
Don’t worry, whether you do this or not, there is a novel where you do and a novel where you don’t, without any other distinctions.
Or maybe the parenthesis refers only to “doomsday machine”.
If the post is just a data point for those who know the basics, it could be cut to its first 3 paragraphs without loss. If the post explains things for those who have randomly found LW, a brief summary of publication bias near the beginning could increase the expected usefulness of reading the post.
Maybe “span.monthly-score {display:none;} span.score {display:none;}” in a userstyle would help?
MWI says that you apply no more than one collapse in every experiment, and you know why it is a collapse from your point of view. Copenhagen requires you to decide without guidance whether to apply collapse inside the experiment.
So, if I post some honest argument but make a couple of stupid mistakes (I expect such a post to get downvoted to around −5), anyone who explains to me what I have missed will be punished?