It might well be possible to commission Wiener to illustrate the Sequences.
Dolores1984
Oh, and somebody get Yudkowsky an editor. I love the sequences, but they aren’t exactly short and to the point. Frankly, they ramble. Which is fine if you’re just trying to get your thoughts out there, but people don’t finish the majority of the books they pick up. You need something that’s going to be snappy, interesting, and cater to a more typical attention span. Something maybe half the length we’re looking at now. The more of it they get through, the more good you’re doing.
EDIT: Oh! And the whole thing needs a full jargon palette-swap. There’s a lot of LW-specific jargon that isn’t helpful. In many cases, there’s existing academic jargon that can take the place of the phrases Yudkowsky uses. Aside from lending the whole thing a superficial-but-useful veneer of credibility, it’ll make the academics happy, and make them less likely to make snide comments about your book in public fora. If you guys aren’t already planning a print-on-demand run, you really should. Ebooks are wonderful, but the bulk of the population is still humping dead trees around. An audiobook or podcast might be useful as well.
because it seems to be in direct conflict with the fact of ethics
Actual answers aside, if you’re a rationalist, this phrase should cause you to panic.
It probably gets pattern-matched to ‘statist hysteria being used to crush industry.’
As a general rule, rationality is not an excuse to be a dick. Go away.
You know, it would probably be possible to benefit from your organs’ value while you’re alive. Sign a contract to agree to be organ-harvested after your death, and get a stipend for the average estimated value of your cadaver, today! Free money, from your perspective. You could get more if you contractually agreed not to smoke or take certain dangerous jobs.
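The stipend idea above is just expected-value-and-discounting arithmetic. A minimal sketch, with every number invented purely for illustration:

```python
# Hypothetical back-of-the-envelope for an organ-futures stipend.
# All figures below are made up for illustration, not real estimates.

def stipend(cadaver_value, p_usable, discount_rate, years_to_expected_death):
    """Present value, paid today, of the expected payout from your cadaver."""
    expected_payout = cadaver_value * p_usable
    return expected_payout / (1 + discount_rate) ** years_to_expected_death

base = stipend(cadaver_value=80_000, p_usable=0.3,
               discount_rate=0.05, years_to_expected_death=40)

# A no-smoking clause raises the chance the organs are actually usable,
# which is why the contract could pay more for it.
nonsmoker = stipend(cadaver_value=80_000, p_usable=0.45,
                    discount_rate=0.05, years_to_expected_death=40)

print(round(base), round(nonsmoker))
```

The gap between the two numbers is the most the buyer could rationally offer for the behavioral clauses.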
Because it correlates with intelligence and seems indicative of deeper trends in animal neurology. Probably not a signpost that carries over to arbitrary robots, though.
Not strictly. It’s still explicitly genocide with Venusians and Neptunians—it’s just easier to ignore that fact in the abstract. Connecting it to an actual genocide causes people to reference their existing thinking on the subject. Whether or not that existing thinking is applicable is open for debate, but the tactic can’t be dismissed out of hand.
If such a person were to write a similar post and actually write the way they feel, rather than being incredibly polite, things would look very different.
I’m assuming you think they’d come in, scoff at our arrogance for a few pages, and then waltz off. Disregarding how many employed machine learning engineers also do side work on general intelligence projects, you’d probably get the same response from an automobile engineer, someone with a track record and field expertise, talking to the Wright Brothers. Thinking about new things and new ideas doesn’t automatically make you wrong.
That recursive self-improvement is nothing more than a row of English words, a barely convincing fantasy.
Really? Because that’s a pretty strong claim. If I knew how the human brain worked well enough to build one in software, I could certainly build something smarter. You could increase the number of slots in working memory. Tweak the part of the brain that handles intuitive math to correctly deal with orders of magnitude. Improve recall to eidetic levels. Tweak the brain’s handling of probabilities to be closer to the Bayesian ideal. Even those small changes would likely produce a mind smarter than any human being who has ever lived. That, plus the potential for exponential subjective speedup, is already dangerous. And that’s assuming that the mind that results would see zero new insights that I’ve missed, which is pretty unlikely. Even if the curve bottoms out fairly quickly, after only a generation or two that’s STILL really dangerous.
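The “curve bottoms out fairly quickly” point can be made concrete with a toy model: even if each generation’s improvement shrinks sharply, the gains still compound for a while. All parameters here are invented for illustration.

```python
# Toy model of self-improvement with diminishing returns.
# Each generation multiplies capability by a gain that decays toward 1.0;
# the numbers are made up purely to illustrate the shape of the curve.

def capability_after(generations, initial_gain=1.5, decay=0.5):
    capability = 1.0
    gain = initial_gain
    for _ in range(generations):
        capability *= gain
        # Diminishing returns: the next generation improves by less.
        gain = 1.0 + (gain - 1.0) * decay
    return capability

for g in (1, 2, 5, 10):
    print(g, round(capability_after(g), 3))
```

Capability plateaus (the log-gains form a convergent series), but a mind a couple of times more capable than the smartest human, running at machine speeds, is reached within the first few generations, which is the point being made above.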
Worst of all, you are completely unconvincing and do not even notice it because there are so many other people who are strongly and emotionally attached to the particular science fiction scenarios that you envision.
Really makes you wonder how all those people got convinced in the first place.
I myself employ a very strong heuristic, from years of trolling the internet: when a user joins a forum and complains about an out-of-character and strongly personal persecution by the moderation staff in the past, there is virtually always more to the story when you look into it.
Unfortunately, inserting complex novel gene sequences into every cell of an organism in a way that doesn’t just cause massive, global cancer is a very hard problem. Making those sequences do what you want them to do, and not, say, kill the target organism, is even harder. Especially since human anatomy isn’t well suited to the task, and would need to be modified. By the time we have the technology to do something like that, death is probably already a solved problem.
That said, I’ve used the premise in a science fiction book before. The main characters were members of Homo Sapiens Durabilis, and had genomes modified with tardigrade genetics. They could be pumped full of hydrogen sulfide, and reversibly dehydrated to death for long-term space travel, or during a medical emergency.
The lack of human space exploration really doesn’t help in terms of capturing the public’s fickle imagination. Sending robots is just not as exciting, even to me. Additionally, NASA’s failure to continue to show progress after the huge, dramatic victory of the moon missions was very bad, from a PR perspective. Many people feel that space is over, and that that chapter in humanity’s history is closed. Which is very sad, because I think the moon missions were one of the best things we’ve ever done, as a species.
In the beginning, there was nothing. The cosmos were void—timeless, and without form. And, lo, God pointed upon the abyss, and said, ‘LET THERE BE ENERGY.’ And there was energy. And God pointed to the energy, and said, ‘and let you be bound among yourselves that you may wander the void together, proton to neutron, and proton to proton, and let the electrons always seek their opposite number, within the appropriate energy barrier, and let the photons wander where they will.’ Lo, and God spoke to the stranger particles, for some time, but what He said was secret. And God saw hydrogen, and saw that it was good.
And God saw the particles moving at all different speeds, away from one another, and saw that it was bad, and God said ‘and let the cosmos be bent and cradle the particles, that they may always be brought back together, though they be one billion kilometers apart, within the appropriate energy barrier, of course. And let the curvature of space rise without end with the energy of velocity, that they all be bound by a common yoke.’ And God looked upon the spirals of gas, and saw that it was good.
And God took the gas and energy above, and the gas and energy below, and said ‘and you shall be matter, and you shall be antimatter, and your charges shall ever be in conflict, and never the twain shall meet, except in very small quantities.’ And so there was the matter and the antimatter.
And God saw the cosmos stretching out to a single future, and said ‘And let you all be amplitude configurations, that you may not know thyself from thy neighbor, and that the future may expand without end.’ And God saw the multiverse, and saw that it was good.
I think the difference is that she provided resources to allow him to explore his curiosity. A helicopter parent would have chosen the interests, and then chosen the way in which those interests were explored.
Quirrelmort is vulnerable to acausal sex
Because the way you interact with small children is wildly, radically different from the way that either adults or children interact with their own peers. This is also a trend I’ve observed much more widely than just in my own case. Homeschooled children come out weird unless their parents are very, very aggressive about socialization, much more so than most people would consider reasonable.
I donated out of an irrational sense of kinship. I hope she makes it.
Your four criteria leave an infinite set of explanations for any phenomenon. Including, yes, George the Giant. That’s why we have the idea of Occam’s razor—or, more formally, Solomonoff induction. Though I suppose, depending on the data available to the tribe, the idea of giant humans might not be dramatically more complicated than plate tectonics. It isn’t like they postulated a god of earthquakes or some nonsense like that. At minimum, however, they are privileging the George the Giant hypothesis over the other equally complicated plausible explanations. The real truth is that they don’t have enough data to come up with the real answers. They need to start recording data and studying the natural world. They can probably figure it out in a few hundred years if they really put their backs into it.
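The Solomonoff-style version of Occam’s razor mentioned above can be sketched numerically: each hypothesis gets prior weight 2^(−description length), so shorter descriptions win exponentially. The hypothesis names and bit-lengths below are invented for illustration; real description lengths would depend on the tribe’s background knowledge.

```python
# Occam-style priors: weight each hypothesis by 2 ** -(description length
# in bits), then normalize. All lengths here are made-up illustrations.

hypotheses = {
    "plate tectonics": 45,    # hypothetical bits to specify the hypothesis
    "George the Giant": 50,
    "earthquake god": 60,
}

weights = {h: 2.0 ** -bits for h, bits in hypotheses.items()}
total = sum(weights.values())
priors = {h: w / total for h, w in weights.items()}

for h, p in sorted(priors.items(), key=lambda kv: -kv[1]):
    print(f"{h}: {p:.6f}")
```

Note how a 5-bit difference in description length translates into a 32-fold difference in prior probability, which is why privileging one hypothesis among equally complicated rivals needs justification.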
Less Wrong is not a cult, so long as our meetups don’t include a Matrioshka brain.
I’m an aspiring human
“politics is isomorphic to politics” is obviously false
In the new version of Newcomb’s problem, you have to choose between a box containing sex and a box containing the coherent extrapolated volition of Pinkie Pie
If it were me, I’d split your list after reductionism into a separate ebook. Everything that’s controversial or hackles-raising is in the later sequences. A (shorter) book consisting solely of the sequences on cognitive biases, rationalism, and reductionism would be much more the kind of thing somebody without prior rationalist leanings can pick up and take something valuable away from. The later sequences have their merits, but in this case they are absolutely counterproductive to raising the sanity waterline. They’ll get your book labeled kooky and weird, and they don’t, in themselves, improve their readers enough to justify the expense. People interested in the other stuff can get the companion volume.
You could label the pared-down volume something self-helpy, like ‘Thinking Better: The Righter, Smarter You.’ For goodness’ sake, don’t have the word ‘sequences’ in the title. That doesn’t mean anything to anyone not already from LW, and it won’t help people figure out what it’s about.
EDIT: Other title suggestions—really just throwing stuff at the wall here
Rationality: Art and Practice
The Rational You
The Art of Human Rationality
Black Belt Bayesian: Building a Better Brain
The Science of Winning: Human Rationality and You
Science of Winning: The Art and Practice of Human Rationality (I quite like this one)