Is atheism a “religion”? Is transhumanism a “cult”?
My favorite example is, Is a fetus a person?
I’ll do this test on any AI I create. . . . This should be safe.
Not in my humble opinion it is not, for the reasons Eliezer has been patiently explaining for many years.
creating mutual information between your utility function and the world without changing your utility function.
By that definition a punch in the nose is art.
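For reference, the "mutual information" in the quoted definition is presumably the standard information-theoretic quantity, which measures only statistical dependence between two variables:

```latex
I(X;Y) = \sum_{x,y} p(x,y)\,\log \frac{p(x,y)}{p(x)\,p(y)}
```

Since any goal-directed act correlates the state of the world with the actor's preferences, this quantity rises for punches as readily as for paintings, which is the point of the objection.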
Because I don’t even know what I want from that future.
Well, I hope you will stick around, MichaelG. Most people around here IMHO are too quickly satisfied with answers to questions about what sorts of terminal values properly apply even if the world changes drastically. A feeling of confusion about the question is your friend IMHO. Extreme scepticism of the popular answers is also your friend.
The fact that a great variety of experiments were done that might have found a nonlocal effect, but no nonlocal effect was ever found does not make you pause before you post that?
Sorry for reading your question in an uncharitable way and for lecturing you, manuel. You have made me aware that the name of this blog is less than ideal because it admits an unfortunate second interpretation (namely, “stamping out bias in others”).
They’re rare, Ben, but they walk the Earth.
I love this blog. Best blog ever.
Eliezer seems to do most of the moderation
It does not seem that way from where I am standing: although I comment more on posts by Eliezer than on posts by Robin and although I am one of the most persistent critics of Eliezer’s plans and moral positions, none of my comments on Eliezer’s posts were unpublished, but 3 of my comments on Robin’s posts were.
Note that I do not think Robin did anything wrong. Contrary to what many commentators believe, unpublishing comments is necessary IMHO to keep the quality of the comments high enough that busy thoughtful people continue to read them. (In fact, if I thought there was a chance he might agree to do it, I would ask Robin to edit or moderate my own posts on my own blog.)
He says he isn’t ready to write code. If you don’t try to code up a general artificial intelligence you don’t succeed, but you don’t fail either.
Would people stop saying that! It is highly irresponsible in the context of general AI! (Well, at least the self-improving form of general AI, a.k.a., seed AI. I’m not qualified to say whether a general AI not deliberately designed for self-improvement might self-improve anyways.)
Noodling around with general-AI designs is the most probable of the prospective causes of the extinction of Earth-originating intelligence and life. Global warming is positively benign in comparison.
Eliezer of course will not be influenced by taunts of, “Show us the code,” but less responsible people might be.
Suppose you learn of a powerful way to steer the future into any target you choose as long as that target is specified in the language of mathematics or with the precision needed to write a computer program. What target to choose? One careful and thoughtful choice would go as follows. I do not have a high degree of confidence that I know how to choose wisely, but (at least until I become aware of the existence of nonhuman intelligent beings) I do know that if there exists wisdom enough to choose wisely, that wisdom resides among the humans. So, I will choose to steer the future into a possible world in which a vast amount of rational attention is focused on the humans, on human knowledge and on the potential that the humans have for affecting the far future. This vast inquiry will ask not only what future the humans would create if the humans have the luxury of avoiding unfortunate circumstances that no serious sane human observer would want the humans to endure, but also what future would be created by whatever intelligent agents (“choosers”) the humans would create for the purpose of creating the future if the humans had the luxury . . . and also what future would be created by whatever choosers would be created by whatever choosers the humans would create . . . This “looping back” can be repeated many times.
I managed to avoid “desire”, “want” and “volition”. Unfortunately I only have time to write one of these today. I would do well to write a dozen.
The American War of Independence did not begin as a revolt against the idea of kings, but rather a revolt against one king who had overstepped his authority and violated the compact.
And then someone suggested a really wild idea...
Er, it is not as if there did not already exist many European governments without kings and not as if the English did not fight a civil war (1642-51) many of the losers of which emigrated to American colonies to escape persecution for their anti-monarchist views and for religious opinions that were persecuted largely because they correlated with anti-monarchist views.
I want to expand on Robin’s comment. Some have hypothesized that promoting crazy beliefs helps a ruling coalition keep hold of power because the coalition’s repressive efforts can be concentrated on the fraction of the population that shows signs of not believing the crazy beliefs. In other words, they can stay in power by cracking down on those who won’t get with the program.
To summarize, Michael Vassar offers Bison on the Great Plains as evidence that maybe farming was not clearly superior to hunting (and gathering) in the number of humans a given piece of land could support. Well, here is a quote on the Bison issue:
The storied Plains Indian nomadic culture and economy didn’t emerge until the middle of the eighteenth century. Until they acquired powerful means to exploit their environment—specifically the horse, gun, and steel knife—Indians on the plains were sparsely populated, a few bands of agrarians hovering on the margins of subsistence. Their primary foods were maize, squash, and beans. Hunting bison on foot was a sorry proposition and incidental to crops. It couldn’t support a substantial population.
Though Eliezer does not say it explicitly today, the totality of his public pronouncements on laughter leads me to believe that he considers laughter an intrinsic good of very high order. I hope he does not expect me to accept the high probability that humor is rare in the universe as evidence for humor’s intrinsic goodness. After all, spines are probably very rare in the universe, too. At least spines with 33 (or however many vertebrae humans have) are.
Eliezer does not explicitly say today that happiness is an intrinsic good, but he does contrast pebble sorting with “the human vision of a galaxy in which agents are running around experiencing positive reinforcement.”
I take it Eliezer does not wish to see the future light cone tiled with tiny computers running Matt Mahoney’s Autobliss 1.0. Pray tell me, what is wrong with such a future that is not also wrong with a future in which the resources of the future light cone are devoted to helping humans run around and experience positive reinforcement? Eliezer’s answer might refer to the difference between the simplicity of Autobliss 1.0 and the complexity of a human. Well, my reply to that is that it is relatively easy to make Autobliss more complex. We can even employ an evolutionary algorithm to create the complexity, increasing the resemblance between Autobliss 2.0 and humans. Eliezer probably has a reply to that, too. But when does this dialog reach the point where it is obvious that the distinction that makes humans intrinsically valuable and Autobliss 1.0 not valuable is being chosen so as to have the desired consequence? And did we not have a sermon some day in the last couple of weeks about how it is bad to gather evidence for a desired conclusion while ignoring evidence against the conclusion?
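Autobliss 2.0 is of course hypothetical, but the kind of evolutionary loop I have in mind is routine. A minimal sketch, assuming a toy bitstring genome and a stand-in fitness function (every name here is illustrative, not from Mahoney’s program):

```python
import random

def evolve(fitness, genome_len=16, pop_size=20, generations=100, seed=0):
    """Minimal truncation-selection evolutionary loop over bitstrings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]      # keep the fitter half
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(genome_len)] ^= 1  # one point mutation
            children.append(child)
        pop = parents + children            # parents survive (elitism)
    return max(pop, key=fitness)

# Toy fitness: count of 1-bits; it stands in for whatever measure of
# behavioral complexity one might use to make Autobliss more human-like.
best = evolve(fitness=sum)
```

Truncation selection plus point mutation is about the simplest scheme that works; the real dispute is over what fitness measure, if any, tracks intrinsic value.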
Could we please take the true confessions to private email?
Vassar: I think that the scientific lineages phenomenon requires more than a sentence or two of attention. Half of Nobel Prizes go to the doctoral students of other Nobel Laureates
Eliezer: This is insanity. Does no one know what they’re teaching?
The possibility that knowledge is more easily transmitted face-to-face than through books is no cause for despair. It might however be cause to increase the likelihood that you will contact the author to request a face-to-face meeting when you come across a good piece of writing on an important subject. I’d meet with you at no cost to you with the sole goal of helping you understand something I know, and I expect that there are many like me.
Newton and Darwin were perfectionists, and again most psychologists are idiots.
Do the words “atomic theory” have a single unambiguous meaning in the context you reply to? Or do you know somehow (telepathy?) the precise referent the writer refers to by the words?
Come on, Mellway. Search for a charitable interpretation of the writer’s words. Do not stop your search till you have found an interpretation of the words that makes the sentence non-foolish and non-false.