I will say that not everything which ends is a mistake, but that should not be taken to endorse having children—you’re already pregnant, aren’t you.
Isn’t that just a conflation of training data with fundamental program design? I’m no expert, but my impression is that you could train GPT-1 all you want and it would never become GPT-3.
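To make that distinction concrete, here is a rough sketch. The parameter formula is a crude textbook approximation and the configurations are public ballpark figures, not exact specifications; the point is only that a model’s size and shape are fixed by architectural choices made before training ever starts.

```python
# Crude illustration: capacity is set by architecture, not by training data.
# The formula and configs below are ballpark approximations, not exact specs.

def transformer_params(layers, d_model, vocab=50_000):
    """Rough estimate: ~12 * d_model^2 weights per layer, plus embeddings."""
    return layers * 12 * d_model**2 + vocab * d_model

gpt1_like = transformer_params(layers=12, d_model=768)     # ~0.1B parameters
gpt3_like = transformer_params(layers=96, d_model=12288)   # ~175B parameters

print(f"GPT-1-like: {gpt1_like / 1e9:.2f}B parameters")
print(f"GPT-3-like: {gpt3_like / 1e9:.0f}B parameters")
# Feeding the smaller model more data changes the values stored in its weights,
# never the number or arrangement of them.
```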
Addendum: I don’t think we should be able to prove that Life gliders lack values merely because they have none. That might sound credible, but it may also conflict with the von Neumann–Morgenstern utility theorem. Or did you mean we should be able to prove it by analyzing their actual causal structure, not just by looking at behavior?
Even then, while the fact that gliders appear to lack values does happen to be connected to their lack of qualia or “internal experience,” those look like logically distinct concepts. I’m not sure where you’re going with this.
I don’t think planaria have values, whether you view that truth as a “cop-out” or not. Even if we replace your example with the ‘minimal’ nervous system capable of having qualia—supposing the organism in question doesn’t also have speech in the usual sense—I still think that’s a terrible analogy. The reason humans can’t understand worms’ philosophies of value is that there aren’t any. The reason we can’t understand what planaria say about their values is that they can’t talk, not that they’re alien. When we put our minds to understanding an animal like a cat, which evolved for (some) social interaction, we can do so—I taught a cat to signal hunger by jumping up on a particular surface, and Buddhist monks with lots of time have taught cats many more tricks. People are currently teaching them to hold English conversations (apparently) by pushing buttons that trigger voice recordings. Unsurprisingly, it looks like cats value outcomes like food in their mouths and a lack of irritating noises, not some alien goal that Stephen Hawking could never understand.
If you think that a superhuman AGI would have a lot of trouble inferring your desires or those of others, even given the knowledge it should rapidly develop about evolution—congratulations, you’re autistic.
If I understand correctly, you freely admit that current law says this falls under the Jones Act—which is supported by the US military and even some of your cohort—but you believe we can carve out an exception just by changing the little-known Dredge Act. Why do you believe this?
You claim the Jones Act is just about shipping, but Wikipedia quotes the law thusly, emphasis added:
“...it is declared to be the policy of the United States to do whatever may be necessary to develop and encourage the maintenance of such a merchant marine, and, in so far as may not be inconsistent with the express provisions of this Act, the Secretary of Transportation shall, in the disposition of vessels and shipping property as hereinafter provided, in the making of rules and regulations, and in the administration of the shipping laws keep always in view this purpose and object as the primary end to be attained.”
(The source appears to be part of the Department of Transportation.)
Sort of reminds me of that time I missed out on a lucid dream because I thought I was in a simulation. In practice, if you see a glitch in the Matrix, it’s always a dream.
I find it interesting that we know humans are inclined to anthropomorphize, or see human-like minds everywhere. You began by talking about “entities”, as if you remembered this pitfall, but it doesn’t seem like you looked for ways that your “deception” could stem from a non-conscious entity. Of course the real answer (scenario 1) is basically that. You have delusions, and their origin lies in a non-conscious Universe.
The second set of brackets may be the disconnect. If “their” refers to moral values, that seems like a category error. If it refers to the stories etc., that still seems like a tough sell. Nothing I see about Peterson or his work looks encouraging.
Rather than looking for value you can salvage from his work, or an ‘interpretation consistent with modern science,’ please imagine that you never liked his approach and ask why you should look at this viewpoint on morality in particular rather than any of the other viewpoints you could examine. Assume you don’t have time for all of them.
If that still doesn’t help you see where I’m coming from, consider that reality is constantly changing and “the evolutionary process” usually happened in environments which no longer exist.
Without using terms such as “grounding” or “basis,” what are you saying and why should I care?
I repeat: show that none of your neurons have consciousness separate from your own.
Why on Earth would you think Searle’s argument shows anything, when you can’t establish that you aren’t a Chinese Gym? In order to even cast doubt on the idea that neurons are people, don’t you need to rely on functionalism or a similar premise?
What about it seems worth refuting?
The Zombie sequence may be related. (We’ll see if I can actually link it here.) As far as the Chinese Room goes:
I think a necessary condition for consciousness is approximating a Bayesian update. So in the (ridiculous) version where the rules for speaking Chinese have no ability to learn, the system implementing them also can’t be conscious.
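To make “approximating a Bayesian update” concrete, here is a minimal sketch; the hypotheses and numbers are invented purely for illustration:

```python
# A minimal Bayesian update, as a stand-in for what a static rulebook cannot do.
# The hypotheses and numbers are made up for illustration.

priors = {"topic_is_food": 0.5, "topic_is_weather": 0.5}
# How likely the observed word ("rain", say) is under each hypothesis:
likelihoods = {"topic_is_food": 0.05, "topic_is_weather": 0.6}

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}
print(posteriors)  # belief shifts toward "weather" after seeing the evidence

# A fixed lookup table gives the same output to the same input forever;
# it never revises any internal state, so it never approximates this step.
```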
Searle talks about “understanding” Chinese. Now, the way I would interpret this word depends on context—that’s how language works—but normally I’d incline towards a Bayesian interpretation of “understanding” as well. So this again might depend on something Searle left out of his scenario, though the question might not have a fixed meaning.
Some versions of the “Chinese Gym” have many people working together to implement the algorithm. Now, your neurons are all technically alive in one sense. I genuinely feel unsure how much consciousness a single neuron can have. If I decide to claim it’s comparable to a man blindly following rules in a room, I don’t think Searle could refute this. (I also don’t think it makes sense to say one neuron alone can understand Chinese; neurologists, feel free to correct me.) So what is his argument supposed to be?
Do you know what the Electoral College is? If so, see here:
“The single most important reason that our model gave Trump a better chance than others is because of our assumption that polling errors are correlated.”
Arguably that applies to claims about Donald Trump winning enough states—but Nate Silver didn’t assume independence, and his site still gave the outcome a low probability.
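A toy simulation shows how much the correlation assumption matters. The margins, error size, and correlation level below are invented for illustration; they are not FiveThirtyEight’s actual figures.

```python
# Toy Monte Carlo: chance of overcoming a 2-point polling deficit in all three
# of three swing states, with and without a shared (correlated) polling error.
import math
import random

def sweep_probability(rho, trials=100_000, margin=-2.0, sigma=3.0):
    """rho = share of polling-error variance that is common to every state."""
    wins = 0
    for _ in range(trials):
        shared = random.gauss(0, sigma)  # one nationwide polling miss
        states = [
            margin
            + math.sqrt(rho) * shared
            + math.sqrt(1 - rho) * random.gauss(0, sigma)
            for _ in range(3)
        ]
        wins += all(s > 0 for s in states)
    return wins / trials

print("independent errors:", sweep_probability(rho=0.0))  # roughly 1-2%
print("correlated errors: ", sweep_probability(rho=0.8))  # roughly 13-15%
# Treating the states as independent multiplies three small probabilities;
# allowing a shared error lets one systematic miss flip them all at once.
```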
Not exactly. MIRI and others have research on logical uncertainty, which I would expect to eventually reduce the second premise to induction. I don’t think we have a clear plan yet showing how we’ll reach that level of practicality.
Justifying a not-super-exponentially-small prior probability for induction working feels like a category error. I guess we might get a kind of justification from better understanding Tegmark’s Mathematical Macrocosm hypothesis—or, more likely, understanding why it fails. Such an argument will probably lack the intuitive force of ‘Clearly the prior shouldn’t be that low.’
I would only expect the latter if we started with a human-like mind. A psychopath might care enough about humans to torture you; an uFAI not built to mimic us would just kill you, then use you for fuel and building material.
(Attempting to produce FAI should theoretically increase the probability by trying to make an AI care about humans. But this need not be a significant increase, and in fact MIRI seems well aware of the problem and keen to sniff out errors of this kind. In theory, an uFAI could decide to keep a few humans around for some reason—but not you. The chance of it wanting you in particular seems effectively nil.)
Yes, but as it happens that kind of difference is unnecessary in the abstract. Besides the point I mentioned earlier, you could have a consistent set of axioms for “self-hating arithmetic” that proves arithmetic contradicts itself.
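Presumably this refers to the standard construction via Gödel’s second incompleteness theorem; a sketch, writing PA for first-order Peano Arithmetic and Con(PA) for its formalized consistency statement:

```latex
% Sketch of the construction assumed above (standard, not original to this comment).
\begin{itemize}
  \item Assume $\mathrm{PA}$ is consistent.
  \item G\"odel's second incompleteness theorem: $\mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA})$.
  \item Hence $T = \mathrm{PA} + \neg\mathrm{Con}(\mathrm{PA})$ is also consistent
        (otherwise $\mathrm{PA}$ would prove $\mathrm{Con}(\mathrm{PA})$).
  \item Yet $T \vdash \neg\mathrm{Con}(\mathrm{PA})$: a consistent theory which
        proves that arithmetic contradicts itself.
\end{itemize}
```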
Completely unnecessary details here.
Not if they’re sufficiently different. Even within Bayesian probability (technically) we have an example in the hypothetical lemming race with a strong Gambler’s Fallacy prior. (“Lemming” because you’d never meet a species like that unless someone had played games with them.)
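As a sketch of how a Gambler’s Fallacy prior can still be a perfectly coherent probability assignment (the exact formula below is my own invention for illustration):

```python
# A coherent but Gambler's-Fallacy-shaped prior over coin flips: the longer the
# current run of heads, the more probability the agent puts on tails next.
# Any rule that keeps each conditional in (0, 1) still defines a valid joint
# distribution over sequences.

def p_heads_next(history):
    run = 0
    for flip in reversed(history):          # length of the current heads streak
        if flip == "H":
            run += 1
        else:
            break
    return 0.5 / (1 + run)                  # longer streak -> expects tails

def sequence_probability(seq):
    """Probability this agent assigns to an entire flip sequence."""
    prob, history = 1.0, []
    for flip in seq:
        ph = p_heads_next(history)
        prob *= ph if flip == "H" else (1 - ph)
        history.append(flip)
    return prob

print(p_heads_next(["H", "H", "H"]))        # 0.125: thinks the coin is "due"
print(sequence_probability(["H", "H", "T"]))
# The probabilities of all sequences of a given length still sum to 1, so no
# probability axiom is violated; the agent is simply wrong about real coins.
```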
On the other hand, if an epistemological dispute actually stems from factual disagreements, we might approach the problem by looking for the actual reasons people adopted their different beliefs before having an explicit epistemology. Discussing a religious believer’s faith in their parents may not be productive, but at least progress seems mathematically possible.
How could correcting grammar be good epistemics? The only question of fact there is a practical one—how various people will react to the grammar coming out of your word-hole.
I’m using probability to represent personal uncertainty, and I am not a BB (Boltzmann brain). So I think I can legitimately assign the theory a distribution to represent uncertainty, even if believing the theory would make me more uncertain than that. (Note that if we try to include radical logical uncertainty in the distribution, it’s hard to argue the numbers would change. If a uniform distribution “is wrong,” how would I know what I should be assigning high probability to?)
I don’t think you assign a 95% chance to being a BB, or even that you could do so without severe mental illness. Because for starters:
Humans who really believe their actions mean nothing don’t say, “I’ll just pretend that isn’t so.” They stop functioning. Perhaps you meant the bar is literally 5% for meaningful action, and if you thought it was 0.1% you’d stop typing?
I would agree if you’d said that evolution hardwired certain premises or approximate priors into us ‘because it was useful’ to evolution. I do not believe that humans can use the sort of Pascalian reasoning you claim to use here, not when the issue is BB or not BB. Nor do I believe it is in any way necessary. (Also, the link doesn’t make this clear, but a true prior would need to include conditional probabilities under all theories being considered. Humans, too, start life with a sketch of conditional probabilities.)
OK, they gave him a greater chance than I thought of winning the popular vote. I can’t tell if that applies to the polls-plus model which they actually seemed to believe, but that’s not the point. The point is, they had a model with a lot of uncertainty based on recognizing the world is complicated, they explicitly assigned a disturbing probability to the actual outcome, and they praised Trump’s state/Electoral College strategy for that reason.
Yeah, that concept is literally just “harmful info,” which takes no more syllables to say than “infohazard,” and barely takes more letters to write. Please do not use the specialized term if your actual meaning is captured by the English term, the one which most people would understand immediately.