So Harry has an advanced intelligence of questionable tendencies locked away, but it’s tantalizingly offering to be ultra useful to him if he’ll only give it freer rein outside of its box?
This is sounding awfully familiar...
I would normally visit even a Score:-22 post with 200+ comments, because I’ve found that such a post, however awful, may be worth opening just to hunt for the few most excellent clarifications or rebuttals it elicited.
A warning to others: my heuristic was wrong in this case. Few comments here even hint at what the hell is going on, and those suggested nothing more interesting than some extremely unlikely theological or parapsychological beliefs that Will might have latched onto and desired to “protect” us from. You could find more interesting and plausible basilisks in Lovecraft’s stories or Stross’ Laundry novels.
Alternatively, just look at the people who live the kind of life you want to live and see how they got there and follow their path
When I was in high school, I wrote and ran a prisoner’s dilemma simulation where strategies reproduced themselves via this mechanism. After every cell played several rounds against its neighbors, each examined itself and its neighbors to see how many points were accumulated, then either mutated randomly or copied its most successful neighbor.
I was trying to experiment in the fashion of vaguely described other simulations I’d read of, and maybe replicate their interesting result: reportedly initial random strategies were soon beaten by always-defect, which would then eventually be beaten out by tit-for-tat, which would then be beaten by always-cooperate, which would in turn be beaten when always-defect reappeared. Psychological/sociological/historical analogies are an interesting exercise for the reader.
But what did I get, instead? Overwhelming victory of a strategy I eventually called “gangster”. IIRC it was something like “start with a random high probability of cooperating, then if your opponent cooperates, you start always defecting, but if your opponent defects you start always cooperating”.
Sounds like a pretty awful strategy, right? And overall, it was: its resulting scores at each iteration were a sea of always-cooperated-against-defectors losers, punctuated by dots of lucky always-defected-against-many-cooperators winners. But the losers and the winners were using the same strategy! And because each new generation looked at that strategy’s great peak performance rather than its lousy average performance, there was no likelihood of switching to anything better.
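The “gangster” pathology is easy to reproduce in miniature. Here’s a hedged sketch (the class and payoff values are my own reconstruction from the description above, not the original simulation code): two copies of the same strategy play an iterated prisoner’s dilemma, and whichever one happens to defect first becomes the permanent “winner” while its twin becomes the permanent “loser”.

```python
import random

# Standard prisoner's-dilemma payoffs: PAYOFF[(my_move, their_move)].
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

class Gangster:
    """The 'gangster' strategy as described above: open by cooperating
    with some (usually high) probability, then lock in after seeing the
    opponent's first move: if they cooperated, always defect (exploit
    them); if they defected, always cooperate (appease them)."""

    def __init__(self, p_open, rng=random):
        self.p_open = p_open   # probability of cooperating on the first move
        self.locked = None     # becomes "C" or "D" after the first round
        self.rng = rng

    def move(self):
        if self.locked is not None:
            return self.locked
        return "C" if self.rng.random() < self.p_open else "D"

    def observe(self, their_move):
        if self.locked is None:
            self.locked = "D" if their_move == "C" else "C"

def play(a, b, rounds=10):
    """Iterated PD between two strategies; returns (score_a, score_b)."""
    sa = sb = 0
    for _ in range(rounds):
        ma, mb = a.move(), b.move()
        sa += PAYOFF[(ma, mb)]
        sb += PAYOFF[(mb, ma)]
        a.observe(mb)
        b.observe(ma)
    return sa, sb

# Force the asymmetric opening to make the point deterministic: same
# strategy, but the lucky first-round defector exploits its twin forever.
loser, winner = Gangster(p_open=1.0), Gangster(p_open=0.0)
print(play(loser, winner))  # -> (0, 50): one big winner, one total loser
```

In the full spatial version, each grid cell would then copy its highest-scoring neighbor or mutate, so selection keeps seeing the lucky winners’ peak scores rather than the strategy’s dismal average.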
Here I’ll make some of the sociological analogies explicit: looking at people who live the kind of life you want to live is a lousy way to pick a life path. It’s how gangsters are born, because every little hoodlum imagines themselves as one of the rich dealers’ dealers’ dealers at the top of the pyramid, not as one of the bottom rung dealers risking jail and death for near minimum wage. It’s how kids waste their time aiming at star entertainer and athlete careers, because they all imagine themselves as part of the 99.9th percentile millionaire superstars rather than as one of the mere 99th percentile B-listers or the 90th percentile waitstaff. It’s how people waste their salaries gambling, even—who doesn’t want to live a life as a multimillionaire who didn’t have to work for any of it? Other people did it, and all we have to do is follow their path...
This is a nitpicking digression, but I think it’s an important nitpick. “Pick a life path whose average results you prefer” is a great metastrategy, but following it means examining the entire lives of the 50th percentile schlubs. Instead, emulating your “heroes”, chosen based on the peak of their fame, is just common sense, which commonly fails.
I think, actually, scientists should kinda look into that whole ‘death’ thing. Because, they seem to have focused on diseases… and I don’t give a #*=& about them. The guys go, “Hey, we fixed your arthritis!” “Am I still gonna die?” “Yeah.”
So that, I think, is the biggest problem. That’s why I can’t get behind politicians! They’re always like, “Our biggest problem today is unemployment!” and I’m like “What about getting old and sick and dying?”
Norm Macdonald, Me Doing Standup
(a few verbal tics were removed by me; the censorship was already present in the version I heard)
PLAYBOY: So the experiment didn’t work?
[Craig] FERGUSON: No, the experiment always works. There’s no such thing as an experiment that doesn’t work. There are only results, but results may vary. Here’s what I learned:
The trouble is that larger rivers are likely to be referred to more often than smaller rivers, and you want the more-often-used concepts to get shorter names. Connecting river names is still good, but I think “the White Nile and the Blue Nile join to form the Nile” is a better way to do it.
Your characters can be mentally superior to you in at least three ways: they can think much faster than you can, they can independently think of things for which you needed outside help, and they can come to correct conclusions based on less evidence and/or less obvious evidence than you would have required.
a template of the sorts of things that are actually possible
Was this true at the macroscale too? The jet flying over my head says “no”. Artificial designs can have different goals than living systems, and are not constrained by the need to evolve via a nearly-continuous path of incremental fitness improvements from abiogenesis-capable ancestor molecules, and this turned out to make a huge difference in what was possible.
I’m also skeptical about the extent of what may be possible, but your examples don’t really add to that skepticism. Two examples (systems that evolved from random mutations don’t have ECC to prevent random mutations; systems that evolved from aquatic origins do most of their work in aqueous solution) are actually reasons for expecting a wider range of possibilities in designed vs evolved systems; one (dynamic systems may not be statically stable) is true at the macroscale too, and one (genetic code is vastly less transparent than computer code) is a reason to expect MNT to involve very difficult problems, but not necessarily a reason to expect very underwhelming solutions.
As someone pointed out on Reddit, it’s pretty suspicious that Harry figured everything out almost immediately after Snape hit him with a “Dispel Confusion”.
Strictly speaking, chain reactions do naturally occur, they’re just so rare that we never found one until decades after we knew exactly what we were looking for, so Fermi certainly didn’t have that evidence available.
Also, although I like your argument… wouldn’t it apply as well to fire as it does to fission? In fact we do have a world filled with material that doesn’t burn, material that oxidizes so rapidly that we never see the unoxidized chemical in nature, and material that burns only when concentrated enough to make an ignition self-sustaining. If forests and grasslands were as rare as uranium, would we have been justified in asserting that wildfires are likely impossible?
One reason why neither your argument nor my analogy turned out to be correct: even if one material is out of a narrow band of possible parameters, there are many other materials that could be in it. If our atmosphere were low-oxygen enough to make wood noncombustible, we might see more plants safely accumulating more volatile tissues instead. If other laws of physics made uranium too stable to use in technology, perhaps in that universe fermium would no longer be too unstable to survive in nature.
It bothers me that much of the focus on the “high ratios of positive to negative feedback lead to improved performance” hypothesis doesn’t seem to even mention, much less rule out, the much more obvious “improved performance leads to higher ratios of positive to negative feedback” explanation for correlations between the two. It seems like it would be quite easy to go overboard if that first interpretation were believed to be more widely true than it actually is...
I can’t seem to find it in my quotes file, but I recall once reading an interesting few paragraphs by someone explaining that capitalism allows them to “own” nearly everything they want in the world. In some sense I am the owner of a 16 inch telescope, a jet ski, a table with a gourmet meal at the best restaurant in the city, etc., regardless of whether I’ve gone out and bought those things and had them assigned to be my property, because at any time I could go out and buy them if the whim struck hard enough. The world is full of warehouses and store shelves and other buildings whose sole purpose is to store stuff-that-I-can-have-whenever-I-want-it. Even if the transaction costs are still high enough that I may end up foregoing some of those luxuries, just having the option is itself a kind of wealth.
And in a way this bounty of materialism leads one to be anti-materialistic. If I own all these wonderful things, why bother with the inconvenience of storing them in my own house until/unless I’m ready to really experience them?
Of course, if the Libertarian Party has actually put open borders in its election platforms, then tell me and I’ll update.
Their platform says, “Political freedom and escape from tyranny demand that individuals not be unreasonably constrained by government in the crossing of political boundaries. Economic freedom demands the unrestricted movement of human as well as financial capital across national borders.” They have some elaboration here.
My initial subconsciously anticipated outcome of the friendly AI problem was something like my initial anticipations regarding the Y2K problem: sure I could see a serious potential for disaster, but the possibility is so obvious that any groups competent enough to be doing potentially-affected critical work would easily be wise enough to identify and prevent any such errors well before they could be triggered.
These interviews have disabused me of that idea. We have serious computer scientists, even AI researchers, people who have probably themselves laughed at Babbage’s response to “if you put into the machine wrong figures, will the right answers come out?”, and yet they seem to believe the answer to “if you put into the machine wrong goals, will the right ethics and actions come out?” is “obviously yes!”
Have you heard the slogan, “The truth is too complicated to fit on a bumper sticker”? I’d wholeheartedly endorse that if its brevity didn’t make me suspicious.
The non-cynical explanation is that what makes soldiers unique is morale. We can encourage farmers and sewage engineers via steady monetary compensation, and we don’t have to worry that hazardous manure will make them break and run, and if 5% of them were to suddenly quit then we’d be okay eating 95% as much food and paying some overtime to maintain the pipes. With soldiers, shit really does turn lethal, a 5% retreat can quickly cascade into a rout, and less mercenary compensation (using status and esteem instead of just money) seems to make such a rout less likely.
Although I think Hayes’ points were apt and his treatment despicable, he may be missing a similar point: thinking of soldiers as “heroes” rather than just regular employees may also reduce people’s desire to risk their lives unnecessarily. You can see an overreach of the same effect in public attitudes toward the space program: death counts that would go unremarked in heavy construction projects are considered intolerable because the dying astronauts are heroes to us.
You don’t use rational argument because it’s the most effective way to convince a child; you use rational argument because it’s the most effective way to teach a child the use of rational argument. (Which, as a side benefit, eventually makes rational argument the most effective way to convince the child.)
“Arguments are soldiers” is practically the definition of democracy. In theory, if my arguments are persuasive enough, they will determine whether or not my neighbors or I can continue doing X or start doing Y without being fined, jailed, or killed for it. Depending on what great things I like to do or what horrible things I want to prevent my neighbors from doing, that’s an awfully powerful incentive for me to risk a few minds being killed.
Now, in practice we mostly live in near-megaperson cities in multi-megaperson districts of near-gigaperson countries, whereas my above theory mostly applies to hectoperson and kiloperson tribes. But my ape brain can’t quite internalize that, so the subconscious incentive remains.
But that’s not even the worst of it! I try to read a range of liberal, conservative, libertarian, populist etc. news and commentary, just so that the gaps in each don’t overlap so much… but it requires a conscious effort. Judging by the groupthink in reader comments on these sites, most people’s behavior is the opposite of mine. Why not? Reading about how right you are is fun; reading about how wrong you are is not.
It would be very easy for new would-be LessWrong readers to see the politics threads, jump to conclusions like “Oh, these people think they’re so smart but they’re actually a bunch of Blues! A wise Green like me should look elsewhere for rationality.” Repeat for a few years and the average LessWrong biases really do start to skew Blue, even bad Blue-associated ideas start going unchallenged, etc.
I think I would still love to read what LessWrong users have to say about politics. Probably on a different site. With unconnected karma and preferably unconnected pseudonyms.
I took the survey. My apologies for not doing so in every previous year I’ve been here, and for not finding time for the extra questions this year.
The race question should probably use checkboxes (2^N answers) rather than radio buttons (N answers). Biracial people aren’t that uncommon.
Living “with family” is slightly ambiguous; I almost selected it instead of “with partner/spouse” since our kids are living with us, but I suspected that wasn’t the intended meaning.