pangel

Karma: 96
• An intuition is that red-black trees encode 2-3-4 trees (B-trees of order 4) as binary trees.

For a simpler case, 2-3 trees (i.e. B-trees of order 3) are either empty, a (2-)node with 1 value and 2 subtrees, or a (3-)node with 2 values and 3 subtrees. The idea is to insert new values in their sorted position, expand 2-nodes to 3-nodes if necessary, and bubble up the extra values when a 3-node would need to be expanded. This keeps the tree balanced.

A 2-3-4 tree just generalises the above.
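The insert-and-bubble-up idea can be sketched in code. This is a minimal illustration, not something from the original comment: I'm assuming a hypothetical `(values, children)` tuple encoding for 2-3 tree nodes, where leaves have an empty child list, and an overflowing 3-node hands a `(middle, left, right)` split back up for its parent to absorb.

```python
def insert(tree, x):
    """Insert x into a 2-3 tree; nodes are (values, children) tuples."""
    if tree is None:                       # empty tree: make a 2-node leaf
        return ([x], [])
    result = _insert(tree, x)
    if len(result) == 3:                   # the root itself split:
        mid, left, right = result          # the bubbled-up value becomes a new root
        return ([mid], [left, right])
    return result

def _insert(node, x):
    """Return an updated node, or a (middle, left, right) split on overflow."""
    values, children = node
    i = sum(v < x for v in values)         # sorted position of x
    if not children:                       # leaf
        new_values = values[:i] + [x] + values[i:]
        if len(new_values) <= 2:           # a 2-node simply expands to a 3-node
            return (new_values, [])
        # a 3-node would overflow: bubble the middle value up
        return (new_values[1], ([new_values[0]], []), ([new_values[2]], []))
    result = _insert(children[i], x)
    if len(result) == 3:                   # the child split: absorb the middle value
        mid, left, right = result
        new_values = values[:i] + [mid] + values[i:]
        new_children = children[:i] + [left, right] + children[i + 1:]
        if len(new_values) <= 2:
            return (new_values, new_children)
        # this node overflows in turn: keep bubbling up
        return (new_values[1],
                (new_values[:1], new_children[:2]),
                (new_values[2:], new_children[2:]))
    return (values, children[:i] + [result] + children[i + 1:])
```

Inserting 1 through 7 in order, for example, forces two root splits and leaves a perfectly balanced tree of 2-nodes.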

Now the intuition is that red means “I am part of a bigger node.” That is, red nodes represent the values contained in some higher black node. If the black node represents a 2-node, it has no red children. If it represents a 3-node, it has one red child, and if it represents a 4-node, it has 2 red children.

In this context, the “rules” of red-black trees make complete sense. For instance, we only count black nodes when comparing branch heights because those represent the actual B-tree nodes. I’m sure that with a bit of work, it’s possible to make complete sense of the insertion/deletion rules through the B-tree lens, but I haven’t done it.

edit: I went through the insertion rules and they do make complete sense if you think about a B-tree while you read them.
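The “red means part of a bigger node” reading can even be checked mechanically. A toy sketch, assuming a hypothetical `(color, left, value, right)` tuple encoding for red-black nodes (none of these names come from the comment): collapsing every red child into its black parent recovers the 2-3-4 node that parent stands for.

```python
RED, BLACK = "red", "black"

def to_234(node):
    """Collapse a red-black subtree into the 2-3-4 node it encodes.

    Returns (values, children): 1-3 sorted values per node, and either
    len(values) + 1 child nodes or [] for a leaf.
    """
    if node is None:
        return None
    color, left, value, right = node
    assert color == BLACK, "each 2-3-4 node is rooted at a black node"
    values, slots = [value], [left, right]
    # A red child is part of this bigger node: absorb its value and
    # splice its own children into the slot it occupied.
    i = 0
    while i < len(slots):
        child = slots[i]
        if child is not None and child[0] == RED:
            _, cl, cv, cr = child
            values.insert(i, cv)          # slot i sits between values[i-1] and values[i]
            slots[i:i + 1] = [cl, cr]
        else:
            i += 1
    children = [to_234(s) for s in slots]
    return (values, [] if all(c is None for c in children) else children)
```

A black node with one red child collapses to a 3-node, and one with two red children collapses to a 4-node, exactly as described above.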

2 Dec 2016 10:21 UTC
5 points
• Although I appreciate the parallel, and am skeptical of both, the mental paths that lead to those somewhat related ideas are seriously dissimilar.

• Could you expand on this?

...there are reasons why a capitalist economy works and a command economy doesn’t. These reasons are relevant to evaluating whether a basic income is a good idea.

• Sorry, “fine” was way stronger than what I actually think. It just makes it better than the (possibly straw) alternative I mentioned.

• No. Thanks for making me notice how relevant that could be.

I see that I haven’t even thought through the basics of the problem. “Power over” is felt whenever scarcity leads the wealthier to take precedence. Okay, so to try to generalise a little: I’ve never really been hit by the scarcity that exists, because my desires are (for one reason or another) adjusted to my means.

I could be a lot wealthier yet have cravings I can’t afford, or be poorer and still content. But if what I wanted kept hitting a wealth ceiling (a specific type, one due to scarcity, such that increasing my wealth and everyone else’s in proportion wouldn’t help), I’d start caring about relative wealth really fast.

• I see it as a question of preference, so I know from never having felt envy, etc. at someone richer than me just for being richer. I only feel interested in my wealth relative to what I need or want to purchase.

As noted in the comment thread I linked, I could start caring if someone’s relative wealth gave them power over me, but I haven’t been in this situation so far (something like boarding priority for first-class tickets is a minor example I did experience, but that’s never bothered me).

• Responding to a point about the rise of absolute wealth since 1916, this article makes (not very well) a point about the importance of relative wealth.

Comparing folks of different economic strata across the ages ignores a simple fact: Wealth is relative to your peers, both in time and geography.

In particular, I sincerely do not care about my relative wealth. I used to think that was universal, then found out I was wrong. But is it typical? To me it has profound implications about what kind of economic world we should strive for: if most folks are like me, the current system is fine. If they are like some people I have met, a flatter real wealth distribution, even at the price of a much, much lower mean, could be preferable.

I’m interested in any thoughts you all might have on the topic :)

• ...people have already set up their fallback arguments once the soldier of ‘...’ has been knocked down.

Is this really good phrasing, or did you manage to naturally think that way? If you do it automatically: I would like to do it too.

It often takes me a long time to recognize an argument war. Until that moment, I’m confused as to how anyone could be unfazed by new information X w.r.t. some topic. How do you detect you’re not having a discussion but are walking on a battlefield?

• I think practitioners of ML should be more wary of their tools. I’m not saying ML is a fast track to strong AI, just that we don’t know whether it is. Several ML people voiced reassurances recently, but I would have expected them to do that even if it were possible to detect danger at this point. So I think someone should find a way to make the field more careful.

I don’t think that someone should be MIRI, though; status differences are too high, they are not insiders, etc. My best bet would be a prominent ML researcher starting to speak up and giving detailed, plausible hypotheticals in public (I mean near-future hypotheticals where some error creates a lot of trouble for everyone).

• I meant it in the sense you understood first. I don’t know what to make of the other interpretation. If a concept is well-defined, the question “Does X match the concept?” is clear. Of course it may be hard to answer.

But suppose you only have a vague understanding of ancestry. Actually, you’ve only recently coined the word “ancestor” to point at some blob of thought in your head. You think there’s a useful idea there, but the best you can do for now is: “someone who relates to me in a way similar to how my dad and my grandmother relate to me”. You go around telling people about this, and someone responds “yes, this is the brute fact from which the conundrum of ancestry starts”. Another tells you you ought to stop using that word if you don’t know what the referent is. Then they go on to say your definition is fine: it doesn’t matter if you don’t know how someone comes to be an ancestor, you can still talk about an ancestor and make sense. You have not gone through all the tribe’s initiation rituals yet, so you don’t know how you relate to grey wolves. Maybe they’re your ancestors, maybe not. But the other says: “At least, you know what you mean when you claim they are or are not your ancestors.”

Then your little sister drops by and says: “Is this rock one of your ancestors?”. No, certainly not. “OK, didn’t think so. Am I one of your ancestors?”. You feel about it for a minute and say no. “Why? We’re really close family. It’s very similar to how dad or grandma relate to you.” Well, you didn’t include it in your original definition, but someone younger than you can definitely not be your ancestor. It’s not that kind of “similar”. A bit of time and a good number of family members later, you have a better definition. Your first definition was just two examples, something about “relating”, and the word “similar” thrown in to mean “and everyone else who is also an ancestor.” But similar in what way?

Now the word means “the smallest set such that your parents are in it, and any parent of an ancestor is an ancestor”...“union the elders of the tribe, dead or alive, and a couple of noble animal species.” Maybe a few generations later you’ll drop the second term of the definition and start talking about genes, whatever.
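That “smallest set” phrasing is just a transitive closure, which is easy to state in code. A toy sketch (the `PARENTS` mapping and the names in it are invented purely for illustration):

```python
# Invented toy data: who each person's known parents are.
PARENTS = {"you": ["dad", "grandma"], "dad": ["great-grandpa"]}

def ancestors(person):
    """Smallest set containing person's parents and every parent of an ancestor."""
    found = set()
    frontier = list(PARENTS.get(person, []))
    while frontier:
        p = frontier.pop()
        if p not in found:
            found.add(p)                        # p is an ancestor...
            frontier.extend(PARENTS.get(p, []))  # ...so p's parents are too
    return found
```

The definition is crisp even though the original two-example “similar to dad and grandma” version never mentioned recursion at all.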

My “fuzziest starting point” was really fuzzy, and not a good definition. It was one example, something about being able to “experience” stuff, and the word “similar” thrown in to mean “and everyone else who is conscious.” I may (kind of) know what I mean when I say a rock is not conscious, since it doesn’t experience anything, but what do I mean exactly when I say that a dog isn’t conscious?

I don’t think I know what I mean when I say that, but I think it can help to keep using the word.

P.S. The final answer could be as in the ancestor story, a definition which closely matches the initial intuition. It could also be something really weird where you realize you were just confused and stop using the word. I mean, the life force of vitalism was probably a brute fact for a long time.

• As an instance of the limits of replacing words with their definitions to clarify debates, this looks like an important conversation.

The fuzziest starting point for “consciousness” is “something similar to what I experience when I consider my own mind”. But this doesn’t help much. Someone can still claim “So rocks probably have consciousness!”, and another can respond “Certainly not, but brains grown in labs likely do!”. Arguing from physical similarity, etc. just relies on the other person sharing your intuitions.

For some concepts, we disagree on definitions because we don’t actually know what those concepts refer to (this doesn’t include concepts like “art”, etc.). I’m not sure what the best way to talk about whether an entity possesses such a concept is. Are there existing articles/discussions about that?

• Straussian thinking seems like a deep well full of status moves!

• Level 0 - Laugh at the conspiracy-like idea. Shows you are in the pack.

• Level 1 - As Strauss does, explain it / present instances of it. Shows you are the guru.

• Level 2 - Like Thiel, hint at it while playing the Straussian game. Shows you are an initiate.

• Level 3 - Criticize it for failing too often (a bad thinking attractor; ideas that are hard to check and deploy the usual rationality tools on). Shows you see through the phyg’s distortion field.

• You probably already agreed with “Ghosts in the Machine” before reading it, since obviously a program executes exactly its code, even in the context of AI. Also obviously, the program can still appear to not do what it’s supposed to if “supposed” is taken to mean the programmer’s intent.

These statements don’t ignore machine learning; they imply that we should not try to build an FAI using current machine learning techniques. You’re right, we understand (program + parameters learned from dataset) even less than (program). So while the outside view might say: “current machine learning techniques are very powerful, so they are likely to be used for FAI,” that piece of inside view says: “actually, they aren’t. Or at least they shouldn’t be.” (“learn” has a precise operational meaning here, so this is unrelated to whether an FAI should “learn” in some other sense of the word).

Again, the fact that a development has been successful or promising in some field doesn’t mean it will be as successful for FAI, so imitation of the human brain isn’t necessarily good here. Reasoning by analogy and thinking about evolution is also unlikely to help; nature may have given us “goals”, but they are not goals in the same sense as: “The goal of this function is to add 2 to its input,” or “The goal of this program is to play chess well,” or “The goal of this FAI is to maximize human utility.”

• I have met people who explicitly say they prefer a lower gap between them and the better-offs over a better absolute level for themselves. IIRC they were more concerned about ‘fairness’ than about what the powerful might do to them. They also believed that most would agree with them (I believe the opposite).

• Gentzen’s Cut Elimination Theorem for Non-Logicians

Knowledge and Value, Tulane Studies in Philosophy Volume 21, 1972, pp. 115-126

• Being in a situation somewhat similar to yours, I’ve been worrying that my lowered expectations about others’ level of agency (with elevated expectations as to what constitutes a “good” level of agency) have an influence on those I interact with: if I assume that people are somewhat influenced by what others expect of them, I must conclude that I should behave (as far as they can see) as if I believed them to be as capable of agency as myself, so that their actual level of agency will improve. This would work on me; for instance, I’d be more generally prone to take initiative if I saw trust in my peers’ eyes.