London Meetup 05-Jun-2011 - very rough minutes

This was posted to the London LessWrong mailing list, but I am crossposting here, as per David Gerard’s suggestion, in case anyone else finds this interesting.

These notes are from my perspective, so some things will be missing (and others added).

So here are my notes:

Bitcoin—Mostly how it’s quite interesting, but annoying that we can’t transfer money in from the UK. Ciphergoth and I were the interested parties. If anyone has any ideas, let us know.

Euthyphro Dilemma and Moral Realism—The first religion-themed conversation, mostly on the sorts of answers that come up to the dilemma and what constitutes moral realism anyway.

Evolutionarily Stable Strategies—The discussion of moral realism naturally led to the question of what morality is and how evolution gave rise to it.

Learning Decision Theory & Project Euler—Not sure how we got here, but I mentioned my desire that the people working on decision theory would make a Project Euler-type introduction to the material, so the rest of us can eventually join the conversation. I should probably write this up as a separate discussion post.

Rationality as Landgrab, and Definitions of Rationality—Apparently some high-ranking figures in the general futurist cluster dislike LessWrong for ‘appropriating the term rationality’. There may or may not be a point there, but we started discussing how the term can be defined, preferably in a LW-independent manner.

Libertarianism & LessWrong—There seems to be a high concentration of libertarians on LW, and it seems that the ban on talking politics has kept this from being discussed much. Which brings us to...

Talking Politics on LessWrong—There seems to be this norm against talking politics, which was inherited from other online communities. However, LessWrong is very much not like other communities. We can discuss religion and philosophy without flamewars breaking out, so why not try politics too? People on LW have been known to change their minds, so there is a good chance we will generate more light than heat.

Describing LW & Changing our minds—Leonhart described the site as ‘an Internet forum where people occasionally apologise and change their minds’. Everyone else felt this was a great formulation that should be noted down. A discussion followed on what LW has changed our minds about.

Historicity of Jesus—Back on the religious track, we discussed how atheists are often former Christians who looked into the historicity of Jesus. Cases in point: taryneast’s relatives and Lukeprog.

Making people admit cached thoughts—More or less what it says on the tin. What it is and whether anyone’s done it (not really).

Is the term ‘Dark Arts’ meaningful? - Perhaps one of the few discussions where there was active debate. A couple of good definitions for ‘dark arts’ came up, including ‘techniques that the other person would be pissed off about if they knew you were applying them’. My personal definition was ‘convincing techniques independent of the payload’, which is to say, tricks anyone can use to convince the untrained of almost anything.

Methods of Rationality meetup—By this point we’d moved on to the next pub. The discussion was whether to do a MoR meetup (yes) and how we would go about setting it up (coordinating with Eliezer to have a date set before he posts the next chapter). What remains is actually doing any of this.

Plausibility vs. Possibility—David Gerard’s idea. Ideas that seem especially plausible should raise a red flag: by the conjunction fallacy, adding detail makes a story feel more plausible while making it less probable, so plausibility can actually signal a reduced chance of being true.

Biweekly Meetup Dates—It has been decided by the council of elders (aka, those who bothered to turn up) that the biweekly meetups will be on the 1st and 3rd Sunday of each month, with every 4th one being a ‘big’ bimonthly meetup.

Psychology & Science—Is psychology a proper science? (Some of it yes, some of it no.)

Race & Intelligence—Another debated topic. On the one hand, it’s unlikely that intelligence would remain stable while so many other attributes vary among races. On the other hand, David Gerard mentioned that recent research raises questions about the studies that showed such differences. On the third hand, anyone seriously researching the topic without a view to disproving it will have their career destroyed, so, yeah...

Prevalence of Basic Knowledge—An anecdote by me about some fairly educated acquaintances who had basic misconceptions about evolution (oddly, not for religious reasons, I think), and a warning not to overestimate the general public’s education level due to the Typical Mind Fallacy.

Comedy as Anti-Compartmentalization—Another pet theory of mine. I was puzzled by the number of atheist comedians out there, whom people pay to see tell them that their religion is absurd. (Yes, Christian comedians exist too. Search YouTube. I dare you.) So my theory is that humour serves as a space where patterns and data from different fields are allowed to be superimposed on one another. Think of it as an anti-compartmentalization habit. Due to our brain design, compartmentalization is the default, so humour may be a hack to counter that. And we reward those who do it well with high status because it’s valuable. Maybe we should have transhumanist/rationalist stand-up comedians? We sure have a lot of inconsistencies to point out.

Spread of Atheism—The above developed into this. Has atheism saturated its audience, and will it stabilise? No clear outcome; I guess we’ll have to wait and see. I certainly hope not.

Wikipedia’s Epistemology—How Wikipedia determines truth. I’ll let David Gerard tell us what that was about.

The Larrikin-Wowser Dynamic—Kristoff mentioned this theory of how societies work through this fundamental tension. He can probably say more on this than I can.

The Myers-Kurzweil argument—It turns out the winner differs by how you frame the claims made. As far as I am concerned, of these two, whoever wins, we lose.

The Black Box experiment—The discussion turned to raising children, and I mentioned this experiment on how the children of other primates seem to do some things better than human children do, and what that tells us about our learning process. YouTube vid: http://www.youtube.com/watch?v=pIAoJsS9Ix8

Neuro-Linguistic Programming: Does it do anything? - DG says no, but it works by the power of telling people what to do.

End of notes.

That was a lot of text; if you made it down to here, you have my sincere congratulations.