# TheMajor

Karma: 459
• Just to share my two cents on the matter, the distinction between abstract vectors and maps on the one hand, and columns with numbers in them (confusingly also called vectors) and matrices on the other hand, is a central headache for Linear Algebra students across the globe (and by extension also for the lecturers). If the approach this book takes works for you then that’s great to hear, but I’m wary of ‘hacks’ like this that only supply a partial view of the distinction. In particular, matrix-vector multiplication is something that’s used almost everywhere; if you need several translation steps to make use of it, that could be a serious obstacle. Also, the base map that limerott mentions is of central importance from a category-theoretic point of view and is essential in certain more advanced fields, for example in differential geometry. I’m therefore not too keen on leaving it out of a Linear Algebra introduction.

Unfortunately I don’t really know what to do about this; like I said, this topic has always caused major confusion, and the trade-off between completeness and conciseness is extremely complicated. But do beware that, based on only my understanding of your post, you might still be missing important insights about the distinction between numerical linear algebra and abstract linear algebra.
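To make the distinction concrete, here is a minimal sketch (my own toy example in Python with NumPy, not taken from the book under discussion) of one abstract linear map represented by two different matrices, depending on the choice of basis:

```python
import numpy as np

# One abstract linear map: T(x, y) = (x + 2y, 3y).
# Its matrix with respect to the standard basis:
M_std = np.array([[1.0, 2.0],
                  [0.0, 3.0]])

# A different basis B for R^2, with the basis vectors as columns:
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# The matrix of the *same* map with respect to basis B is P^{-1} M P:
M_B = np.linalg.inv(P) @ M_std @ P

# The two matrices have different entries, yet they represent one abstract
# map: applying M_B to B-coordinates yields the B-coordinates of T(v).
v = np.array([2.0, 5.0])          # a vector in standard coordinates
v_B = np.linalg.solve(P, v)       # the same vector in B-coordinates

Tv = M_std @ v                    # T(v) in standard coordinates
Tv_from_B = P @ (M_B @ v_B)       # T(v) computed via the B-representation

assert np.allclose(Tv, Tv_from_B)
```

The point of the sketch: the columns-of-numbers object changes when the basis changes, while the abstract map does not, and matrix-vector multiplication only computes the map once the coordinates are expressed in the matching basis.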

• I think ‘competent’ should in this context mean something like ‘has the ability to, after being pointed to a gap in the market, build and/or keep functional a company that fills this gap’. This agrees fully with what you said: there is an enormous amount of wiggle room between ‘total retard’ and this sense of competent (in fact, I think almost everybody lives in this wiggle room). Furthermore, I think it makes sense to naively think that the abundance of successful companies suggests a lot of people are competent in this sense, whereas I claim this is not the case.

• Very interesting observations! Personally I’d perhaps phrase it the other way around: not ‘incompetence is killing corporations’ but more something like ‘what changed in the past 70 years that allowed people to build long-living corporations back then and not now, assuming today’s regular company deaths are caused by incompetence?’. My personal guess is that either back when these long-living companies were founded (~1890s) there was much more low-hanging fruit on the market, allowing less efficient companies to still survive, or alternatively that today’s economic environment is much more risk-tolerant, so the selection for competence happens much more *after* founding a company.

I agree fully with the government bureaucracy remark, although I suspect there are a ton of other very important effects at work there too (for example, out of all organisations I expect governments in particular to have high accountability and regular run-ins with Chesterton’s fence, both of which increase bureaucratic load).

• I personally think we don’t need to posit a mechanism that explains why people’s wrong beliefs don’t cause immediate disaster for companies. In my worldview this is fully explained by selection effects in the market, both at the level of organisations and at the level of individual employees. Since long-term views are very hard to link to individual outcomes, the selection pressure is weaker here.

I’d like to point out that this does suggest that organisations and companies fail and go bankrupt regularly; we just don’t hear that much about the quick failures (which I think fits reasonably well with observations, but I haven’t looked into this all that much).

This is in fact also an (or at least my) answer to the non-rhetorical question of why anything works at all. I disagree with Kirkpatrick in attributing this to individuals, which seems to suggest there is some class of millions of managers who have attained some mystical level of competence that somehow doesn’t scale to groups.

• This is part of the meaning of ‘utility’. In real life we often have risk-averse strategies where, for example, a 100% chance at 100 dollars is preferred to a 50% chance of losing 100 dollars and a 50% chance of gaining 350 dollars. But, under the assumption that our risk-averse tendencies satisfy the coherence properties from the post, this simply means that our utility is not linear in dollars. As far as I know this captures most of the situations where risk-aversion comes into play: often you simply cannot tolerate extremely negative outliers, meaning that your expected utility is mostly dominated by some large negative terms, and the best possible action is to minimize the probability that these outcomes occur.

Also there is the following: consider the case where you are repeatedly offered bets of the example you give (B versus C). You know this in advance, and are allowed to redesign your decision theory from scratch (but you cannot change the definition of ‘utility’ or the bets being offered). What criteria would you use to determine if B is preferable to C? The law of large numbers (/central limit theorem) states that in the long run, with probability 1, the option with higher expected value will give you more utilons, and in fact that this number is the only number you need to figure out which option is the better pick in the long run.

The tricky bit is the question whether this also applies to one-shot problems or not. Maybe there are rational strategies that use, say, the aggregate median instead of the expected value, which has the same limit behaviour. My intuition is that this clashes with what we mean by ‘probability’ - even if this particular problem is a one-off, at least our strategy should generalise to all situations where we talk about probability 1/2, and then the law of large numbers applies again. I also suspect that any agent that uses more information than the expected value to make this decision (in particular, occasionally deliberately chooses the option with lower expected utility) can be cheated out of utilons with clever adversarial selections of offers, but this is just a guess.
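As a quick illustration of the long-run argument, here is a small Python simulation (reusing the dollar amounts from the risk-aversion example above as stand-in bets, since I don't have the exact B and C at hand):

```python
import random

# Simulate repeatedly choosing between a safe bet and a risky bet with
# higher expected value, to illustrate the law-of-large-numbers argument.

random.seed(0)

def bet_safe():
    # certain 100
    return 100.0

def bet_risky():
    # 50% lose 100, 50% gain 350 -> expected value 125
    return -100.0 if random.random() < 0.5 else 350.0

n = 100_000
total_safe = sum(bet_safe() for _ in range(n))
total_risky = sum(bet_risky() for _ in range(n))

# The per-round average of the risky bet converges to its expected value
# of 125 > 100, so over many repetitions the higher-EV option comes out
# ahead with probability approaching 1.
assert total_risky > total_safe
print(total_safe / n, total_risky / n)
```

In the one-shot case this simulation of course proves nothing by itself; it only shows that any strategy which generalises across repeated probability-1/2 situations is eventually punished for passing up expected value.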

• I think your first remark is exactly the point. If the visits are useless then this is a crappy doctor scamming money and time out of patients and insurance companies; if the visits are important then asking OP’s friend to come in (for being over 4 months late on a 3-month checkup) sounds very reasonable to me. I think Zyryab’s suggestion of asking a doctor to Turing Test this makes a lot of sense: maybe the checkups are more valuable in certain life stages/demographics/early after diagnosis? Maybe the checkup is something more complicated than recording the HbA1c levels? I’m surprised to hear that without outside medical information the doctor is guilty until proven innocent.

• I’m really surprised this is being downvoted so much.

As far as I can tell (and frankly I don’t care enough to put serious effort towards finding more information, but I do note nobody in the comments started with “I am a doctor” or “After talking about this with my own doctor, …”) OP’s friend was in a life-threatening situation, the solution to which is a renewed insulin prescription. On top of that, the doctor/medical establishment enforces the rule that people (only young people? only people who recently developed diabetes? There could be a good medical reason here, I don’t know) with Type I Diabetes have regular checkups.

Now I imagine there are all sorts of reasons for wanting to skip this checkup. Maybe the checkup isn’t needed, and is just a money scam (small aside: if my doctor tells me I need a regular checkup, this is not my first thought. But individual situations can vary). Maybe the doctor’s schedule is so unreasonable that it’s impossible to make an appointment. There could be thousands of valid reasons. The problem as I see it is that, from the point of view of both the doctor and the nurse, they are only negotiating over the checkup. You mention right at the start that the nurse offered a solution (“drop everything and come see your doctor tomorrow”) - from that point on the situation was no longer life-threatening! There was no realistic scenario in which this would cost your friend more than the plans they made for the next day! You were just haggling over what is more important: your friend’s schedule, or the rules set by the medical establishment that you need an active prescription to get insulin and you need a checkup to renew your prescription. Guess which one the nurse is going to find more important.

I understand if it feels like your friend is being blackmailed by the doctor (and in fact it seems like they are), but by refusing to visit the next day you are the ones who escalated the situation. And then escalated even further by threatening media exposure. I think from the point of view of the nurse your friend is showing rather hostile behaviour. I’ll take the liberty of going through the phone call as you posted it, filling in how I expect nurses to act:

> The nurse tells my friend he needs to go see his doctor, because it has been seven months, and the doctor feels he should see his doctor every three.

Probably standard procedure. At any rate this decision is out of the nurse’s hands, so they are just providing information here.

> My friend replies that he agrees he should see his doctor, and he has made an appointment in a few weeks when he has the time to do that.
> The nurse says that he can’t get his prescription refilled until he sees the doctor.

Still standard. Nurses don’t get to overrule conditions doctors set for medication; if the doctor says a checkup is needed, then the nurse has no way of handing over insulin.

> My friend explains that he does not have the time to drop what he is doing and see the doctor the next day. That he is happy to see the doctor in a few weeks. But that until then, he requires insulin to live.
> The nurse says that he can’t get his prescription refilled until he sees the doctor. That if he wants it earlier he can find another doctor.

Still the same issue. The nurse doesn’t have the authority to overrule the conditions set by the doctor. Also, I’m missing a sentence here: who introduced talking to the doctor the very next day?

> My friend explains again that he does not have the time to see any doctor the next day, nor can one find a doctor on one day’s notice in reasonable fashion. And that he has already made an appointment, and needs insulin to live. And would like to speak with the doctor.
> The nurse refuses to get the prescription filled. The nurse does not offer to let him speak to the doctor, and says that he can either wait, make an appointment for the next day, or find a new doctor.

So apparently making an appointment on one day’s notice is very doable on the doctor’s side. By this point you are solidly haggling about time, not medicine. I also think the nurse could have let you speak with the doctor here. But I think it’s also plausible that they get (or did in the past get) phone calls from all kinds of entitled weirdos who refuse to show up to appointments, and at this moment it’s really not clear your friend is not one of them. Why would their day plans be more important?

> My friend points out that without insulin, he will die. He asks if the nurse wants him to die. Or what the nurse suggests he do instead, rather than die.
> This seems not to get through to the nurse, because my friend asks these questions several times. The nurse does not offer to refill the prescription, or let my friend talk to the doctor.
> My friend says that if the doctor does not give him access to life saving medicine and instead leaves him to die, he will post about it on social media.
> The nurse now decides, for the first time in the conversation, that my friend should perhaps talk to his doctor.

Really? Your friend escalates from “I don’t want to visit you tomorrow” to “that means you must want me to die”, which of course the sensible nurse ignores, and your strategy was to repeat it a few more times? Yeah, you really showed them there. I bet the nurse immediately realised they were wrong the first time, and connected you through with the doctor before you got to the third repetition. From their point of view you’ve refused a good solution to the problem and are now just bugging them to make your life easier (who likes going to checkups? Nobody. So who haggles about not wanting to show up? Well, not everybody, but more than just your friend I bet). And at that point your strategy is to escalate even more by threatening media exposure, and put even more pressure on that poor nurse? I’m not surprised the doctor claimed you are blackmailing them after this.

What was the goal of your conversation with the nurse in the first place? You need a doctor’s prescription for the insulin, so shouldn’t you have aimed for talking with the doctor? And if that was your goal, what purpose did it serve to tighten the screws on the nurse? You should have acted like a model patient and calmly requested to speak with the doctor, who can (and did) overrule the normal medical process just to give you life-saving medicine.

I guess that became a far longer monologue than I planned; I’m not going to go through the phone call with the doctor because it’s just more of the same. I think OP is in the wrong here, at the very least in their interaction with the nurse. And I do agree that this is a bad medical system, but you really can’t throw the co-pay costs, the lack of automatic prescription extensions/sufficiently large prescriptions to last you a long time, and your interaction with the nurse and doctor on one heap and pretend this is all the fault of “the American medical system”. The overall structure sucks, but some of these people are just local actors who cannot make a change, and your friend threatened them to avoid having to change their schedule.

• 3 May 2019 16:54 UTC
13 points
AF
in reply to: Vika's comment

I have a bit of time on my hands, so I thought I might try to answer some of your questions. Of course I can’t speak for TurnTrout, and there’s a decent chance that I’m confused about some of the things here. But here is how I think about AUP and the points raised in this chain:

• “AUP is not about the state” - I’m going to take a step back, and pretend we have an agent working with AUP reasoning. We’ve specified an arcane set of utility functions (based on air molecule positions, well-defined human happiness, continued existence, whatever fits in the mathematical framework). Next we have an action A available, and would like to compute the impact of that action. To do this our agent would compare how well it would be able to optimize each of those arcane utility functions in the world where A was taken, versus how well it would be able to optimize these utility functions in the world where the rest action was taken instead. This is “not about state” in the sense that the impact is determined by the change in the ability of the agent to optimize these arcane utilities, not by the change in the world state. In the particular case where the utility function is specified all the way down to sensory inputs (as opposed to elements of the world around us, which have to be interpreted by the agent first) this doesn’t explicitly refer to the world around us at all (although of course implicitly the actions and sensory inputs of the agent are part of the world)! The thing being measured is the change in ability to optimize future observations, where what counts as a ‘good’ observation is defined by our arcane set of utility functions.

• “overfitting the environment” - I’m not too sure about this one, but I’ll have a crack at it. I think this should be interpreted as follows: if we give a powerful agent a utility function that doesn’t agree perfectly with human happiness, then the wrong thing is being optimized. The agent will shape the world around us to what is best according to the utility function, and this is bad. It would be a lot better (but still less than perfect) if we had some way of forcing this agent to obey general rules of simplicity. The idea here is that our bad proxy utility function is at least somewhat correlated with actual human happiness under everyday circumstances, so as long as we don’t suddenly introduce a massively powerful agent optimizing something weird (oops) to massively change our lives we should be fine. So if we can give our agent a limited ‘budget’ - in the case of fitting a curve to a dataset this would be akin to the number of free parameters - then at least things won’t go horribly wrong, plus we expect these simpler actions to have fewer unintended side-effects outside the domain we’re interested in. I think this is what is meant, although I don’t really like the terminology “overfitting the environment”.

• “The long arms of opportunity cost and instrumental convergence” - this point is actually very interesting. In the first bullet point I tried to explain a little bit about how AUP doesn’t directly depend on the world state (it depends on the agent’s observations, but without an ontology that doesn’t really tell you much about the world); instead all its gears are part of the agent itself. This is really weird. But it also lets us sidestep the issue of human value learning - if you don’t directly involve the world in your impact measure, you don’t need to understand the world for it to work. The real question is this one: “how could this impact measure possibly resemble anything like ‘impact’ as it is intuitively understood, when it doesn’t involve the world around us?” The answer: “The long arms of opportunity cost and instrumental convergence”. Keep in mind we’re defining impact as change in the ability to optimize future observations. So the point is as follows: you can pick any absurd utility function you want, and any absurd possible action, and odds are this is going to result in some amount of attainable utility change compared to taking the null action. In particular, precisely those actions that massively change your ability to make big changes to the real world will have a big impact even on arbitrary utility functions! This sentence is so key I’m just going to repeat it with more emphasis: the actions that massively change your ability to make big changes in the world - i.e. massive decreases of power (like shutting down) but also massive increases in power - have big opportunity costs/benefits compared to the null action for a very wide range of utility functions. So these get assigned very high impact, even if the utility function set we use is utter hocus-pocus! Now this is precisely instrumental convergence, i.e. the claim that for many different utility functions the first steps of optimizing them involve “make sure you have sufficient power to enforce your actions to optimize your utility function”. So this gives us some hope that TurnTrout’s impact measure will correspond to intuitive measures of impact even if the utility functions involved in the definition are not at all like human values (or even like a sensible category in the real world at all)!

• “Wirehead a utility function” - this is the same as optimizing a utility function, although there is an important point to be made here. Since our agent doesn’t have a world-model (or at least, shouldn’t need one for a minimal working example), it is plausible the agent can optimize a utility function by hijacking its own input stream, or something of the sort. This means that its attainable utility is at least partially determined by the agent’s ability to ‘wirehead’ to a situation where taking the rest action for all future timesteps will produce a sequence of observations that maximizes this specific utility function, which if I’m not mistaken is pretty much spot on the classical definition of wireheading.

• “Cut out the middleman” - this is similar to the first bullet point. By defining the impact of an action as our change in the ability to optimize future observations, we don’t need to make reference to world-states at all. This means that questions like “how different are two given world-states?” or “how much do we care about the difference between two world-states?” or even “can we (almost) undo our previous action, or did we lose something valuable along the way?” are orthogonal to the construction of this impact measure. It is only when we add in an ontology and start interpreting the agent’s observations as world-states that these questions come back. In this sense this impact measure is completely different from RR: I started to write exactly how this was the case, but I think TurnTrout’s explanation is better than anything I can cook up. So just ctrl+F “I tried to nip this confusion in the bud.” and read down a bit.
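To make the attainable-utility comparison from the first bullet point concrete, here is a toy sketch in Python (my own illustration with made-up numbers and function names, not TurnTrout’s actual formalism):

```python
# Toy version of the impact comparison: the impact of an action is how much
# it changes the agent's ability to attain each arcane auxiliary utility,
# relative to taking the no-op ("rest") action.

def impact(attainable, state, action, noop="noop"):
    """attainable[u][(state, action)] = how much of utility u the agent
    could still attain after taking `action` in `state` (a stand-in
    Q-value); impact is the average absolute change versus the no-op."""
    diffs = [abs(q[(state, action)] - q[(state, noop)])
             for q in attainable.values()]
    return sum(diffs) / len(diffs)

# Two arbitrary ("arcane") auxiliary utilities over a single state s0.
attainable = {
    "u1": {("s0", "noop"): 5.0, ("s0", "tweak"): 5.1, ("s0", "seize_power"): 50.0},
    "u2": {("s0", "noop"): 2.0, ("s0", "tweak"): 1.9, ("s0", "seize_power"): 40.0},
}

# A mild action barely moves attainable utility; a power-grabbing action
# moves it for *both* arbitrary utilities (instrumental convergence), so it
# is flagged as high impact even though neither utility mentions power.
assert impact(attainable, "s0", "tweak") < impact(attainable, "s0", "seize_power")
```

Note that nothing in this sketch refers to world-states: everything is phrased in terms of the agent’s own (stand-in) estimates of what it could still attain, which is exactly the “cut out the middleman” point.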

• I think the evidence presented is way too weak to support the type of conclusions drawn in this piece. I mean really, we’re computing doubling times by taking logarithms of estimated GDP, inserting an arbitrary offset in our definition of the horizontal axis, and then plotting THAT on a log-log scale? What were you expecting to find?

More specifically: the horizontal positions of the most recent data points are heavily influenced by the particular choice of 2020 offset. I’ve taken the liberty of repeating (I hope) Scott’s analysis with the data from the paper, and swapping the offset to 2050 or even 2100 bunches the last data points a lot closer together, allowing a linear fit to pretty much pass through them. I think some argument can be made that we need a higher time resolution in an era with a doubling time of ~20 years compared to an era with a doubling time of ~500 years, but I’m still not happy with how sensitive this analysis is and would love to hear why 2020 is a better choice than 2100.
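For illustration, here is a tiny Python sketch (with synthetic years, not the paper’s actual data) of why the horizontal axis is so sensitive to the offset:

```python
import math

# On a log scale of "years before the offset", the spacing between recent
# data points depends heavily on whether the offset is 2020 or 2100.

years = [1700, 1850, 1950, 1990, 2010]

def log_years_before(offset):
    return [math.log10(offset - y) for y in years]

x_2020 = log_years_before(2020)
x_2100 = log_years_before(2100)

# Horizontal spread of the three most recent points under each offset:
spread_2020 = x_2020[2] - x_2020[4]   # log10(70) - log10(10)
spread_2100 = x_2100[2] - x_2100[4]   # log10(150) - log10(90)

# With the 2100 offset the recent points bunch far closer together, so a
# straight line can pass through them much more easily.
assert spread_2100 < spread_2020 / 3
```

The same data, replotted with a different arbitrary offset, supports a visibly different story about whether the recent points deviate from the linear trend.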

Also I notice that Scott left a bunch of data points from the paper out of the graph. I can live with excluding the really early ones (before 10000 B.C.), but why do you skip over the ones near 0 A.D.? The 1100-1200s? And where are the data points with negative doubling times (i.e. declining GDP)? Maybe I missed it but I don’t see mention of these at all.

• Yes, I think you’re right. Personally I think this is where the charitable reading comes in. I’m not aware of Einstein specifically stating that there have to be hidden variables in QM, only that he explicitly disagreed with the nonlocality (in the sense of general relativity) of Copenhagen. In the absence of experimental proof that hidden variables are wrong (through the EPR experiments) I think hidden variables was the main contender for a “local QM”, but all the arguments I can find Einstein supporting are more general/philosophical than this. In my opinion most of these criticisms still apply to the Copenhagen Interpretation as we understand it today, but instead of supporting hidden variables they now support [all modern local QM interpretations] instead.

Or more abstractly: Einstein backed a category of theories, and the main contender of that category has been solidly busted (ongoing debate about hidden variables blah blah blah, I disagree). But even today I think other theories in that pool still come ahead of Copenhagen in likelihood, so his support of the category as a whole is justified.

• I feel like I’m walking into a trap, but here we go anyway.

Einstein disagreed with some very specific parts of QM (or “QM as it was understood at the time”), but also embraced large parts of it. Furthermore, on the parts Einstein disagreed with there is still to this day ongoing confusion/disagreement/lack of consensus (or, if you ask me, plain mistakes being made) among physicists. Discussing interpretations of QM in general and Einstein’s role in them in particular would take way too long, but let me just offer that, despite popular media exaggerations, with minimal charitable reading it is not clear that he was wrong about QM.

I know far less about Einstein’s work on a unified field theory, but if we’re willing to treat absence of evidence as evidence of absence here then that is a fair mark against his record.

• I think this is an interesting idea, but it doesn’t really intersect with the main post. The marginal benefits of reaching a galaxy earlier are enormous. This means that if we are ever in the situation where we have some probes flying away, and we have the option right now to build faster ones that can catch up, then this makes the old probes completely obsolete even if we give the new ones identical instructions. The (sunk) cost of the old probes/extra new probes is insignificant compared to the gain from earlier arrival. So I think your strategy is dominated by not sending probes that you feel you can catch up with later.

• Well, I still don’t have any experience with this. But maybe possible avenues include:

• Looking into moderation rules.

• Including some kind of reputation/point/reward system, and other methods to keep your users engaged.

• Tracking metrics on the growth of the Site, and ideally having some advance expectations/plans on how to respond to different rates of growth/decline.

• A more radical approach might be to give up phase 2 and beyond in their entirety, and settle for a target audience of people close enough to you that you can reasonably trust them.

The survivorship bias is a very valid point, but [not doing research on how to make websites grow] is also a poor strategy. Personally I’d still look into the advice, but I’m afraid what you’re trying to do is simply very difficult.

• Epistemic status: worried about effort/time lost.

I am by no means experienced with any of this, and seriously considered not writing anything at all. But it only takes me a bit of time (an hour max) to write why I feel the odds are very strongly against you, and if you are serious about pursuing this idea then even a low probability that my comment is helpful to you makes it worth writing on average. So here we go.

During my read of the post, top-to-bottom, at the part

> On matters of truth, it needs to support epistemic arguments for why we should believe or not believe particular claims. On matters of action, it needs to provide important pro/cons of taking that action. Site must have a method of allowing the best arguments to rise to the top.

my internal monologue went “The first bit is difficult but perhaps possible. The second is a mess. Oh dear, the third is basically impossible!”. The sentence immediately after, explaining this functionality would be the bare basics, shocked me quite a lot. I think aiming for the quoted section is nigh-impossible, and then we haven’t even started on the possible additional features you mention. Your post strongly reminds me of Benjamin Hoffman’s piece on Anglerfish (in my opinion worth reading in full), and also a bit of a segment (near the start) in one of Eliezer’s posts on security mindset, where the character Amber makes the mistake of thinking that the critical part of her startup is the technology, when really it is the security. I think in a similar manner your Site would, besides depending on the UI, the back-end, the marketing etc., also depend critically on its ability to continue growing during certain critical phases, and the lack of discussion of this as a plausible failure mode is making me rather pessimistic.

In my mind, conditional on Site eventually operating as intended, it should grow through several phases. First you have a low number of users (~100 regular users? Sorry, I don’t have experience with this) who basically filtered in from your social circles, and are able to aggregate their opinions/thoughts as intended. Then in the next phase Site grows more popular as people notice this is a valuable source of truth/plans/speculation, and they provide new questions and answers covering broader topics. After that there should be some third phase where Site is diverse and big enough that all those extra features you mentioned might become plausible to implement (I’ll come back to this later).

My problem lies with the second phase. Benjamin’s piece suggests that as soon as Site is big enough to have any real value, this immediately creates incentives for outsiders to try to abuse/free-ride on the project (for example through manipulating the questions or voting). This would be worse for discussions on *actions*, which is why at the start I mentioned that that is more difficult than discussing *truth*. Your wish to keep Site crowd-sourced makes it more difficult to guard against this phenomenon, and to me Eliezer’s writing on security mindset suggests that if you don’t treat this problem as central the odds are strongly against you. It is unclear to me what motivates people to keep coming back to Site in this second phase if they disagree with a large part of the demographic/consensus, or in general why echo-chamber effects would not apply. In fact, it is unclear to me why people would spend time participating in discussions outside their immediate interests at all (see also for example evaporative cooling).

Lastly I think a large part of Site would only function after you have some critical mass of users to have sufficient discussion on a lot of different topics. This is troubling, as it means those parts existing at all is conditional on Site being a success. In the spirit of “If you’re not growing you’re shrinking” I think a lot more time and effort should be focused on figuring out how to obtain and keep a userbase; introducing fancy features is downstream from this.

Sorry for being so critical and nonconstructive. I don’t know how to solve any of these problems, but like I said at the start it felt like a wrong strategy to just stay quiet. I hope I’m wrong about most/all of this, and let me close by mentioning again that I don’t have experience with this at all.

• In one of my social groups ‘send out a Doodle’ has become a meme indicating that a committee has failed at organising itself/is never going to be productive, and the chairman of the committee (thankfully we always assign this role) has to take action right now and suggest a meeting time and place.

In my personal experience Doodle is useful but also one of those trivial inconveniences (reading an email, clicking a link, checking 10 suggested times against your calendar, waiting for another email telling you a timeslot has been chosen, putting the time into your agenda), which is a downside.

• I don’t think this is different at all. It just sounds like nobody took charge in explicitly defining the sub-committees, so instead the socially savvy committee members self-organised to actually get things* done.

*these things may or may not align with the purpose of the committee.

• Prisoners and boxes: yeah, we are probably thinking of the same solution. Vg vaibyirf gur cebonovyvgl bs svaqvat n x-plpyr va na neovgenel ryrzrag bs gur crezhgngvba tebhc ba a ryrzragf.

Battleships: that’s the intended solution, yes. I don’t know of any nicer one.

• V’z abg fher vs jr’er raqvat hc jvgu gur fnzr fgenvtug yvar gubhtu? Znlor zl frg bs genafpraqragny rdhngvbaf unf gur fvzcyr fbyhgvba bs gur gnatrag, juvpu V’ir bireybbxrq, ohg V guvax gur natyr bs gur yvar j.e.g. gur pvepyr bs enqvhf 1/z fubhyq qrcraq ba gur png’f fcrrq.