# Filipe Marchesini

Karma: 212
• I haven’t exercised regularly in years, and last week I started thinking about how bad the consequences could be for me. I decided to do something about it. I wasn’t in the mood, so I thought, “well, I’ll do 10 push-ups. Maybe it’s not much, but it’s better than nothing.” And I did it. You said “make it ridiculously easy”, and now I’ve just done 15 push-ups. Interesting. This is really easy. And I will do it again. Just a little more next time.

• Epistemic status: babble all the way down, no pruning. But I believe my approach is better than most of the other answers here.

The error the other LWers make is not separating the evaluation of lessons from the evaluation of tests.

Students should be allowed to give good teachers a bonus. At any moment during a lesson, students should be able to rate the teacher’s performance on some metrics. Think of a mobile application that does that. You know how, when you take an Uber ride, you rate it immediately after finishing? We should have the same possibility of rating teachers after their lessons (up to some limit, e.g., if you had your lesson on Monday, you can’t rate it a month later; you have up to a week to rate the lesson). The teacher should be paid a bonus when he gets good scores. This bonus would be added, lesson by lesson, to the teacher’s account.

Let’s say each week I have three different lessons, with professors A, B and C.
Professor A gives me 2 hours of lessons per week.
Professor B gives me 4 hours of lessons per week.
Professor C gives me 6 hours of lessons per week.

For each two-hour lesson, the student gains one point to spend. So I have 6 points to spend on professors A, B and C in any way I choose.

Professor A: I’ve just watched his lesson and I loved it. I give him 3 points. Professor B is good too, I like him, but I will give him just two points. Professor C is not that good a teacher, but he seems to be working hard on these particularly difficult topics, so I’ll give him one point.

At the end of the month, good teachers would be rewarded for how good their performance was on THE LESSONS. I haven’t spent time thinking about a good function to convert the scores a teacher receives each week into money, but it doesn’t seem hard to create a fair one.

We should have a separate rating system for evaluating the tests applied by the teacher, so we can separate the feelings that appear in our hearts when we compare the quality of the lessons to the difficulty of the problems the professor poses on his test. We all know that when we do badly on an exam, “that’s the teacher’s fault”. So this separate system would be stricter, asking several questions like: “How difficult was this test? How many hours did you study before taking it? Were the questions on the test related to things taught in the lessons? How does the difficulty of the test questions compare to the difficulty of the lessons’ questions? Leave a comment about the test in the following entry box.” Obviously I haven’t pruned these questions, they just arrived in my mind, but there certainly exists a very good set of questions that would let us investigate how well teachers perform at creating tests, and also reward them when we detect it.

Thinking again about the first system, it should also have some questions about the lesson: “How well does the professor explain the concepts? How organized is he? Did you learn the concepts? How do you rate the difficulty of the topics this teacher is trying to explain to you? [Leave here whatever questions you believe would improve this questionnaire.] Leave a comment about the lesson in the following entry box.”

A student shouldn’t be required to answer these questions in order to give all his points to a teacher. But we could weight the student’s points by how many questions he answered. For example, if I gave you 3 points and said why, that weighs more than a student who gives you 3 points without explaining why. A justified rating is worth more than an unjustified rating.

1. This advantages teachers with larger classes.

Your reward function can take into consideration the number of students who participated in the lesson and the number of students who rated the professor, and you could also average the scores. Come on, you can create a function that is fair for any class size; you just have to think about which function you will use.
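One candidate function along these lines (purely a sketch; the neutral prior, prior weight, and money-per-point rate are arbitrary constants I chose for illustration):

```python
# Average the points rather than summing them, so big classes don't
# automatically win, and shrink the average toward a neutral prior
# when few students rated, so tiny classes can't dominate by luck.

def lesson_bonus(points: list[float], class_size: int,
                 prior: float = 1.0, prior_weight: float = 5.0,
                 rate_per_point: float = 10.0) -> float:
    """Convert one lesson's ratings into a money bonus."""
    n = len(points)
    if class_size == 0:
        return 0.0
    # Bayesian-style shrinkage: acts as if prior_weight phantom
    # students had each given the neutral `prior` score.
    shrunk_avg = (sum(points) + prior * prior_weight) / (n + prior_weight)
    # Reward teachers whose students actually bother to rate them.
    participation = n / class_size
    return shrunk_avg * participation * rate_per_point
```

With this shape, a 5-student class and a 200-student class with the same average score and the same participation rate earn the same bonus, which is one way to read “fair for any class size”.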

Where does the money come from?

Diminish all salaries by x%. Now you can redistribute this money more fairly, proportional to performance.

My second-favorite teacher in undergrad was relatively unpopular because he taught very difficult classes, at least some of which were required to graduate.

That’s why it is important to evaluate the LESSONS every week. And when the test comes, that is a different evaluation. This professor was unpopular due to difficult tests, not bad lessons, right?

Most universities already have systems where students evaluate their teachers at the end of the year, and the scores do figure into the administrative decisions of the university.

That’s the problem. At the end of the year you are evaluating the “teacher”, which means:

If I find him a good professor and give him +5 points, but I did badly on his tests and give −10 points out of my bad feelings about the test, the final evaluation is “this teacher is bad” == −5.

If the system rates week by week, we could detect misuse of the system if we suddenly see bad lesson ratings close to the test application (right after the test, for example).

I don’t think this is how market wages work. If it is known that the average teacher gets a $100 bonus, the school will offer $100 less in base pay than it would otherwise.

Maybe not right away, when the change is introduced. But in the following years, wages will rise more slowly than they would otherwise, until balance is achieved.

It doesn’t seem bad to pay a little less to the average teacher with average lessons, and a little more to the above-average teacher with above-average lessons. It seems like arbitrage: you give good lessons, you earn more. Why not? And if you can now detect which teachers are much worse than average, you can fire them and attract even more students who are interested in this school full of good teachers, because the bad ones can’t stay.

• I don’t think so. For example, follow these instructions:

1. Say you are a poor guy in a poor country.

2. Say you luckily got a computer when you were a child.

3. Say that while you were studying AI, you found LW.

4. Say bad events happen to you/your family, and now you are in urgent need.

Now you can see that, although most LW readers are not like this guy, this guy is among LW readers. My point is that we should financially support this guy, independently of whether he belongs to LW or not. I would say it is easy to help him and we have a reason to support him, and the fact that he reads LW doesn’t change the facts of his life. Again, although it is not common, we should be prepared to detect and solve this kind of unfortunate situation. At least this is something I would do if I had enough resources to help.

• Why would they?

Sometimes non-needy people want to help other people in need. If you were looking to maximize happiness points across the world, for example, you would gain more points by helping those members in need.

Considering that LW readers are mostly rich Americans

There is a non-zero number of members suffering financially; that’s why I asked. It would be too easy for some people here to make this number go to zero.

• My question was really stupid. Actually, I was thinking, “I would like to spend at least 200 hours on this project, but it seems I won’t get any money from it; maybe I could ask LW members if they want to support it financially.”
A better question is “Can I ask you for money to help me build software that may help you?”, or perhaps the answer is simply “it is inappropriate to ask for money on LW; the platform discourages this”.

Disclaimer: I am still not sure this is the correct question. Anyway, I am developing some helper tools, and although I won’t monetize them directly, it would be good to get some money from them, because I am not someone with so much money that I no longer need any from the community.

• Should LW members support each other financially?

• If my stated and/or revealed preferences are that I don’t value joining the elite class very much, is that wrong in either an instrumental or terminal sense?

Assuming you haven’t miscalculated the value of joining the elite class, I believe it is wrong to spend energy on being labeled “elite”. If you lost something you had to protect while you wasted your time on useless pursuits, like trying to “join the elite” by getting some very specific superior pedigree, then you took a very poor instrumental action. It all depends on what you actually want and how joining the elite will help you achieve it. But it seems obvious that there are several ways of achieving anything you want without having to join the elite, unless your terminal value is being labeled elite by some specific set of people.

For people who do seem to value it a lot, either for themselves or their kids (e.g., parents obsessed with getting their kids into an elite university), is that wrong in either an instrumental or terminal sense?

That seems wrong if there are less costly and much faster ways to achieve what the parents actually want for their kids without making them participate in the “become elite” rituals. Maybe the parents want their kids to be seen as good people, respected among the members of the tribe, without financial troubles. If elite people have these properties, you make your kids participate in the rituals needed to get them labeled elite (parents use the “elite” label here as a proxy for status, respect and financial support). But that’s a bad choice when parents discover there are several cheaper ways of achieving the same ends. And it’s a bad choice when parents discover, in the future, that the proxies used in the past to filter good people from bad people are no longer relevant. I believe what parents actually want is not just their kids being seen as good people, but their kids actually being good people. If they become too obsessed with raising elite kids, what happens when they discover their elite kids are not actually good people? Given the weak correlation between being actually good and participating in elite rituals, I believe it is wrong to push your kids to become elite kids; you should focus on making them good, respectable and rich. Otherwise, if the correlation is strong (between participating in what you call elite rituals and becoming good, respectable and rich), you should make your kids participate in these rituals.

• Yes, we can hold intellectuals to public standards similar to athletes’. Using GPT-4/5, we could create a set of questions to check whether the intellectual can answer them correctly while avoiding every kind of bias already explained here on LW. For each bias explained before, we can create new questions that show when a human falls for it, assigning a new score to that human. I would like each human to write down all his knowledge with the help of an automatic writing system; we could create a visual tree of all the knowledge the system detected the human acquired in the past, and evaluate how well he performs at answering questions about the fields he has visited or claims to know about. What’s the point of asking for your credentials when I can evaluate your knowledge in real time with GPT-n systems?

On the tree of knowledge we could see which humans score higher in which domains and why, and which questions they can answer that others can’t. Don’t ask me for my credentials; ask me a hard question, or give me a hard problem to solve, and let’s see who solves it first or better. GPT-n could babble about the solutions presented by different humans, and another group of humans who score high in these domains could also rate/evaluate the solutions of others, choosing the score they assign to each solution.

• Yes, jungofthewon, and AI going well will also give all of humanity the ability to get an adequate quantity of super-delicious green food, giving everyone unlimited access to super-delicious, perfectly sustainable, healthy and ethical food.

Also, AI going well will provide humanity with the best algorithm for allocating different people to different places, choosing the best place to accommodate every person on earth. You wouldn’t believe that living with this specific set of 7 people in this specific city and in this specific house would give you the maximum expected happiness points you could ever achieve just by choosing the right people to live with.

Furthermore, a perfect, self-sustaining, large-scale medical diagnostic system, with the cumulative knowledge of the best medical sources on earth and the best statistical analysis of all data from all patients on earth, could ensure that no person on earth ever develops any kind of disease, because the system would always help you avoid them.

I feel optimistic about the future when I think about the possibility of choosing an AI system as my political representative, one I choose to coordinate my resources toward optimizing large-scale systems of education, health, safety, housing, basic sanitation and entertainment. I would give all my resources to such a system if it proved to me that it could handle these systems better than traditional human political representatives, and even more if it provided us with drones to protect our whole community from human systems that threaten our peaceful way of coordinating resources for all living beings.

We don’t know how to allocate the best person to each possible task for optimizing some human system. I would be in awe if we got an AI system that could choose the best job for each human, based on their past data plus their iterated input about what they want, what they like to do, and what they would like to be doing.

Instead of great companies in the traditional sense, with problems being solved by a bunch of humans, I believe in great ML algorithms running on decentralized systems that could be built in the next decade, where the “algorithm product” is coordinating humans toward more happiness points, more interactions among them, and more health and fulfilling actions, by helping them choose the best actions to optimize their own systems. For example, using GPT-5 we could actually describe in natural language, “I would like a super-cheap, easy-to-build automatic ethical green food production system that uses the least quantity of resources”, and it would just tell me a set of actions to take to get that, and I could distribute it to the community.

• Getting an extra $1000/month wouldn’t suddenly create a bunch of entrepreneurs and revolutionize the economy

Obviously it would. You are just underestimating the number of people in bullshit jobs because they lack the capital or motivation for anything greater. I will not focus on the obvious justifications for UBI, like “the fact that technological unemployment is only going to get worse. Nearly half the population is employed in jobs that could be replaced by automation, and that number will only get higher as time goes on.”

Sometimes people just need help. Especially the elderly. Maybe they’re sick and can’t afford medicine. Maybe they’re just lonely and don’t have many family members to speak to. Maybe they’ve been out of work for so long that they’ve fallen off the grid. A simple cash payment could help alleviate some of the stress they’re feeling. Now, this isn’t to say a basic income would fix all these problems. Construction of a building can’t begin until all the materials have arrived. A basic income would alleviate some of the stress of poverty, but it can’t be the only solution. People still need access to medical care, and they need other sources of support. However, a UBI could help provide that support when those services aren’t available.

Maybe you are focusing on the fishermen thing too much; what about focusing on “countable happiness points”? What about counting the number of people who spend 180 hours monthly cutting bread in half (Subway workers)? What about counting all the jobs that require 180 hours in subhuman conditions just to have the minimum standing to ask for food for other members? Then we get sick and no one cares; beg for food if you have any friend to help you. This is fairly common in my country. Not even 40% of workers in my country earn more than $167 monthly.
These are 54 million people in probably-bullshit 180-hours-a-month jobs. Just shut up and multiply.

Consider my friend who is driving for Uber 12–14 hours each day to earn $400.00 monthly in my country. Even though he started studying HTML, CSS, etc., getting on the path to something bigger, he needs the money now. He can’t focus on building his online business. His father died and no one cares about him. He does bullshit things (like driving for Uber) to get money and afford the basic bills: rent, electricity, food, internet. The next time he gets sick he will ask me for help. And what about me? I can’t focus 100% on solving real-world problems and creating online businesses, because I have to spend a lot of time begging for money to pay the bills my father used to pay and to help the peers no one helps. That includes creating bullshit software for random people on online freelancing websites, and giving online math/physics private lessons to high school students. If everyone were getting a basic income, I would be studying, programming and inventing new things 100% of the time, not trying to get the next $5.00 to pay the monthly $150.00 bill that is impossible to escape, and I wouldn’t need to be working all the time to help the members who earn less than me and have no formal education and no idea how to get money. I am basically wasting my hours because otherwise the higher-earning members of my tribe would call me “a useless member who is trying to understand how to solve these unimportant problems”, and they wouldn’t offer me existential protection (roof, bread, and support when sick). My partner spends 140 hours to earn $200.00. If she could spend those 140 hours a month studying and doing free random search on the internet, she would be discovering, creating and contributing, instead of complaining about how life is shit and suicide is an option for overcoming problems.

Every peer of mine is doing bullshit, because everyone just wants a roof, bread, a bed and an internet connection. How can you desire anything beyond that when you don’t even have that guaranteed?

Maybe they pick up a part-time job, like at the grocery store, where they can bag groceries in the evenings. Maybe they start doing nails out of their house once a week. Maybe they buy a single-user franchise business like a cupcake truck. Maybe they create profitable lemonade stands on every corner. My mom decided to sell candies. The point is, a basic income for these folks doesn’t just increase their quality of life; it also vastly grows the economy. The more money people have, the more they spend. And a UBI would give everyone more money, even if just enough to cover their basic needs.

Do you want a bunch of entrepreneurs? Give people enough leisure time (by giving them money), and then you will see what happens. Just bet with me.

• For me the idea of expansive translations is fantastic. Every time I read a new post on LessWrong that brings important information to the table, I think about translating it into Portuguese and bringing the information to the members of my tribe. But obviously I don’t think about translating literally, word for word, because I can see the loss of information that would bring. I know exactly how I could write it in Portuguese in a way that would produce the sensations intended by the original author of the post, considering all the cultural nuances and inferential distances. When you really know more than one language, you can see why and when it is a bad idea to translate literally.

So how could we improve an expansive translation system? Suppose I took this post from LessWrong and translated it into Portuguese. Then I would post the translation on a software platform for expansive translations of arbitrary sites: our new expansive-translations dot com, or our new Chrome extension.

Translators on the platform could give a score (from 0 to 10) for how good that translation looked for different translation formats: translations for children, translations for people with little or no math background, literal translations, translations focused on people with visual or auditory weaknesses, etc. Also, people who came into contact with those translations could give a grade for how easy it was to understand the subject matter.

Thus, we could create a market for expansive translations aimed at people of different styles. For example, the system could consider that translations by people with a mathematical/computational background similar to mine would probably please me more than expansive translations aimed at a lay audience. Obviously this would depend on the subject matter, because I am a complete layman when it comes to various subjects, but in general the similarity between my profile and the translator’s profile could be a proxy for finding good expansive translations. Also, the score I assign to each expansive translation can be used to understand what kind of expansive translation fits me best.
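The profile-similarity proxy could be sketched like this (a minimal illustration; representing a user's background as a vector of self-reported familiarity scores is my own assumption about how the profiles would be encoded):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two background-profile vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_translations(reader: list[float],
                      translations: dict[str, list[float]]) -> list[str]:
    """Sort translator ids by how similar their profile is to the reader's."""
    return sorted(translations,
                  key=lambda t: cosine(reader, translations[t]),
                  reverse=True)

# Profiles over (math, humanities) familiarity, purely illustrative:
reader = [0.9, 0.1]
ranked = rank_translations(reader, {"math-heavy": [1.0, 0.0],
                                    "lay-audience": [0.0, 1.0]})
```

In a real system this similarity would only be one feature feeding the relevance metric, alongside the reader's own past ratings.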

It would be interesting if I could even select an expansive translation per category. Today I want to explain what bitcoin is to my grandmother; what would be the best way to do that? Surely translators who are experts with this kind of audience would know how to do it much better than me. I would select a specific category and see several expansive translations sorted by relevance (a metric that considers inferential distances, similar characteristics between the one who writes and the one who reads, etc.).

Each person reading an expansive translation could also assign a score to the post. I can imagine the many problems such platforms could introduce, but having a diversity of expansive translations would help a lot, and I would certainly use it often. For example, a market I would certainly pay to be part of is one of expansive translations of scientific articles. By hovering the mouse over a paragraph of an article, a pop-up could appear indicating that there were 8 translators with 8 different expansive translations of the same paragraph. I could click on a (+) and then select the expansive translations I would like to read.

Certainly each translator can elaborate the ideas of that paragraph in different styles, considering different inferential distances from the reader, etc. Suppose I read three of the eight expansive translations. I could select which one pleased me the most. Then we could use machine learning to train a system to predict which kind of expansive translation, out of a set, I would identify with the most.

Maybe we could also enable optional microtransactions for good expansive translators. E.g., I select the best expansive translation and pay a few cents or microcents, as simply as clicking a like button in the corner of each expansive translation. This way we could provide benefits and incentives for expansive translators to produce the best translations they can, as they could be rewarded in status and financially by anyone.

I can see a lot of ways we could monetize this system, so we could get more money to put into research and improve the system even more. Directly rewarding good translators is one idea to ensure we don’t lose the best candidates. I will stop my babble here, but there is a lot more I could say about this topic. A very interesting topic, ozziegooen. Also, I believe I could program this system myself. But let me know what you think.

• Welcome Saffron!

• Most babble that seems to be “predictions” is actually not predictions and, as pointed out by Ericf, does not reflect the internal confidence of the speaker. Sometimes I hear “I am completely sure my favorite team is going to win the championship”, although it is clear that this is not a prediction; it is the person’s way of saying “I really would like this outcome to happen, and this is my way of signaling that”.

“He is not going to die” doesn’t mean “I predict with 90% confidence that he is not going to die” but rather “I wouldn’t like him to die, and even though the unknown real probability may be high, just accepting it may create that reality, so I will say he is not going to die and reality will follow my words, and that’s the power of words, as God said in the Bible”.

I really do see a lot of people talking about “the power of words”. They don’t try to hold truly accurate beliefs that predict results at specific timestamps, because merely uttering the words “may alter reality in a way they don’t like”, so they just pretend to be highly confident in possible good outcomes: “I am absolutely sure coronavirus will not be that bad”, but hey, “although it was very bad, I am sure everything is going to be fine”. Hey, I am sure we will handle the situation and there will still be some beds in the hospital for people. Why don’t these fucking words work? Your partner says: don’t give up, I am sure everything is going to be fine.

After all, if Freddy Forecaster says “70% probability” for events that happen only 60% of the time, I know to correct Freddy’s forecast in my mind: when he says 70%, I anticipate that it will actually happen only 60% of the time, and I would bet accordingly. So if Peter Pundit says something “certainly” will happen 100 times, and we see 55 of those events actually happen, then the next time he says something “certainly” will happen, I would be willing to bet as if his words suggested a 55% probability.
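That correction can be computed mechanically from a track record. A minimal sketch (function names are mine):

```python
# Learn what a forecaster's stated confidence empirically means:
# map each stated probability to the observed frequency of the
# events they attached it to.
from collections import defaultdict

def empirical_calibration(record: list[tuple[float, bool]]) -> dict[float, float]:
    """record: (stated probability, did it happen) pairs.
    Returns stated probability -> observed frequency."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for stated, happened in record:
        totals[stated] += 1
        hits[stated] += happened
    return {p: hits[p] / totals[p] for p in totals}

# Freddy says "70%" but such events happen only 60% of the time:
record = [(0.7, True)] * 6 + [(0.7, False)] * 4
corrected = empirical_calibration(record)[0.7]  # 0.6
```

Reading Peter Pundit’s “certainly” as a stated 1.0 and feeding in his 55/100 record gives back 0.55 the same way.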

I agree with you that we should try our best to give our best estimates, state our confidence in those estimates, and also create a historical record of our predictions so everyone can calibrate their confidence in our statements. But, for real, every time I see a new pundit, it will probably be both the first and the last time we hear about him. It is hard to have any history of his predictions. It would be very hard to find 100 predictions registered on a platform and count how many he got right. And even if such a platform existed with all historical predictions, it could be gamed in certain ways: e.g., it is easy to predict that the sun will come up tomorrow, and I will win every time I bet on this. After winning 100 of those bets, I try to predict the price of Tesla shares on the next day. Well, even if you used my history of easy random predictions to calibrate your confidence in my hard predictions, that wouldn’t help. I don’t know; for me it is just ABSOLUTELY hard to calibrate my confidence in a pundit’s statements even if he had put “70%” in the middle of the sentence. Probably he doesn’t even know what he is talking about. And probably we won’t ever have any opportunity to make him pay rent in anticipated experiences, nor to check any previous hard predictions.

• Very interesting. When I stopped teaching in person due to the pandemic, I started researching the best platform for teaching online. When I saw this post comparing the options, and then watched YouTube videos of each one in use, I became absolutely religious after testing Mozilla Hubs, and started shouting in all directions that this thing is completely awesome and I was all in. Today, I use this platform every week. The students love it! And now I’ve discovered LessWrong is using it too. It couldn’t be any different.

“DO YOU CHOOSE TO NULL-BOX EVEN WHEN THERE ARE TEN DELICIOUS HUNDRED-DOLLAR BILLS JUST WAITING TO BE PURSUED BY YOU?”

“Yes”

“AS A TRICKSTER GOD, I WILL REWARD YOU NOW WITH TWO MILLION DOLLARS: ONE FOR EACH BOX YOU CHOSE NOT TO OPEN. THE TRICK IS THAT I NEVER PRESENTED YOU WITH THE BEST OPTION AVAILABLE, BUT YOU STILL GOT THE IDEAL SOLUTION. ALTHOUGH IT IS NOT TRUE THAT I WOULD IMMEDIATELY TERMINATE ALL SIMULATIONS OF YOUR COPIES HAD THEY CHOSEN OTHER OPTIONS, AS SIMULATIONS ARE VERY CHEAP TO RUN FOREVER”

The statue became motionless again. Maxwell donated the prize to the priests who had promised him salvation, for they had always been right.

• Problem: Automatic planting

Action space: the agent obtains data from the sensors and decides how to use the actuators (temperature modifiers, humidity, exposure to sunlight, other modifiers) to maximize specific crop characteristics.

The reward: the agent knows it is performing better when it minimizes the time needed for the plants to reach specific characteristics. For example, when trying to minimize the time required for three plants to reach a specific height of 0.2 m, a higher score would be attributed to the action policy that led the plant to grow faster through (0.02, 0.05, 0.10, 0.15, 0.20) m. Or take a watermelon plantation: the policy mapping conditions of temperature, humidity, etc., that led to the emergence of the largest watermelon (given a threshold) in the shortest time possible would reward the agent with higher scores.
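A toy version of this reward (the function name, the step penalty, and the episode framing are my own illustrative choices, not part of the proposal):

```python
# Higher reward for policies that reach the target plant height
# sooner: a fixed bonus for reaching the target, minus a small
# cost per time step taken.

def growth_reward(heights: list[float], target: float = 0.20,
                  step_penalty: float = 0.01) -> float:
    """heights: plant height (in meters) measured at each time step."""
    for step, h in enumerate(heights, start=1):
        if h >= target:
            return 1.0 - step_penalty * step
    return 0.0  # target never reached within the episode

fast = growth_reward([0.05, 0.12, 0.21])               # reaches target at step 3
slow = growth_reward([0.02, 0.05, 0.10, 0.15, 0.20])   # reaches target at step 5
```

The watermelon variant would be analogous, rewarding fruit mass over a threshold divided by the time taken to reach it.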

If it is possible to achieve high efficiency in food production using RL agents that control cheap sensors on a simple wooden box with cheap inputs (earth, seeds, water), we could mass-produce the boxes and distribute them, with the embedded agent and a few rules, to end users. Users of this system would get enough food to pay for the cost of the system itself. They could buy more boxes by selling the excess food, and they could share boxes with neighbours, providing a substantial positive impact on the world.

I really believe we should decentralize food production, and it would be easier with low-cost systems that automate practically the whole process, leaving only easy tasks for the user. People would get healthier foods, spend less money on food (leaving more money to invest in other needs), and develop fewer diseases associated with the consumption of highly industrialized products or products with high amounts of herbicides.

• One thing I tried was keeping tabs open indefinitely, closing them only when I had finished the work on them. Sometimes I have 40 tabs open, and I feel the pressure to “stop opening new tabs and finish those that are already open”. Then I click through them one by one, sometimes realizing that I have already done what needed to be done and closing them. Sometimes I keep postponing. After another check, I decide some tabs don’t add much and close them.

When you keep your tabs in mind but not in the browser (e.g. Chromium), you’ll eventually lose the state of mind you had. So I would say: write the thought down, even if it is just a summary, so you don’t lose what you were thinking. Write it down, and check it later.

• Exactly. I am from Brazil and I can’t see any option that fits my race.

According to Wikipedia: “According to the American census, the Hispanic or Latin category would not include Brazilians or Americans with origins in Brazil, [3] [4] [8] as it is specific to people of ‘Spanish culture or origin’. [3] [4] Technically speaking, people from Portugal or of Portuguese origin are called Lusitanians. In Portugal, the term ‘Hispanic’ refers to something related to ancient Hispania, Spain, or the Spanish language and culture. [9] The common modern term for identifying both Portuguese and Spanish cultures under a single nomenclature is ‘Iberian’, and the term referring to cultures derived from both countries in the Americas is ‘Ibero-American’. These designations can be mutually recognized by people in Portugal and Brazil, in contrast to ‘Hispanic’, which is totally devoid of any self-identification in those countries and, quite the contrary, is used to mark a clear distinction between the culture of neighboring Spanish-speaking countries (Hispanics) and that of Portuguese-speaking countries.”