LW2.0: Technology Platform for Intellectual Progress

This post presents one lens I (Ruby) use to think about LessWrong 2.0 and what we’re trying to accomplish. While it does not capture everything important, it does capture much and explains a little how our disparate-seeming projects can combine into a single coherent vision.

I describe a complementary lens in LessWrong 2.0: Community, Culture, & Intellectual Progress.

(While the stated purpose of LessWrong is to be a place to learn and apply rationality, as with any minimally specified goal, we could go about pursuing this goal in multiple ways. In practice, other members of the LessWrong team and I care about intellectual progress, truth, existential risk, and the far future, and these broader goals drive our visions and choices for LessWrong.)

Our greatest challenges are intellectual problems

It is an understatement to say that we can aspire to the world being better than it is today. There are myriad forms of unnecessary suffering in the world, there are the utopias we could inhabit, and, directly, there is an all too real chance that our civilization might wipe itself out in the next few years.

Whether we do it in a state of terror, excitement, or both, there is work to be done. We can improve the odds of good outcomes. Yet the challenge facing us isn’t rolling up our sleeves and “putting in hard work” (we’re motivated); it’s that we need to figure out what exactly it is we need to do. Our problems are not of doing, but of knowing.

Which interventions for global poverty are most cost-effective? What is the likelihood of a deadly pandemic or nuclear war? How does one influence government? What policies should we want governments to adopt? Is it better for me to earn-to-give or do direct work? How do we have a healthy, functioning community? How do we cooperate in groups small and large? How does one build a safe AGI? Will AGI takeoffs be fast or slow? How do we think and reason better? Which questions are the most important to answer? And so on, and so on, and so on.

One of our greatest challenges is answering the questions before us. One of our greatest needs is to make more intellectual progress: to understand the world, to understand ourselves, to know how to think, and to figure out what is true.

Technologies for intellectual progress

While humans have been improving our understanding of the world for hundreds of thousands of years, our rate of progress has increased each time we invented new technologies which facilitate even more intellectual progress.

Such technologies for intellectual progress include: speech, writing, libraries, encyclopedias, microscopes, lectures, conferences, schools, universities, the scientific method, statistics, peer review, Double Crux, the invention of logic, the identification of logical fallacies, whiteboards and blackboards, flow charts, research funding structures, spreadsheets, typewriters, the Internet, search engines, blogging, Wikipedia, StackExchange and Quora, collaborative editing such as Google Docs, the field of heuristics and biases, epistemology, rationality, and so on.

I am using the term technology broadly to include all things which did not exist naturally and which we humans designed and implemented to serve a function, including ideas, techniques, and social structures. What unifies the above list is that each item helps us to organize or share our knowledge. By building on the ideas of others and thinking collectively, we accomplish far more than we ever could alone. Each of the above, perhaps among other things, has been a technology which increased humanity’s rate of intellectual progress.

LessWrong as a technology platform for intellectual progress

I see absolutely no reason to think that the above list of technologies for intellectual progress is anywhere near complete. There may even be relatively low-hanging fruit lying around that hasn’t been picked up since the invention of the Internet a mere thirty years ago. For example, the academic journal system, while now online, is mostly a digitized form of the pre-Internet system, not taking advantage of all the new properties of the Internet, such as effectively free and instantaneous distribution of material.

My understanding of the vision for LessWrong 2.0 is that we are a team who builds new technologies for intellectual progress and that LessWrong 2.0 is the technology platform upon which we can build these technologies.

Which technologies might LessWrong 2.0 build?

Having stated that the vision for LessWrong 2.0 is to be a technology platform, it’s worth listing examples of things we might build (or already are building).

Open Questions Research Platform

In December 2018, we launched a beta of our Open Questions platform. Click to see current questions.

  • Make the goal of answering important questions explicit on LessWrong.

  • Provide affordances for asking questions, for knowing which questions others have, and for providing answers.

  • Provide infrastructure to coordinate on what the most important problems are.

  • Provide incentives to spend days, weeks, or months researching answers to hard questions.

  • Lower the barriers to contributing to the community’s research output, e.g. you don’t have to be hired by an organization to contribute.

  • Build a communal repository of knowledge upon which everyone can build.

  • Apply our community’s interests, techniques, culture, and truth-seeking commitment to produce uniquely high-quality research on super tricky problems.

In the opening of this document, I asserted that humanity’s greatest challenges are intellectual problems, that is, knowledge we need to build and questions we need to answer. It makes sense that we should make explicit that we want to ask, prioritize, and answer important questions on the site. And further, to make it explicit that we aim to build a community of people who work to answer these important questions.

The core functionality of LessWrong to date has been people making posts and commenting on them. Authors write posts at the intersection of their own knowledge and the community’s overall interests; perhaps there will be something of a theme at times. We haven’t had an obvious affordance for specifically requesting that someone else generate or share content about a question you have. We haven’t had a way for people to easily see which questions others have and which they could help with. And overall we haven’t had a way for the community to coordinate on which questions are most important.

As part of the platform, we can build new incentive systems to make it worth people’s time to spend days or weeks researching answers to hard questions.

The platform could provide an accessible way for new people to start contributing to the community’s research agenda. Getting hired at a research org is very difficult; the platform could provide a pathway for far more people to contribute, especially if we combine it with a research talent training pipeline.

Because the platform is online and public, it would begin to build a repository of shared knowledge upon which others can continue to build. Humanity’s knowledge comes from our sharing knowledge and building upon each other’s work. The more we can do that, the better.

Last, Open Questions is a way to turn our community’s specialized interests (AI, AI safety, rationality, self-improvement, existential risk, etc.), our culture, techniques, and tools (Bayesian epistemology, quantitative mindset, statistical literacy, etc.), and our truth-seeking commitment into uniquely high-quality research on super tricky problems.

See this comment thread for a detailed list of reasons why it’s worth creating a new questions platform when others already exist.

Marketplace for Intellectual Labor

  • Standard benefits of a market: matches up people who want to hire work with people who want to perform it, thereby causing more valuable work to happen.

Possible advantages over Open Questions:

  • A marketplace is a standard thing people are used to, so it may be easier to drive adoption than for a very novel questions platform.

  • Potentially reduces uncertainty around payments/incentives.

  • Can help with trust.

  • Can provide more privacy (possibly a good thing, possibly not).

  • Less of a two-sided marketplace challenge.

  • Diversifies the range of work which can be traded, e.g. hiring, proofreading, lit-reviews, writing code.

A related idea to Open Questions (especially if people are paid for answers) is that of a general marketplace where people can sell and buy intellectual labor, including tasks like tutoring, proofreading essays, literature reviews, writing code, or full-blown research.

It might look like TaskRabbit or Craigslist, except specialized for intellectual labor. The idea is that this would cause more valuable work to happen than otherwise would, and progress to be made on important things.

A more detailed description of the Marketplace idea can be found in my document, Review of Q&A.

Talent Pipeline for Research

  • A requirement for intellectual research is people capable of doing it.

  • An adequate supply of skilled researchers is especially required for an Open Questions research platform.

  • LessWrong could potentially build expertise in doing good research and in training others to do it.

  • We could integrate our trainees into the Open Questions platform.

A requirement for intellectual progress is that there are people capable of doing it, so generally we want more people capable of doing good research.

It may especially be a requirement for the Open Questions Platform to succeed. One of my primary uncertainties about whether Open Questions can work is whether we will have enough people willing and able to conduct research to answer questions. This leads to the idea that LessWrong might want to set up a training pipeline that helps people who want to become good researchers train up. We could build up expertise both in good research process and in teaching that process to people. We can offer to integrate our trainees into the LessWrong Open Questions platform.

A research training pipeline might stretch the definition of “technology”, but I think it still counts, and it fits entirely within the overall frame of LessWrong. In LW2.0: Culture, Community, and Intellectual Progress, I list training as one of LessWrong’s core activities.

Collaborative Documents a la Google Docs

  • Communication technologies are powerful and important, causing more and different work to be accomplished than otherwise would be.

  • Google Docs represents a powerful new technology, and we could improve on it even further with our own version of collaborative documents.

  • I expect this to result in significant gains to research productivity.

There have been successive generations of technology which make the generation and communication of ideas easier. One lineage might be: writing, the typewriter, Microsoft Word, email, Google Docs. Each has made communication easier and more efficient. Speed, legibility, ease of distribution, and ease of editing have made each successive technology more powerful.

I would argue that sometimes the efficiency gains with these technologies are so significant that they enable qualitatively new ways to communicate.

With Google Docs, multiple collaborators can access the same document (synchronously or asynchronously), the document is kept in sync, collaborators can make or suggest edits, and they can comment directly on specific text. Consider how this was not really possible at all with Microsoft Word plus email attachments. You might at most send a document to one person for feedback; if you’d made edits in the meantime, you’d have to merge them with their revisions. If you sent the document to two people via email attachment, they wouldn’t see each other’s feedback. And so on.
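To make the contrast concrete, here is a minimal sketch of the kind of shared state such a collaborative editor keeps in sync for every collaborator: one document body, suggested edits, and comments anchored to specific spans of text. This is my own illustration, not LessWrong’s or Google’s actual data model, and all the names in it are hypothetical.

```typescript
// Minimal illustrative data model for a collaborative document.
// A sketch only; every name here is hypothetical, not a real schema.

interface TextAnchor {
  start: number; // character offset where the referenced span begins
  end: number;   // character offset where the referenced span ends
}

interface InlineComment {
  id: string;
  author: string;
  anchor: TextAnchor;       // the specific text being discussed
  body: string;
  replies: InlineComment[]; // threaded discussion on the same span
}

interface SuggestedEdit {
  id: string;
  author: string;
  anchor: TextAnchor;  // the text the suggestion would replace
  replacement: string;
  accepted?: boolean;  // resolved by the document owner
}

interface CollaborativeDoc {
  id: string;
  title: string;
  body: string;               // the single, synchronized source of truth
  comments: InlineComment[];
  suggestions: SuggestedEdit[];
}
```

Because everyone reads and writes this single shared state, each collaborator sees the others’ comments and suggestions, which is exactly what emailing copies of a Word file around cannot provide.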

Google Docs, though we might take it for granted by now, was a significant improvement in how we can collaborate. It is generally useful, but it is especially useful to those doing generative intellectual work together, who can share, collaborate, and get feedback in ways not possible with previous technologies.

Yet as good as Google Docs is, it could be better. The small things add up. You can comment on any text, but it is difficult to have any substantial discussion in a comment chain due to length restrictions and how the comment chains are displayed. There isn’t a built-in comment section for the whole document, as opposed to specific text. Support for footnotes is limited. There isn’t support for LaTeX. This is only a starting list of optimizations which could make Google Docs an even better tool for research and general intellectual progress.

There could be further benefits from having this tool integrated with LessWrong: easy and immediate publishing of documents as posts, and access to a community of collaborators and feedback givers. By encouraging people to do their work on LessWrong, the tool could help us become the archive and repository for the community’s intellectual output.

Prediction Markets

  • Making predictions is a core rationality skill.

  • Prediction markets aggregate individual opinions to get an even better overall prediction.

  • LessWrong could build its own prediction market or partner with an existing project.

First, making good predictions is a core rationality skill, and one of the best ways to ensure you make good predictions is to have something riding on them, e.g. a bet. Second, aggregating the (financially-backed) predictions of multiple people is often an excellent way to generate overall predictions that are better than those of any individual.
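As a toy illustration of the aggregation point, here is one simple way to pool several forecasters’ probabilities by averaging them in log-odds space, optionally weighting each forecaster (say, by stake or track record). This is my own sketch with hypothetical function names; real prediction markets aggregate through prices set by trading rather than through an explicit formula like this.

```typescript
// Toy sketch: pool individual probability forecasts by averaging in
// log-odds space. Illustrative only; not how any real prediction
// market actually aggregates (markets aggregate via trading prices).

function toLogOdds(p: number): number {
  return Math.log(p / (1 - p));
}

function fromLogOdds(l: number): number {
  return 1 / (1 + Math.exp(-l));
}

// Pool forecasts, optionally weighting each forecaster.
function poolForecasts(probs: number[], weights?: number[]): number {
  const w = weights ?? probs.map(() => 1);
  const totalWeight = w.reduce((a, b) => a + b, 0);
  const meanLogOdds =
    probs.map((p, i) => w[i] * toLogOdds(p)).reduce((a, b) => a + b, 0) /
    totalWeight;
  return fromLogOdds(meanLogOdds);
}

// Three forecasters give 60%, 70%, and 85% to the same event:
console.log(poolForecasts([0.6, 0.7, 0.85]));           // ≈ 0.73
console.log(poolForecasts([0.6, 0.7, 0.85], [1, 1, 3])); // ≈ 0.78, weighting the third forecaster more
```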

Given the above, we could imagine that LessWrong, as a technology platform for intellectual progress, should be integrated with a prediction market and an associated community of forecasters. There have been many past attempts at prediction markets, some existing ones, and a few more nascent ones. I don’t know if LessWrong should set up its own new version, or perhaps seek to partner with an existing project.

I haven’t thought through this idea much, but it’s an idea the team has had.