Review of Q&A [LW2.0 internal document]

Context

1. This is the first in a series of internal LessWrong 2.0 team documents we are sharing publicly (with minimal editing) in an effort to help keep the community up to date with what we’re thinking about and working on.

2. Caveat! This is an internal document and does not represent any team consensus or conclusions; it was written by me (Ruby) alone and expresses my in-progress understanding and reasoning. To the extent that the models/arguments of the other team members are included here, they’ve been filtered through me and aren’t necessarily captured with high fidelity or strong endorsement. Since it was written on March 17th, it isn’t even up to date with my own thinking.

3. I, Ruby (Ruben Bloom), am trialling with the LessWrong 2.0 team in a generalist/product/analytics capacity. Most of my work so far has been trying to help evaluate the hypothesis that Q&A is a feasible mechanism to achieve intellectual progress at scale. I’ve been talking to researchers; thinking about use-cases, personas, and jobs to be done; and examining the data so far.

Epistemic status: this is one of the earlier documents I wrote in thinking about Q&A, and my thinking has developed a lot since, especially since interviewing multiple researchers across EA orgs. Subsequent documents (to be published soon) have much more developed thoughts.

In particular, subsequent docs have a much better analysis of the uncertainties and challenges of making Q&A work than this one. This document is worth reading in addition to them mostly as an introduction to thinking about the different kinds of questions, our goals, and how things are going so far.

Originally written March 17th

I’ve been thinking a lot about Q&A the past week since it’s a major priority for the team right now. This doc contains a dump of many of my thoughts. In thinking about Q&A, it also occurred to me that an actual marketplace for intellectual labor could do a lot of good and is strong in a number of places where Q&A is weak. This document also describes that vision and why I think it might be a good idea.

1. Observations of Q&A so Far

First off, pulling some stats from my analytics report (numbers as of 2019-03-11):

How long has Q&A been live?
Since 2018-12-07. Just about 3 months as of 2019-03-11 (94 days).
How many questions?
94 questions published + 20 drafts.
How many answers?
191 answers, 171 direct comments on answers.
How many people asking questions?
59 distinct usernames posted questions (including the LW team).
How many people answering questions?
117 unique usernames posted answers.
172 unique usernames answered or posted a direct comment on a question.
How many people engaging overall?
Including questions, answers, and comments, 226 usernames have actively engaged with Q&A.
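
For concreteness, here is a minimal sketch of how counts like these can be derived from raw events. The `QAItem` shape and field names are invented for illustration; they are not LW2’s actual schema.

```typescript
// Illustrative only: a flattened event record, not LW2's real schema.
interface QAItem {
  kind: "question" | "answer" | "comment";
  username: string;
}

function engagementStats(items: QAItem[]) {
  const usernamesBy = (kind: QAItem["kind"]) =>
    new Set(items.filter((i) => i.kind === kind).map((i) => i.username));

  const askers = usernamesBy("question");
  const answerers = usernamesBy("answer");
  const commenters = usernamesBy("comment");

  return {
    questions: items.filter((i) => i.kind === "question").length,
    answers: items.filter((i) => i.kind === "answer").length,
    distinctAskers: askers.size,
    distinctAnswerers: answerers.size,
    // "Engaging overall" = asked, answered, or commented at least once.
    engagedOverall: new Set([...askers, ...answerers, ...commenters]).size,
  };
}
```

The key choice is counting distinct usernames rather than raw events, which is why the “engaging overall” figure (226) is smaller than the sum of the per-activity counts.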

Note: “viewCount” is a little unreliable on LW2 (I think it might double-count sometimes); “num_distinct_viewers” refers only to logged-in viewers.

Spreadsheet of Questions as of 2019-03-08

List of Q&A Uncertainties

See Appendix for all my workings on the Whiteboard

Q&A might be a single feature/product in the UI and in the codebase, but there are multiple distinct uses for the single feature: different people trying to accomplish different things. Going over the questions, I see rough clusters, listed pretty much in order of descending prevalence:

  1. Asking for recommendations, feedback, and personal experience.

    1. “Which self-help has helped you?”, “Is there a way to hire academics hourly?”

  2. Asking for personal advice.

    1. “What’s the best way to improve my English pronunciation?”

  3. Asking conceptual/philosophy/ideology/models/theory type questions.

    1. “What are the components of intellectual honesty?”

  4. Asking for opinions.

    1. “How does OpenAI’s language model affect our AI timeline estimates?”

  5. Asking for help studying a topic.

    1. “What are some concrete problems about logical counterfactuals?”, “Is the human brain a valid choice of Universal Turing Machine . . . ?”

  6. Asking general research/lit-review-ish questions (not sure how to name this cluster).

    1. “Does anti-malaria charity destroy the local anti-malaria industry?”, “Understanding Information Cascades”, “How large is the fallout area of the biggest cobalt bomb we can build?”

  7. Asking open research-type questions (not sure how to name this cluster either).

    1. “When is CDT Dutch-Bookable?”, “How does Gradient Descent Interact with Goodhart?”, “Understanding Information Cascades”

These questions are roughly ordered from “high prevalence + easier to answer” to “low prevalence + harder to answer”.

A few things stick out. I know the team has noticed these already, but I want to list them here anyway as part of the bigger argument. The questions which are most prevalent are those which are:

  1. relatively quick to ask, e.g. a few paragraphs at most to write.

  2. answerable by a [relatively] large population of qualified people.

  3. the kind of questions people are used to asking elsewhere, e.g. the CFAR Alumni Mailing List, Facebook, Reddit, LessWrong (posts and comments), Quora, StackExchange.

  4. the kinds of questions for which there are existing forums, as above.

  5. answerable primarily from the answerer’s existing knowledge, e.g. people who answer advanced math problems using their existing understanding.

  6. answerable in a single session at one’s computer, often without even needing to open another browser tab.

What is apparent is that questions which break from the above trends are really of a very different kind: questions which can be hard to explain (taking a long time to write up), require skill/expertise to answer, can’t be answered purely from an answerer’s existing knowledge (unless by fluke they’re expert in a niche area), and require more effort than simply typing up an answer or explanation. Both asking and answering such questions is a very different activity from asking and answering the other kind.

What we see is that LessWrong’s Q&A is doing very well with the first kind: the kind of questions people are already used to asking and answering elsewhere. There’s been roughly a question per day for the three months Q&A has been live, but the overwhelming majority are requests for recommendations and advice, opinions, and philosophical discussion. Only a small minority (no more than a couple dozen) are solid research-y questions.

There’ve been a few of the “help me understand”/confusions type you might see on StackExchange (which I think are really good), and a few pure research-y type questions, but around half of those were asked by the LessWrong team and friends. That’s around 10% of questions, really on the order of 10 questions or fewer in the last three months by my count.

I think these latter questions are more the sort we’d judge to be “actual serious intellectual progress”, or at least, those are the questions we’d love to see people asking more. They’re the kinds of questions that predominantly the LessWrong team is creating rather than users.

2. Our vision for Q&A is getting people to do a new and effortful thing. That’s hard.

The previous section can be summarized as follows:

  • Q&A has been getting used since it was launched, but primarily by people doing things they were already used to doing elsewhere, and things which are relatively low effort.

  • The vision for Q&A is scaling up intellectual progress on important problems: doing real research; people taking their large questions and carving off pieces; people going off and making their own contributions of research (without hiring and all that overhead).

The thing about the LW vision for Q&A is that it means getting people to do a new and different thing from what they’re used to, plus that thing is way more effort. It’s not impossible, but it is hard.

It’s not a new and better way to do something they’re already doing; it’s a new thing they haven’t even dreamt of. Moreover, it looks like something else which they are used to, e.g. Quora, StackExchange, Facebook, so that’s how they use it and how they expect others to use it by default. The term “category creation” comes to mind, if that means anything. AirBnB was a new category. LessWrong is trying to create a new category, but it looks like existing categories.

3. Bounties: the potential solution and its challenges

The most straightforward way to get people to expend effort is to pay them. Or create the possibility of payment. Hence bounties. Done right, I think bounties could work, but I think it’s going to be a tough uphill battle to implement them in a way which does work.

[Edited: Raemon has asked a question about incentives/bounties for answering “hard questions.” It fits into the paradigm here, and we’d really value further answers.]

4. Challenges facing bounties (and Q&A in general)

  • Even if we have a system which works well, it’s going to be new and different, and we’re going to have to work to get users to understand it and adopt it. That means a lot of user education and training.

    • The closest analogue I can think of is Kaggle competitions, but they’re still pretty different: clear objective evaluation, you build transparently valuable skills, it feels good to get a high rank even if you don’t win, and there are career rewards just for participating and doing relatively well.

  • Uncertainty around payment. People might do a lot of work for money, but the incentive is much weaker if you’re unsure whether you’ll get paid. People decide whether it’s worth it based on expected value, not the absolute number of dollars pledged (see the worked sketch after this list).

    • And you might not be bothered to read and understand complicated bounty rules.

  • People with questions and placing bounties might not trust the research quality of whichever random person happens to answer.

    • A mathematical review can be checked, but it’s harder to do that with lit reviews and generalist research.

    • Evaluating research quality might require a significant fraction of the effort required to do the research in the first place.

  • People with questions might usually have a deeper, vaguer, more general question they’re trying to answer. They want the actual thing answered, not a particular sub-question which may or may not be answered. Eli spoke of desiring that someone would become an expert in a topic so he could then ask them lots of questions about it.

  • With Q&A, it’s challenging for the asker and answerer to have a good feedback loop while the answerer is working. It would seem to be harder for the answerer to ask clarifying questions and share intermediate results (and thereby get feedback), and harder for the asker to ask further follow-up questions. This gets worse once there are multiple people working on the question, all potentially needing further time and attention from the asker in order to do a good job.

  • Q&A (even with bounties) faces the two-sided marketplace problem. Question askers aren’t going to bother writing up large and difficult-to-explain questions if they don’t expect to get answers. (Even less so if they try once and fail to get a response worth the effort.) Potential answerers aren’t going to make it a habit to do research for a platform which doesn’t have many real research questions (mostly stuff about freeze-dried mussel powder and English pronunciation and whatnot).
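
To make the expected-value point concrete, here is a toy calculation; all the numbers are invented and don’t reflect any actual LW bounty.

```typescript
// Toy EV calculation for a would-be bounty answerer.
// All numbers are invented for illustration.
const bounty = 500;      // dollars pledged on the question
const pPaid = 0.3;       // answerer's guess at actually being paid
const hours = 10;        // time to research and write an answer
const hourlyValue = 40;  // answerer's opportunity cost per hour

const ev = pPaid * bounty - hours * hourlyValue; // 150 - 400 = -250

// Even a substantial $500 bounty is negative-EV under these guesses:
// either the payout probability has to rise (clear rules, trusted
// judging) or the bounty has to be several times larger.
console.log(`EV of attempting the bounty: $${ev}`);
```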

5. What would it take to get it to work

Thinking about the challenges, it seems it could be made to work if the following happens:

  • We successfully get both askers and answerers to understand that LW’s Q&A is something distinctly different from the other things they’re used to.

    • UI changes, tags, renaming things, etc. might all help, plus explanatory posts and hands-on training with people.

    • Making becoming a question answerer a high-status thing would certainly help. If Luke or Eliezer or Nate were seen using the platform, that might give it a lot of endorsement/legitimacy.

  • We successfully incentivize question answerers to expend the necessary effort to answer research questions.

    • This is partly through monetary reward, but might also include having them believe that they’re actually helping on something important and actually getting status. (Weekly or monthly prizes for best answers, separate from bounties, might be a way to do that. Or heck, a leaderboard for Q&A contributions adjusted by karma; one possible version is sketched after this list.)

  • We get question askers to actually believe they can get real, serious progress on questions through Q&A.

    • Easiest to do once we have some examples. Proof of concept goes a long way. Get a few wins and then talk about them with researchers, showing them that it works.

      • It’s getting those first few examples which is going to be hardest. As they say, the first ten clients always require hustle.

  • We ensure that question askers have a positive-ROI experience for all the time spent writing up questions, reading the responses, etc., etc.

  • We somehow address concerns that the research might not be reliable because you don’t fully trust the research ability of people on the internet, especially not when you’re trying to make important decisions on the basis of that research.
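
On the leaderboard idea above: here is one possible reading of “adjusted by karma”, ranking answerers by karma earned on answers rather than by answer count, so prolific but low-value answering doesn’t dominate. The `Answer` shape is invented for illustration, not LW2’s actual schema.

```typescript
// One possible karma-adjusted Q&A leaderboard; the Answer shape
// is invented for illustration, not LW2's real schema.
interface Answer {
  username: string;
  karma: number;
}

function leaderboard(answers: Answer[], topN = 10): [string, number][] {
  const totals = new Map<string, number>();
  for (const a of answers) {
    totals.set(a.username, (totals.get(a.username) ?? 0) + a.karma);
  }
  // Sort by total karma earned on answers, descending.
  return [...totals.entries()].sort((x, y) => y[1] - x[1]).slice(0, topN);
}
```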

Even then, I think getting it to work will depend on understanding which research questions can be well handled by this kind of system.

6. My Uncertainties/Questions

Much of what I’m saying here is coming from thinking hard about Q&A for several days, using models from startups in general, and some limited user interaction. I could just be wrong about several of the assumptions being used above.

Some of the key questions I want answered to be more sure of my models are:

  • What are the beliefs/predictions/anticipations about Q&A of our ideal question askers?

    • In particular, do they think it could actually help them with their work? If not, why not? If yes, how?

    • Is trust a real issue for them? Are they worried about research quality?

    • Do they have “discrete” questions they can ask, or is it usually some deeper topic they want someone to spend several days becoming an expert on?

  • What is the willingness of ideal question answerers to answer questions on Q&A?

    • Which incentives matter to them (impact, status, money)? How well do they view the current Q&A as meeting them?

      • Do they feel like they’re actually doing valuable work in answering questions?

There are other questions, but that’s a starter.

7. Alternative Idea: Marketplace for Intellectual Labor

Once you’re talking about paying out bounties for people researching answers, you’re most of the way towards just outright hiring people to do work. A marketplace. TaskRabbit/Craigslist for intellectual labor. I can see that being a good idea.

How it would work

  • People wanting to be hired have “profiles” on LW which include anything relevant to their ability to do intellectual labor: links to their CV, LinkedIn, questions answered on Q&A, karma, etc.

    • The profiles may be public, private, or semi-private.

  • People seeking to hire intellectual labor can create “tasks”.

  • Work can be assigned in two directions (sketched in code after this list):

    • Hirers can post their tasks publicly and then people bid/offer to work on the task.

    • Hirers can browse the list of people who have created profiles and reach out to people they’re interested in hiring, without ever needing to make their task or research public.
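
To make the mechanics concrete, here is a minimal sketch of the data model this implies. Every type and field name is an assumption for illustration, not a spec.

```typescript
// Illustrative sketch of the marketplace data model; all names are
// assumptions, not a spec.
type Visibility = "public" | "semi-private" | "private";

interface Profile {
  username: string;
  visibility: Visibility;
  links: string[]; // CV, LinkedIn, answered questions, etc.
}

interface Task {
  hirer: string;
  description: string;
  isPublic: boolean; // posted publicly, or shared only with invitees
  bids: string[];    // usernames offering to work on a public task
  invites: string[]; // usernames the hirer has reached out to
}

// The two assignment directions described above:
function bidOnTask(task: Task, worker: Profile): void {
  if (task.isPublic) task.bids.push(worker.username);
}

function inviteWorker(task: Task, worker: Profile): void {
  task.invites.push(worker.username); // the task itself can stay private
}
```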

Why this is a good idea

  • A marketplace is a really standard thing; people already have models and expectations for how marketplaces work and how to interact with them. In this case, it’s just a marketplace for a particular kind of thing; otherwise the mechanics are what people are used to. Say “TaskRabbit for research/intellectual labor” and I bet people will get it.

    • Also, a marketplace where you work for money is probably the right genre for working hard for several days and deterministically getting paid. The thing about Q&A is that it’s somewhat trying to get people to do serious work via something which looks a lot like the things they do recreationally.

  • It reduces uncertainty around payment/incentives. The two parties negotiate a contract (perhaps payment happens via LW) and the worker knows that they will get paid, as much as in any real job they might be hired for.

  • It solves the trust thing. The hirers get to select who they trust with their research questions; it’s not open to anyone. The profiles are helpful for this, as a careful hirer can go through someone’s qualifications and past work to see if they trust them.

    • LessWrong could even create a “pipeline” of skill around the marketplace for intellectual labor. People start with simple, low-trust tasks, and as they prove themselves and get good reviews, they become more attractive to hire.

  • It addresses privacy. You might not be willing to place your research questions on the public internet, but you might be willing to trust them to a single, carefully vetted person whom you hire.

  • It addresses the two-sided marketplace challenge. The format allows you to build each side somewhat asynchronously.

    • Find a few people and convince them to create a few work tasks they’d like done but that aren’t urgent (approximately questions). Once they’re up there, you can say “yes, we’ve got some tasks that Paul Christiano would like answers on.”

    • Find people who would be interested in the right kind of work and get them to create profiles. They don’t have to commit to doing any particular work; they’re just there in case someone else wants to reach out to hire them. (One could imagine making it behave like Reciprocity.)

  • It lets you hire for things like tutoring.

    • Eli mentioned how much he values 1:1 interaction and tutoring. When he’s got a confusion, he seeks a tutor. That’s not something Q&A really supports, but a marketplace for intellectual labor could.

    • It could be an efficient way for people looking for knowledge from an expert to find one who is available and at the right price.

      • I’ve seen spreadsheets over the years of EAs registering their names, interests, and skills. I don’t know if people ever used them, but it does seem neat if there were just a locked-in service which was a directory of experts on various topics that you could pay to access.

  • [Added] It diversifies the range of work you can hire for.

    • It seems good if people doing research work can hire people to format LaTeX, proofread, edit, and generally handle tasks, freeing them up for more core research.

  • [Added] It doesn’t limit the medium of the work to the LessWrong platform. Once an arrangement is made, the hirer and worker are free to work in person, or via Google Docs, Skype, or whatever else is most convenient and native to their work. In contrast, Q&A makes the format/UI experience of the platform a bottleneck on the communication of research.

    • Needing to write up results formally in a way that is suitable for the public is also a costly step that is avoided in a 1-to-1 work arrangement.

    • It does seem that CKEditor could dovetail really nicely with collaboration via the marketplace, assuming people are otherwise using Google Docs. Once the research content is already on LW, we can streamline the process of making it public.

      • Research being conducted in Google Docs and then polished and shared might be a much more natural flow than needing people to conduct research in whatever tools they use and then translate it into the format of comments/answers.

        • Another idea: building things like citation management and other features into the LW Google Docs equivalent, and building a generally great research environment.

You could build up several dozen or hundred worker (laborer?) profiles before you approach highly acclaimed researchers and say “hey, we’ve got a list of people willing to offer intellectual labor; interested in taking a look?” Or “we’ve got tasks from X, Y, Z; would you like to look and see if you can help?”

[redacted]: “I’d help [highly respected person] with pretty much whatever.” Right now [highly respected person] has no easy way to reach out to people who might be able to do work for them. I’m sure X and Y [redacted] wouldn’t mind a better way for people to locate their services.

In the earlier stages, LessWrong could do a bit of matchmaking, using our knowledge and connections to link up suitable people to tasks.

Existing services like this (where the platform is kind of a matchmaker), such as TaskRabbit and Handy, struggle because people use the platform initially to find someone, e.g. a house cleaner, but then bypass the middleman to book subsequent services. But we’re not trying to make money off of this; we don’t need to be in the middle. If a task happens because of the LW marketplace and then two people have an ongoing work relationship, that is fantastic.

Crazy places where this leads

You could imagine this ending up with LessWrong playing the role of some meta-hirer/recruiting agency type thing. People create profiles, upload all kinds of info, get interviewed, and then they are rated and ranked within the system. They then get matched with suitable tasks. Possibly only 5-10% of the entire pool ever gets work, but it’s more surface area on the hiring problem within EA.

80k might offer career advice, but they’re not a recruiting agency and they don’t place people.

Why it might not be that great (uncertainties)

It might turn out that all the challenges of hiring people generally apply even when hiring just for more limited tasks, e.g. trusting them to do a good job. If it’s too much hassle to vet all the profiles vying to work on your task, learn how to interact with a new person around research, etc., then people won’t do it.

If it turns out that it is really hard to discretize intellectual work, then the marketplace idea is going to face the same challenges as Q&A. Both would require a solution of the same kind.

I’m sure there’s a lot more to explore here. I’ve only spent a couple of hours thinking about this as of 3/17.

Q&A + Marketplace: Synergy

I think there could be some good synergies here, ways in which each blends into and supports the other. Something I can imagine is that there’s a “discount” on intellectual labor hired if those engaged in the work allow it to be made public on LW. The work done through the marketplace gets “imported” as a Q&A thread where further people can come along, comment, and provide feedback.

Or: someone is answering your question and you like what they’ve said, but you want more. You could issue an “invite” to hire them to work more on your task. Here you’d get the benefits of a publicly posted question anyone can work on, plus the benefits of a dedicated person you’re paying and working closely with. This person, if they become an expert in the topic, could even begin managing the question thread, freeing up the important person who asked the question to begin with.

8. Appendix: Q&A Whiteboard Workings