Disincentives for participating on LW/AF

I was at a research retreat recently with many AI alignment researchers, and found that the vast majority of them do not participate (post or comment) on LW/AF, or participate to a much lesser extent than I would prefer. It seems important to bring this up and talk about whether we can do something about it. Unfortunately I didn't get a chance to ask them why that is the case, as there were other things to talk about, so I'm going to have to speculate here based on my personal experiences and folk psychology. (Perhaps the LW team could conduct a survey and get a better picture than this.)

  • Criticism that feels overly harsh or is otherwise psychologically unpleasant

  • Downvotes

  • Not getting as many upvotes as one feels one deserves

  • Not getting enough engagement

  • More adversarial (zero-sum) nature of public discussion / preferring private discussions for their more collaborative nature

  • Feeling like one has "lost" a debate when the other person gets more upvotes

  • More effort needed to write comments than to talk to people IRL

  • Not real-time / time lag between replies

  • Feeling ignored when someone stops responding

  • Others?

  • ETA: Potentially leaving a public record of being wrong

One meta problem is that different people have different sensitivities to these disincentives, and having enough disincentives to filter out low-quality content from people with low sensitivities necessarily means some potential high-quality content from people with high sensitivities is also kept out. But it seems like there are still some things worth doing or experimenting with. For example:

  • Support for real-time collaborative discussions that are subsequently posted and voted upon as one unit (with votes going equally to both participants)

  • Disabling downvotes on AF

  • Having more indications/reminders of how much posting to LW/AF benefits the individual posters and the wider community, in terms of making intellectual progress and spreading good ideas. I'm not sure what form this could take, but maybe things like an indication of how many times a post has been read.

  • My previous feature suggestion to help with the "feeling ignored" problem

  • Being less critical of new users and engaging more positively with them

There's a separate issue that some people don't read LW/AF as much as I would prefer, but I have much less insight into what is going on there.

On a tangentially related topic, is LW making any preparations (such as thinking about what to do) for a seemingly not-too-distant future in which automated opinion influencers are widely available as hirable services? I'm imagining some AI that you can hire to scan internet discussion forums and make posts or replies in order to shift readers' beliefs/values in some specified direction. This might be very crude at the beginning but could already greatly degrade the experience of participating on public discussion forums.