Of course, LW is itself an attempt at collective rationality.
In particular, it seems like it is a remarkably unexamined, unplanned attempt. Surely we’ve learned some ways to improve it. Surely there are better approaches out there than “hey, Reddit seems to work ok, let’s modify a couple things, call it good, and leave it alone for a while”.
Not that I know how to improve it. Predictably, I have a few complaints and a few minor tweaks to suggest, but I’d really prefer a more evidence-based approach than that. Actually, I don’t even really know what process I would advocate for improving LW, let alone what the actual improvements would be that would come from that process.
There is plenty of talk, less data, and only very tiny amounts of tested changes. Surely the rationalist approach to solving a problem like this should involve empirical examination, not just armchair discussions.
Definitely agreed.
> In particular, it seems like it is a remarkably unexamined, unplanned attempt. Surely we’ve learned some ways to improve it. Surely there are better approaches out there than “hey, Reddit seems to work ok, let’s modify a couple things, call it good, and leave it alone for a while”.
>
> Not that I know how to improve it. Predictably, I have a few complaints and a few minor tweaks to suggest, but I’d really prefer a more evidence-based approach than that. Actually, I don’t even really know what process I would advocate for improving LW, let alone what the actual improvements would be that would come from that process.
As far as I can see, we do have plenty of meta discussions that examine LW.
> There is plenty of talk, less data, and only very tiny amounts of tested changes. Surely the rationalist approach to solving a problem like this should involve empirical examination, not just armchair discussions.
LW isn’t very big, and as such it’s not clear whether there would be strong returns on experimenting with software changes.