I’m super impressed by all the work and the good intentions. Thank you for this! Please take my subsequent text in the spirit of trying to help bring about good long term outcomes.
Fundamentally, I believe that a major component of LW’s decline isn’t covered in the primary article and isn’t being addressed. Basically, many of the people who drifted away over time were (1) lazy, (2) insightful, (3) unusual, and (4) willing to argue with each other in ways that probably felt to them like fun rather than work.
These people were a locus of much value, and their absence is extremely painful from the perspective of having interesting arguments happen here on a regular basis. Their loss seems to have run in parallel with a general decrease in public acceptance of agonism in the English-speaking political world, and with a widespread cultural retreat from substantive longform internet debates, which is the part specifically relevant to LW 2.0.
My impression is that part of why people drifted away was that ideologically committed people swarmed into the space and tried to pull it in various directions that had little to do with what I see as the unifying theme of almost all of Eliezer’s writing.
The fundamental issue seems to be existential risk to the human species from exceptionally high-quality thinking, with no predictably benevolent goals, augmented by recursively self-improving computers (i.e. the singularity as originally defined by Vernor Vinge in his 1993 article “The Coming Technological Singularity”). This original vision covers (and has always covered) both Artificial Intelligence and Intelligence Amplification.
Now, I have no illusions that an unincorporated community of people can retain stability of culture or goals over periods of time longer than about 3 years.
Also, even most incorporated communities drift quite a bit or fall apart within mere decades. Sometimes the drift is worthwhile. Initially the thing now called MIRI was a non-profit called “The Singularity Institute For Artificial Intelligence”. Then they started worrying that AI would turn out bad by default, and dropped the “...For Artificial Intelligence” part. Then a late-arriving brand acquirer (“Singularity University”) bought their name for a large undisclosed amount of money, and the real research continued under the new name “Machine Intelligence Research Institute”.
Drift is the default! As Hanson writes: Coordination Is Hard.
So basically my hope for “grit with respect to species-level survival in the face of the singularity” rests on gritty individual humans whose commitment and skills arise from a process we don’t understand, can’t necessarily replicate, and often can’t even reliably teach newbies to identify.
Then I hope for these individuals to be able to find each other and have meaningful 1:1 conversations and coordinate at a smaller and more tractable scale to accomplish good things without too much interference from larger scale poorly coordinated social structures.
If these literal 1-on-1 conversations happen in a public forum, then that public forum is a place that “important conversations happen” and the conversation might be enshrined or not… but this enshrining is often not the point.
The real point is that the two gritty people had a substantive give and take conversation and will do things differently with their highly strategic lives afterwards.
Oftentimes a good conversation between deeply but differently knowledgeable people looks like an exchange of jokes, punctuated every so often by a sharing of citations (basically links to non-crap content) when a mutual gap in knowledge is identified. Dennett’s theory of humor is relevant here.
This can look, to the ignorant, almost like trolling. It can look like joking about megadeath or worse. And this appearance can become more vivid if third and fourth parties intervene in the conversation, and are brusquely or jokingly directed away.
The false inference of bad faith communication becomes especially pernicious if important knowledge is being transmitted outside of the publicly visible forums (perhaps because some of the shared or unshared knowledge verges on being an infohazard).
The practical upshot of much of this is that I think a lot of the very best content on LessWrong in the past happened in the comment sections, in the form of conversations between individuals, often one of whom regularly posted comments with a net negative score.
I offer you Tim Tyler as an example of a very old commenter who (1) reliably got net negative votes on some of his comments while (2) writing from a reliably coherent and evidence-based (but weird and maybe socially insensitive) perspective. As far as I’m aware, he hasn’t been around since 2014.
I would expect Tim to have reliably ended up with a negative score on his FIRST eigendemocracy vector, while probably also being unusually high (maybe the highest user) on a second or third such vector. He seems to me like the kind of person you might actually be trying to drive away, while at the same time being something of a canary for the tolerance of people genuinely focused on something other than winning at a silly social media game.
Upvotes don’t matter except to the degree that they conduce to surviving and thriving. Getting a lot of upvotes and enshrining a bunch of ideas into the canon of our community and then going extinct as a species is LOSING.
Basically, if I had the ability, then for the purposes of learning new things I would just filter out all the people who score high on the first eigendemocracy vector.
Yes, I want those “traditionally good” people to exist and I respect their work… but I don’t expect novel ideas to arise among them at nearly as high a rate, and thus to be available for propagation and eventual retention in a canon.
Also, the traditionally good people’s content and conversations would probably be objectively improved if people high in the second, third, and fourth such vectors also had a place, one that lets them object in a fairly high-profile way when someone high in the first eigendemocracy component proposes a stupid idea.
One of the stupidest ideas, one that cuts pretty close to the heart of such issues, is the possible proposal that people and content whose scores on the first eigendemocracy vector are low should be purged, banned, deleted, censored, and otherwise made totally invisible and hard to find by any means.
I fear this would be the opposite of finding yourself a worthy opponent, and another step toward actively damaging the community in the name of moderation and troll-fighting; it seems like it might be part of the mission, which worries me.
Is there a natural interpretation of what the first vector means vs what the second or third mean? My lin alg is rusty.
I wondered the same thing. The explanation I’ve come up with is the following:
See https://en.wikipedia.org/wiki/Linear_dynamical_system for the relevant math.
Assuming the interaction matrix is diagonalizable, the system state can be represented as a linear combination of the eigenvectors. The eigenvector with the largest positive eigenvalue grows the fastest under the system dynamics. Therefore, the corresponding component of the system state will become the dominating component, much larger than the others. (The growth of the components is exponential.) Ultimately, the normalized system state will be approximately equal to the fastest-growing eigenvector, unless other eigenvectors grow equally strongly.
If we assume the eigenvalues are non-degenerate and thus sortable by size, one can identify the fastest-growing eigenvector, the second-fastest-growing eigenvector, etc. I think this is what JenniferRM means by ‘first’ and ‘second’ eigenvector.
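To make that interpretation concrete, here is a minimal sketch under the assumptions above. The interaction matrix and its numbers are entirely made up for illustration; it is chosen symmetric so it is guaranteed to be diagonalizable with real eigenvalues.

```python
import numpy as np

# Toy interaction matrix (numbers are invented for illustration):
# A[i, j] = how strongly user i's activity reinforces user j's standing.
A = np.array([
    [0.0, 2.0, 1.0],
    [2.0, 0.0, 1.0],
    [1.0, 1.0, 0.0],
])

# eigh handles symmetric matrices and returns eigenvalues in ascending
# order; flip to descending so that column 0 of eigvecs is the "first"
# (fastest-growing) eigenvector and column 1 is the "second".
eigvals, eigvecs = np.linalg.eigh(A)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
print(eigvals)  # descending: roughly 2.732, -0.732, -2.0

# Repeatedly applying A amplifies the component along the dominant
# eigenvector fastest, so the normalized state converges to it.
v = np.ones(3)
for _ in range(100):
    v = A @ v
    v /= np.linalg.norm(v)

# |cosine| between the iterated state and the first eigenvector -> 1.0
print(abs(v @ eigvecs[:, 0]))
```

A user of the kind described above would then be someone whose component in `eigvecs[:, 0]` is negative while their component in `eigvecs[:, 1]` is unusually large.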