Agreed, except that I don’t think we have enough time to do them separately; HPT without AI seems incredibly difficult to me. I’ve spent some time thinking about it, and I’m enthusiastic about https://microsolidarity.cc/ and https://anarchy.works/, but both rely on establishing a preference-fulfillment network of caring strong enough that mutual protection and general mutual aid become natural things to do. Many large aggregate agents (e.g., states) seem quite hesitant to do this because of power-balance instability: if states were too humanitarian, and e.g. considered becoming the world’s EMT instead of the world’s police, they’d put their power at risk. In general, establishing a dense network of coprotective caring requires a high enough ratio of other-preference-fulfillment preference in individuals that protection percolates through the social network graph, which seems necessary to me for reliable reputational tit-for-tat resistance to defectbot behavior. Because this is so hard to establish in the current preference network, it seems plausible to me that tools which make it easier would make an overwhelming difference (e.g., if power sources were dramatically more available). That, in turn, seems to rely on AI capable of solving death, disease, and relative scarcity, which are capabilities so strong that the AI needs to be aligned enough not to destroy us in order to work at all. But this is all idiosyncratic to my views, and perhaps I’ve missed something important; e.g., perhaps I got the game theory of distributed systems wrong and it’s more tractable than it seems to me.
(I kinda dumped a bunch of relevant keywords into this paragraph; it might help to ask an AI to extract the key ones and then look them up. Sorry for the jargon dump; most words besides “coprotection” should be standard terms of art or combinations of them.)
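To make the percolation intuition concrete: here is a toy sketch (my construction, not anything from the linked sites) of the claim that protection only “percolates” when the ratio of caring individuals is high enough. It models a random social graph where each node is a cooperator with some probability, and measures the largest connected cluster of cooperators; below a critical ratio the clusters stay tiny, above it a giant coprotective component appears. All names and parameters here are illustrative assumptions.

```python
import random
from collections import Counter

def largest_cooperator_cluster(n, avg_degree, coop_ratio, seed=0):
    """Fraction of all n nodes inside the largest connected cluster of
    cooperators, in an Erdos-Renyi-style random graph with the given
    expected average degree. Toy model: 'cooperator' stands in for an
    individual with strong other-preference-fulfillment preference."""
    rng = random.Random(seed)
    coop = [rng.random() < coop_ratio for _ in range(n)]

    # Union-find over the cooperator-induced subgraph.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    p_edge = avg_degree / (n - 1)
    for i in range(n):
        for j in range(i + 1, n):
            # Only edges between two cooperators count as coprotection.
            if coop[i] and coop[j] and rng.random() < p_edge:
                parent[find(i)] = find(j)

    sizes = Counter(find(i) for i in range(n) if coop[i])
    return max(sizes.values(), default=0) / n
```

With average degree 8, a cooperator ratio of 0.9 leaves the cooperators’ effective degree well above the percolation threshold (a giant cluster covering most of the graph), while a ratio of 0.1 drops them below it (only scattered small clusters), which is the qualitative phase transition the paragraph is gesturing at.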
I’m curious what downvoters would say if they also replied.