What if asymmetric fake trust technologies are orders of magnitude easier to build and scale sustainably than symmetric real trust technologies?
It already seems like asymmetric technologies work better than symmetric technologies, and that fake trust technologies are easier to scale than real trust technologies.
Symmetry and correct trust are both specific states, and there are many directions to depart from them; the only thing making them attractor states would be people who want the world to be safer rather than less safe. That sort of motivation is not well-reputed as an investment strategy (“Socially Responsible Indexes” did not help the matter).
I think that brings up a good point, but the main reason not to work on trust tech is actually cultural (Ayn Rand type stuff), not self-interest. There is actually a lot of social status and organizational reputation to be gained from building technology that fixes real problems, and it makes the world safer for the self-interested people building it.
It might not code as something their society values (e.g. cash return on investment), but the net upside is far larger than the net downside. Bryan Johnson, for example, is one of the few billionaires investing any money at all in anti-aging tech, even though so little money goes into that field that it would be in billionaires’ personal interest to form a coalition investing >1% of their wealth in technological advancement there.