sounds interesting if it works as math. have you already written it out in latex or code or similar? I suspect that this is going to turn out to not be incentive compatible. Incentive-compatible “friendly”/”aligned” economic system design does seem like the kind of thing that would fall out of a strong solution to the AI short-through-long-term-notkilleveryone-outcomes problem, though my expectation is basically that when we write this out we’ll find severe problems not fully visible beneath the loudness of natural language. If I didn’t need to get away from the computer right now I’d even give it a try myself, might get around to that later, p ~= 20%
I’ve been dragging my feet on the sim. Help definitely needed, especially on formalization.