One error of the stag/rabbit hunt framing is that it makes explicit, by assigning utility numbers, that it's a coordination problem rather than a values problem. To frame it differently, the stag and rabbit hunts would have to yield not different utility numbers but different resources, or different certainties of resource. If a rabbit hunt yields 3d2 rabbits per hunter, but the stag hunt yields 1d2-1 stags if all hunters work together and 0 if they don't, then even with a higher expected yield of meat and hide from the stag hunt, for some people the rabbit hunt might yield higher expected utility, since the certainty of not starving is worth far more utility than a marginal increase in hides.
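To make the numbers concrete, here is a minimal sketch of the tradeoff (the dice are from the paragraph above; the meat-equivalents are invented for illustration):

```python
# The dice are as stated above; the per-hunter meat values are assumptions.
MEAT_PER_RABBIT = 1.0
MEAT_PER_STAG_SHARE = 12.0  # assumed per-hunter share of one stag

e_rabbit_meat = 3 * 1.5 * MEAT_PER_RABBIT  # E[3d2] = 4.5 rabbits -> 4.5 meat
e_stag_meat = 0.5 * MEAT_PER_STAG_SHARE    # E[1d2-1] = 0.5 stags -> 6.0 meat

p_starve_rabbit = 0.0  # 3d2 never rolls below 3
p_starve_stag = 0.5    # 1d2-1 is zero half the time

print(e_rabbit_meat, e_stag_meat)      # the stag hunt wins on expected meat...
print(p_starve_rabbit, p_starve_stag)  # ...the rabbit hunt wins on certainty
```

A sufficiently risk-averse utility function over those two distributions prefers the rabbit hunt even though it loses on expectation.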
In order to confidently assert that a Schelling point exists, one should have viewed the situation from everyone's point of view and applied their actual goals, NOT looked at everyone's point of view and applied your own goals, or the average goals, or the goals they think they have.
An answer of “There is probably one but I can’t figure out what it is” is equivalent to an answer of “I can’t find one.”
I’m not making a mathematical conjecture that is probably true but might not have a proof; I’m asking what is wrong with engineering fully sentient catgirls who want to serve people in a volcano fortress that isn’t also wrong with allowing existing people to follow their dreams of changing themselves into sentient catgirls and serving people in a volcano fortress.
Is there any significant difference between finding sentient beings who self-modify into becoming sentient catgirls for the purpose of serving you in your volcano fortress and engineering de novo sentient catgirls who desire to serve you in your volcano fortress?
I don’t think it’s inherently difficult to tell the difference between someone who is speaking N levels above you and someone who is speaking N+1 levels above you. The one speaking at a higher level is going to expand on all of the things they describe as errors, giving *more complex* explanations.
The difficulty is that it’s impossible to tell if someone who is higher level than you is wrong, or telling a sophisticated lie, or correct, or some other option. The only way to understand how they reached their conclusion is to level up to their level and understand it the hard way.
There’s a related problem, where it’s nigh impossible to tell if someone who is actually at level N but speaking at level N+X is making shit up completely, unless you are above the level they are speaking at (and can spot errors in their reasoning).
Take a very simple case: A smart kid explaining kitchen appliances to a less smart kid. First he talks about the blender, and how there’s an electric motor inside the base that makes the gear thingy go spinny, and that goes through the pitcher and makes the blades go spinny and chop stuff up. Then he talks about the toaster, and talks about the hot wires making the toast go, and the dial controls the timer that pops the toast out.
Then he goes +X over his actual knowledge level, and says that the microwave beams heat radiation into the food, created by the electronics, and that the refrigerator uses an ‘electric cooler’ (the opposite of an electric heater) to make cold that it pumps into the inside, and the insulated sides keep it from making the entire house cold.
Half of those are true explanations and half are bluffs, but someone who barely has the understanding needed to verify the first two won’t have the understanding needed to refute the last two. If someone else corrects the wrong descriptions, said unsophisticated observer would have to use things other than the explanations themselves to determine credibility (in the toy case given, a good explanation could level up the observer enough to see the bluff, but in the case of +5 macroeconomics that is impractical). If the bluffing actor tries to refute the higher-level true explanation, they merely need to bluff more; people of high enough level to see the bluff /already weren’t fooled/, and people of lower level see the argument settle into an equilibrium or cycle isomorphic to all parties saying “That’s not how this works, that’s not how anything works; this is how it works,” and can only distinguish between them by things other than the content of what they say (bias, charisma, credentials, tribal affiliation, and verified track records are all within the Overton Window for how to select whom to believe).
How useful would it be to have someone produce luminators that were pegboards with lights mounted via zip ties, or something equally aesthetically bad? If the labor of collecting and assembling the components can be efficiently outsourced by buying a nonstandard light fixture, it might make them more accessible.
Are you suggesting blacklightboxes?
Has anyone who has gotten relief by using luminators done rigorous A/B testing with different temperatures/colors, intensities, durations, or other possibly important variables?
Not necessarily gold-standard clinical trials; even something like “I tried color A for a week and logged 3 episodes, but color B for a week resulted in 8” could be informative for people deciding which type of bulb to get.
If 20 percent of children in third grade could read at at least the first-grade level, what percentage of children that age who didn’t attend school could do so?
The mockingbird: Find whatever method the current leader(s) is/are using to enable self-cooperation, and find a way to mimic it with a small advantage (e.g., if they use a string of 0s, 1s, 4s, and 5s to self-identify, spam 4 until they identify you as themselves, then work out how to get onto the side of the mutual cooperation that is sometimes up a point).
Tit-for-tat with correction: Start with an even distribution, then play what they played last round, except: if the total last round was above five and they played higher, reduce the value you play by the amount that exceeded five; if the total last round was below five and they played lower, increase the value you play by the shortfall; if the values played were the same, adjust by half the difference, randomly selecting between the two nearest values if a .5 change is indicated. (Loses at most 5 points to fivebot, loses about half a point per round to threebot, leaves some on the table with twobot, but self-cooperates on round two with 80% probability.)
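A minimal sketch of the second strategy in Python, under my reading of the rules (the uniform 0-5 opener is an assumption, since “an even distribution” isn’t pinned down, and the game’s scoring isn’t modeled here):

```python
import random

def first_move():
    # "Start with an even distribution": read here as uniform over 0-5.
    return random.randint(0, 5)

def next_move(my_last, their_last):
    total = my_last + their_last
    if my_last == their_last and total != 5:
        # Same values: correct by half the overshoot/undershoot, picking
        # randomly between the two neighbors when a .5 step is indicated.
        target = their_last + (5 - total) / 2
        low = int(target)
        return low if target == low else random.choice([low, low + 1])
    if total > 5 and their_last > my_last:
        return their_last - (total - 5)  # back off by the excess
    if total < 5 and their_last < my_last:
        return their_last + (5 - total)  # make up the shortfall
    return their_last                    # plain tit-for-tat otherwise
```

Note that both correction branches resolve to five minus your own last move, so when two copies open with different values, exactly one of them corrects while the other tit-for-tats, and the pair totals exactly five on round two; equal openers resolve to a 2-or-3 coin flip.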
Nominal GDP also increases 1,000-fold, and everyone’s currency savings increases 1,000-fold, but the things which are explicitly denominated in nominal currency rather than in notes will keep the same number. The effect would be to destroy people who plan on using payments from debtors to cover future expenses, just as if their debtors had defaulted and paid only one part in a thousand of the debt, but without any default occurring.
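A toy illustration with invented numbers of how a fixed nominal payment interacts with a 1,000-fold scaling of everything else:

```python
# Toy numbers (mine): a creditor planning to cover future expenses out of
# fixed nominal debt payments, before and after the 1000x change.
SCALE = 1000
expenses = 2_000  # monthly expenses in old units; scales with the economy
payment = 2_500   # contracted nominal debt payment; does not scale

print(payment / expenses)            # 1.25: payments covered expenses before
print(payment / (expenses * SCALE))  # 0.00125: afterward, the same real
                                     # position as a 999-in-1000 default
```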
My prediction is that having a sincerely held belief to ‘defy Moloch whenever possible’ would result in suffering the harm that comes from being among the first actors to switch away from the worse Nash equilibrium.
Let’s talk about timed-collective-action-threshold-conditional-commitments.
The very most important thing about having the all-things-considered view is not multiply propagating the consensus belief, especially when the information flow is one-way. If you report your credence after updating on a consensus that you didn’t agree with, but without causing the consensus to update at least a tiny bit toward your belief, then someone who updates their inside view with the view you hold after updating on others, and with the view others hold without updating on you, will develop and propagate errors even if everyone involved is doing the math diligently and accurately.
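A toy version of the double-counting, using log-odds averaging as the pooling rule (my formalization; the comment doesn’t specify the math):

```python
import math

def logodds(p): return math.log(p / (1 - p))
def prob(l): return 1 / (1 + math.exp(-l))

def pool(*ps):  # average credences in log-odds space
    return prob(sum(map(logodds, ps)) / len(ps))

inside_view = 0.20  # your inside view
consensus = 0.80    # a consensus you updated toward but didn't move

reported = pool(inside_view, consensus)  # the credence you report: 0.50

print(pool(inside_view, consensus))  # 0.500: pooling your *inside* view
print(pool(reported, consensus))     # 0.667: pooling your *reported* view
                                     # counts the consensus twice
```

Anyone downstream who treats the reported 0.50 as an independent inside view drags the pooled estimate back toward the consensus, and iterating the mistake compounds it.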
There will always be tasks at which better (Meta-)*Cognition is superior to the available amounts of computing power and search-protocol tuning.
It becomes irrelevant if either humans aren’t better than easily created AI at that level of meta, or AI goes enough levels up to become a failure mode.
No individual cares about anything other than the procedures. Thus, the organization as a whole cares only about the procedures. The behavior is similar /with the procedures that exist/ to caring about fitness, but there is also a procedure to change procedure.
If the organization cared about fitness, the procedure to change the height/weight standards would be based on fitness. As it is, it is more based on politics. Therefore I conclude that the Army cares more about politics and procedures than fitness, and any behavior that looks like caring about fitness is incidental to their actual values.
The listener’s filter needs as an input the nature of the speaker’s filter, or information is irretrievably lost.
The speaker’s filter needs as an input the nature of the listener’s filter, or information is irretrievably lost.
Having two codependent filters like that has a lot of stable non-lossy outcomes. One easy one to describe is the one where both filters are empty.
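To make ‘codependent filters’ concrete, here is a toy formalization (mine, not from the thread) where a filter is just a transform on the message:

```python
# A filter is a transform; lossless communication requires the listener to
# invert the speaker's actual filter, so each filter needs the other as input.
def tactful(msg):    # a speaker-side "tact" filter
    return msg.replace("wrong", "not obviously right")

def untactful(msg):  # the listener-side inverse it requires
    return msg.replace("not obviously right", "wrong")

def empty(msg):      # the empty filter
    return msg

msg = "your model is wrong"
print(untactful(tactful(msg)) == msg)  # True: matched pair, lossless
print(empty(empty(msg)) == msg)        # True: matched empty pair, lossless
print(empty(tactful(msg)))             # mismatched pair: the listener reads
                                       # "not obviously right" and the
                                       # original meaning is lost
```

The matched tact pair is as lossless as the empty pair, but only if both sides know which pair is in play; the empty pair is the one that needs no coordination.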
Unless you can convince me of a specific pair of filters such that many more people that I want to talk to use those two filters than use empty filters (increasing the number of people with whom I can communicate losslessly) or that provide some benefit superior to empty filters, I’ll continue to use empty filters as much as possible, even if I have to aggressively enforce that choice on others.
Signalling higher status by applying ‘tact’ when I don’t want to be insulting is not a benefit to me. Giving others more deference than myself regarding what filters to apply is not a benefit to me. If I want to insult someone, I can do that as effectively by insulting them as a tact culture communicator could by speaking without tact.
… will banish you from the tribe.
The only person I heard of going to the brig was one who broke into barracks and stole personal property. Falsifying official records or running off to run a side job as a real estate broker was more of a ‘30 days restriction, 30 days extra duty, reduction in rate to the next inferior rate, forfeiture of 1/2 month’s base pay for 2 months’ thing.
The Army works just fine, and has goals that aren’t ours. Why not steal much of their model /which works and has been proven to work/?
Especially if the problematic aspects of Army culture can be avoided by seeing the skulls on the ground.
Part of the program is separating people who don’t lose weight. That doesn’t mean they care about the height/weight, only that the next box is ‘process for separation’.
There’s not a lot other than adherence to procedure that most of the military actually does care about.
I read that “this is causing substantial harm” would be insufficient to cancel a norm, but expect that “this is creating a physical hazard” would be enough to reject the norm mid-cycle. The problem is that every edge has edge cases, and if there’s a false negative in a midterm evaluation of danger...
Maybe I’m concluding that the paramilitary aesthetic will be more /thing/ than others are. In my observation, authoritarian paramilitary-styled groups are much more /thing/ than other people expect them to be. (My own expectations, OTOH, are expected to be accurate because subjectivity.)
“Last fortnight, we canceled [Idea which appeared to be horrible seconds after implementing it], which we continued for an entire fortnight because of our policy. Today we look at all available evidence and must decide if the meta-experiment generates benefits greater than the costs.”
Having no norm for explicitly evaluating that rule doesn’t mean that you won’t evaluate it. Maybe evaluating it every time it applies is excessive, but pretending that people won’t quickly learn to put exit clauses ‘notwithstanding any other provision’ into experiments that are likely to need them is failing to accurately predict.