I am confused why anyone would believe that this post is attempting to pass an ITT. It isn’t.
It’s also giving the doctor the benefit of the doubt in important ways that jimrandomh seems confident are unlikely to be accurate: in particular, that the doctor’s justification for such frequent and copious appointments is concern for the patient, with no profit/fraud motive of any kind.
Is there a name for this type of equilibrium, where a player can pre-commit such that the other player’s best response leaves the first player very well-off, but not quite optimally well-off? What if the commitment is a mixed strategy (e.g. consider the version of this game where the player who gave the larger number gets paid nothing)?
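For concreteness, here is a minimal sketch of the structure I have in mind, with a made-up 2x2 game rather than the one from the post: the first player searches over (possibly mixed) commitments, the second player best-responds, and the optimal commitment leaves the first player well-off but short of their first-best outcome.

```python
# Illustrative sketch: a leader commits (possibly to a mixed strategy over
# rows) and the follower best-responds with a column. The payoff matrices
# here are made up for illustration; they are not the game from the post.
A = [[2.0, 4.0],   # leader's payoffs
     [1.0, 3.0]]
B = [[1.0, 0.0],   # follower's payoffs
     [0.0, 1.0]]

def follower_best_response(p):
    """Follower's best column when the leader plays row 0 with probability p.
    Ties break in the leader's favor."""
    payoffs = [p * B[0][j] + (1 - p) * B[1][j] for j in range(2)]
    best = max(payoffs)
    tied = [j for j in range(2) if payoffs[j] == best]
    return max(tied, key=lambda j: p * A[0][j] + (1 - p) * A[1][j])

def leader_value(p):
    j = follower_best_response(p)
    return p * A[0][j] + (1 - p) * A[1][j]

# Grid-search the leader's commitment.
best_p = max((i / 1000 for i in range(1001)), key=leader_value)
print(f"commitment value ~ {leader_value(best_p):.2f} at P(row 0) = {best_p:.2f}")
print(f"leader's first-best payoff = {max(max(row) for row in A):.1f}")
# Prints ~3.50 vs 4.0: commitment makes the leader very well-off, but the
# follower's best response keeps the leader short of the first-best outcome.
```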
Yes, that seems right, if it can be used as the sole criterion, and be properly normalized for the time frames and questions involved. There are big second-level Goodhart traps lying in wait if people care about this metric.
Right. I kinda implied it was part of the solution but didn’t say it explicitly enough, and may edit.
The problem for implementation, of course, is that explaining your reasoning is toxic in worlds with the models we describe. It’s the opposite of not taking positions, staying hidden and destroying records. It opens you up to being blamed for any aspect of your reasoning. That’s pretty terrible. It’s doubly terrible if you’re in any sort of double-think equilibrium (see SSC here). Because now, you can’t explain your reasoning.
A key active ingredient here seems to be that exact ability to disguise your true position. Even if someone knows your trades, they don’t know why you did them. You could have a different fair value (probability estimate), you could be hedging risk, you could expect the price to move in a direction without thinking that move is going to be accurate, and so on.
By not requiring the trader to be pinned down to anything (except profit and loss) we potentially extract more information.
And all of that applies to non-prediction markets, too.
Agreed. Changed to “dishonest update reporting.”
I think it’s definitely not dishonest to actually update too slowly versus what would be ideal. As you say, almost everyone does it.
What’s dishonest is for Bob to think 50% and say 70% (or 75%) because it will look better.
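To make the incentive concrete: under a proper scoring rule, saying 70% while thinking 50% strictly lowers Bob’s expected score. A minimal sketch using the log score (my choice of rule for illustration; nothing above specifies one):

```python
import math

def expected_log_score(belief, report):
    """Bob's expected log score when the event happens with probability
    `belief` (his true credence) but he reports `report`."""
    return belief * math.log(report) + (1 - belief) * math.log(1 - report)

print(expected_log_score(0.5, 0.5))  # ~ -0.693: the honest report
print(expected_log_score(0.5, 0.7))  # ~ -0.780: saying 70% while thinking 50%
# Honest reporting maximizes expected score; the inflated number only
# "looks better" if nobody scores Bob properly afterwards.
```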
This feature is important to me. It might turn out to be a dud, but I would be excited to experiment with it. If it were available in a way that was portable to other websites as well, that would be even more exciting to me (e.g. I could do this on my main blog).
Note that this feature can be used for more than forecasting. One key use case on Arbital was seeing who was willing to endorse or disagree with various claims relevant to the post, and to what extent. That seemed very useful.
I don’t think having internal betting markets is going to add enough value to justify the costs involved. Especially since it both can’t be real money (for legal reasons, etc.) and can’t not be real money if it’s going to do what it needs to do.
Robin seems to have run smack into the reasonably obvious “slavery is bad, so anything that could be seen as justifying slavery, or excusing slavery, is also bad to say even if true” thing. It’s not that he isn’t sincere, it’s that it seems like he should have figured this one out by now. I am confused by his confusion, and wish he’d spend his points more efficiently.
The Asymmetric Justice model, whereby you are as bad as the worst thing you’ve done, would seem to cover this reasonably well at first glance: “Owned a slave” is very bad, and “Owned a slave but didn’t force them into it” doesn’t score a different number of points, because “Owned a slave” is the salient biggest bad, in addition to (or rather than) “Forced someone into slavery.”
There’s also the enrichment that, past a certain point, things just get marked as ‘evil’ or ‘bad’ and in many contexts, past that point, it doesn’t matter, because you score points by condemning them and are guilty alongside them if you defend them, and pointing out truth counts as defending, and lies or bad arguments against them count as condemning. But that all seems… elementary? Is any of this non-obvious? Actually asking.
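For concreteness, a minimal sketch of the scoring model I’m describing, with made-up point values and an arbitrary threshold for ‘evil’:

```python
# All numbers here are made up for illustration.
EVIL_THRESHOLD = -50

def asymmetric_justice(acts):
    # You are as bad as the worst thing you've done; good acts don't offset it.
    return min(acts.values())

def verdict(acts):
    score = asymmetric_justice(acts)
    # Past a certain point, the magnitude stops mattering: just 'evil'.
    return "evil" if score <= EVIL_THRESHOLD else score

print(verdict({"owned a slave": -100, "forced them into it": -100}))     # evil
print(verdict({"owned a slave": -100, "didn't force them into it": 0}))  # evil (same verdict)
print(verdict({"jaywalked": -1, "returned a lost wallet": 10}))          # -1
```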
Yes. This is a viable strategy people use in the real world. It is often called “get ahead of the story.”
That’s how it works out there in the real world. There’s a big cost to change and a bigger cost to reversing change. Plus, the idea is to give us fewer dead games. If they gave us that, then took it away, that would seem quite bad.
If the whole thing is subtle, it won’t be undone.
If it’s obvious (e.g. dead games actually go up, not down) then it would perhaps be undone.
It varies. Sometimes, they are meant to illustrate broader points, but mostly they are about the object level, as is this one. Deck guides are always object-level.
I’ll give thought to whether this makes sense to do at my main blog (which also has tags but not in a way that would be helpful here). If not, the question is whether to go in and edit the posts manually. I would be 100% fine with moderators adding such a tag where it was appropriate; if we didn’t copy everything over to LW automatically, I wouldn’t be copying MtG posts here.
I worry that these studies in support of free speech are narrowly defining free speech as ‘allowed to speak’ rather than as the absence of social and economic punishment for speaking one’s mind.
I also worry that the reason free speech looks supported in Murray’s study is that he’s asking about the things people wanted to censor in 1970, as opposed to the things they want to censor now. E.g. imagine the graph for someone against homosexuality, or in favor of religion, or for big crackdowns on communists. The consensus view on these now, among moderates, would have been subject to censorship in 1970.
I feel a lack of free speech on some issues, but exactly zero of that comes from the threat of government intervention or even corporate censorship; rather, it comes from worry about social, economic or reputational retaliation.
Which is definitely better than it expiring, and 24h batching is better than instantaneous feedback (unless you were going to check posts individually for information already, in which case things are already quite bad). It’s not obvious to me what encouraging daily checks here is doing for discourse as opposed to being a Skinner box.
I have seen the term used positively in the Trump era. My guess is that this is a reaction to it becoming a rhetorical point that it is bad, which makes others respond that it is good.
Whereas before that, the term had been abandoned due to its negative connotations. Part of my model of this is that people support censoring specific things but are against censoring in general. Just like they say the government/corporation spends too much but are individually in favor of every government program and against firing anyone.
I gotta love this quote from their website:
As the information war escalates, we believe more than ever that our responsibility is to provide an advanced, reliable disinformation solution to national security agencies, responsible leaders, and trusted brands.
The ambiguity between “solution to disinformation” and “solution in the form of disinformation” is delicious.
They say this is only to be used on manipulative or disinformation campaigns:
Based on data from our monitoring system, New Knowledge analysts provide the tools and support that companies need to disrupt manipulative online campaigns and maintain brand integrity. No system integration required. No private data collected.
I have no idea why what they are offering would be an asymmetric weapon. Nor do I think that ‘get very good at detecting and understanding manipulative social media campaigns’ is a strategy likely to lead to non-manipulative counter-strategies at a profit-maximizing corporation.
I can see why it might be better at disruption than creation, like many things. This might be one of the few places that make me feel a little better.