Where was this originally posted?
1. It’s usually quite clear what it should be.
Are you imagining a manual process where you look at each post and edit it? I was assuming Oliver had in mind an automated script.
Would you expect it to be easy to script the conversion?
The Kelly criterion was recently discussed on LW here.
As I start reading this post, I find myself curious how you see the goal of this post in relation to that one: is it an alternate introduction? Does it cover different ground? Is there anything you disagree with in that other post (and its follow-up, The Art of the Overbet)?
that is _not_ a scientific/rational approach you are taking w.r.t. Myers-Briggs.
What’s wrong with it? “Descriptions of the INTJ type seem to match me” seems like a meaningful statement.
Perhaps you wanted to know whether they read all the other type descriptions too?
I thought Scott had a pretty good post on this:
My MBTI type is “the type of person who did some looking into it years ago and knows that the MBTI is neither particularly scientific nor particularly consistently applied”. Or, as it’s also called, INTJ. — tropylium.tumblr.com
I’m sick of people hating on the Myers-Briggs Type Indicator.
The argument against Myers-Briggs is that it’s not scientific. The argument for Myers-Briggs is that I’m also the kind of person who did some looking into it and realizes that MBTI is neither scientific nor consistently applied, and I also test consistently as INTJ, so clearly something is going on here. And every time I read a description of INTJ I have to facepalm because I so consistently recognize myself in it.
(Yes, I’m familiar with the Forer effect and have compared it to descriptions of different types. Yes, I could totally believe there is a Forer + placebo effect where knowing that you have been assigned a certain type makes it sound more relevant to you than other types you read. Yes, I’m still impressed with how well descriptions of INTJs fit me. Also, I notice that people on Less Wrong, i.e. people like me, are seven times more likely to be INTJ than the general population. That seems like a nice objective result.)
I think it’s easy to reconcile “Myers-Briggs is not scientific” with “Myers-Briggs is a useful and real descriptive tool”...
The mean person has one seven-billionth of the control over the fate of humanity. There’s your slightest chance right there!
Edit: In other words, the world is big but not infinite. We are small but not infinitesimal.
I’m having trouble following the part about the operators. Could you spell it out in words? What do the two equations represent? Why is one a multiplication and the other a sum?
In this post (original here), Paul Christiano analyzes the ambitious value learning approach.
I find it a little bit confusing that Rohin’s note refers to the “ambitious value learning approach”, while the title of the post refers to the “easy goal inference problem”. I think the note could benefit from clarifying the relationship between these two descriptors.
As it stands, I’m asking myself—are they disagreeing about whether this is easy or hard? Or is “ambitious value learning” the same as “goal inference” (such that there’s no disagreement, and in Rohin’s terminology this would be the “easy version of ambitious value learning”)? Or something else?
Got it, thanks!
That interpretation makes the “I even forbid myself...” part in rule 3 follow more naturally as well.
53% of the people who had been preparing GS talks offered some kind of help (10/19). 29% of the people preparing non-GS talks stopped to help (6/21).
Wait, surely that means people who prepared a GS talk were 1.8x more likely to help than those with an alternative topic? Oh no, says the report. The difference was not significant at the p<0.05 level.
Isn’t the first category larger than the second? (“some kind of help” vs “stopped to help”)
How many of the 10 GS people who “offered some kind of help” did the “help indirectly” thing (score of 2 on the 0-5 scale)? How many of the 15 non-GS people who did not stop to help did help indirectly?
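Incidentally, the non-significance claim is easy to sanity-check from the quoted counts. A rough sketch (my own check, assuming “offered some kind of help” vs. not is the right 2×2 split, and using Fisher’s exact test since the samples are small):

```python
# Sanity check of "not significant at the p<0.05 level", using the
# counts quoted above: 10/19 helped in the GS group, 6/21 in the non-GS group.
from scipy.stats import fisher_exact

#         helped  did not help
table = [[10, 9],    # Good Samaritan talk group (n=19)
         [6, 15]]    # non-GS talk group (n=21)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(p_value)  # comes out well above 0.05, consistent with the report
```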
However, introduction of punishment had no effect in some of the pools.
This doesn’t seem quite right. While some of the pools didn’t see their contribution rates increase when punishment was added, at least the contribution rates didn’t decrease, as they did without punishment!
I’m also a bit confused about your definition of C.
Suppose we know the criterion of truth, C; that is, there exists (not counterfactually but actually as in anyone can observe this thing) a procedure/algorithm to assess if any given statement is true.
Suppose there exists a special magic eight ball that shows the word “true” or “false” when you shake it after making any statement, and that it always gives the correct answer.
Would you agree that use of this special magic eight ball represents a “procedure/algorithm to assess if any given statement is true”, and so anyone who knows how to use the magic eight ball knows the criterion of truth?
If so, I don’t see how you get from there to saying that a rock must be convinced, or really that anyone must therefore be convinced of anything.
Just because there exists a procedure for assessing truth (absolutely correctly) doesn’t mean that everyone uses that procedure, right?
Suppose that Alice has never seen nor heard of the magic eight ball, and does not know it exists. Just the fact that it exists doesn’t imply anything about her state of mind, does it?
Was there supposed to be some part of the definition of C that my magic eight ball story doesn’t capture, which implies that it represents a universally compelling argument?
Just being able to give the correct answer to any yes/no question does not seem like it’s enough to be universally compelling.
EDIT: If the hypothetical were not A) “there exists… a procedure to (correctly) assess if any given statement is true”, but rather B) “every mind has access to and in fact uses a procedure that correctly assesses if any given statement is true”, then I would agree that the hypothetical implies universally compelling arguments.
Do you mean to be supposing B rather than A when you talk about the hypothetical criterion of truth?
Nm, I see that it’s listed on your home page in the “companies I’m involved with” section.
Is this you?
“Mercatoria uses pseudorandom assignment and locality of state to achieve arbitrary scalability and true decentralization for its payment processing and smart contracts.”
Wei Dai, cofounder
(Almost) never go full Kelly.
Kelly betting, or betting full Kelly, is correct if all of the following are true:
Just to clarify, the first two points that followed are actually reasons you might want to be *more* risk-seeking than Kelly, no? At least as they’re described in the “most situations” list:
Marginal utility is decreasing, but in practice falls off far less than geometrically.
Losing your entire bankroll would end the game, but that’s life. You’d live.
If your utility is linear in money, you should just bet it all every time. If it’s somewhere between linear and logarithmic, you should do something in between Kelly and betting it all.
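To make that interpolation concrete, here’s a minimal numerical sketch (mine, not from the post; the 60% win probability and even odds are made up for illustration). It uses an isoelastic utility whose exponent slides between log utility (which recovers full Kelly) and linear utility (which bets everything):

```python
import numpy as np

def crra_utility(wealth, gamma):
    """Isoelastic utility: gamma=1 is log utility (Kelly), gamma=0 is linear."""
    if gamma == 1.0:
        return np.log(wealth)
    return (wealth ** (1.0 - gamma) - 1.0) / (1.0 - gamma)

def best_fraction(p=0.6, b=1.0, gamma=1.0):
    """Grid-search the bet fraction maximizing expected utility for a
    binary bet: win probability p, net odds b (even money by default)."""
    f = np.linspace(0.0, 0.999, 10_000)
    eu = (p * crra_utility(1.0 + b * f, gamma)
          + (1.0 - p) * crra_utility(1.0 - f, gamma))
    return f[np.argmax(eu)]

print(best_fraction(gamma=1.0))   # ~0.20, the Kelly fraction (bp - q)/b
print(best_fraction(gamma=0.5))   # ~0.38, between Kelly and all-in
print(best_fraction(gamma=0.05))  # ~0.999, near-linear utility bets ~everything
```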
I think a reasonable way to deal with this scenario is: each of you plays a mixed strategy with the end result that on average, both players will get 40% (meaning that 20% of the time the players will fail to reach agreement and nobody will get anything). The nice things about this strategy are (A) it gives both players at least some money and (B) it satisfies a “meta-fairness” criterion that agents with more biased notions of fairness aren’t able to exploit the system to get more money out of it.
Just wanted to note that this bit reminds me of this post: https://www.lesswrong.com/posts/z2YwmzuT7nWx62Kfh/cooperating-with-agents-with-different-ideas-of-fairness
(For those who haven’t seen it.)
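As a sanity check on the 40% / 20% numbers, here is a toy reconstruction. The specific game is my own guess, not from the linked post: each player believes they deserve 60% of a pie of 100, independently demands 60 with probability q = sqrt(0.2) ≈ 0.45 (conceding to 40 otherwise), and any leftover from compatible demands is split evenly. Then each player’s expected payoff is 50(1 − q²) = 40, and demands clash q² = 20% of the time:

```python
# Toy Monte Carlo check of the 40%-each / 20%-failure claim, under my
# assumed game (not from the linked post): pie of 100, each player demands
# 60 with probability Q or concedes to 40, leftovers split evenly.
import random

Q = 0.2 ** 0.5  # chosen so that P(both demand 60) = Q**2 = 0.2

def play_round():
    a = 60 if random.random() < Q else 40
    b = 60 if random.random() < Q else 40
    if a + b > 100:
        return 0.0, 0.0                   # incompatible demands: no deal
    leftover = 100 - a - b                # 20 when both concede, else 0
    return a + leftover / 2, b + leftover / 2

rounds = [play_round() for _ in range(1_000_000)]
print(sum(a for a, _ in rounds) / len(rounds))                  # ~40.0
print(sum(1 for r in rounds if r == (0.0, 0.0)) / len(rounds))  # ~0.20
```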
So then you get Alex felt bad about the fact that, from Alex’s perspective, Bailey keeps ETC.
What does ETC mean here? I thought maybe you meant “etc.” but I can’t get the sentence to make sense in my head when I read it that way.
Specifically, Charlie doesn’t seem like she’s reacting to any chains at all, just the object-level aspect of Alex pegging Bailey as a downer.
I sort of agree. But in cases where Charlie knows what happened, you might expect that their evaluation of whether Alex was right to conclude that Bailey is a downer would depend on the full chain of events.
In other words, is it that Alex dislikes being made to feel judged? Or is it that Alex views his own response (feeling judged) as somehow un-virtuous?
Pretty sure it’s just that Alex dislikes feeling judged, rather than an evaluation of whether his own response was virtuous.
a multiple-choice field with a few specific options, most likely being: [Empty, Exploratory, My Best Guess, Authoritative]
Have you checked that these labels are good matches for a sample of existing posts?