Never mind, I see that it’s listed on your home page in the “companies I’m involved with” section.
Is this you?
“Mercatoria uses pseudorandom assignment and locality of state to achieve arbitrary scalability and true decentralization for its payment processing and smart contracts.”
Wei Dai, cofounder
(Almost) never go full Kelly.
Kelly betting, or betting full Kelly, is correct if all of the following are true:
Just to clarify, the first two points that followed are actually reasons you might want to be *more* risk-seeking than Kelly, no? At least as they’re described in the “most situations” list:
Marginal utility is decreasing, but in practice falls off far less than geometrically.
Losing your entire bankroll would end the game, but that’s life. You’d live.
If your utility is linear in money, you should just bet it all every time. If it’s somewhere between linear and logarithmic, you should do something in between Kelly and betting it all.
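To make that concrete, here’s a quick sketch (TypeScript, with made-up numbers: an even-odds bet that wins 60% of the time, and CRRA utility to interpolate between logarithmic and linear; none of these specifics are from the post) of how the optimal bet fraction moves from full Kelly toward betting everything:

```typescript
// Optimal bet fraction under CRRA utility u(x) = (x^(1-g) - 1) / (1 - g).
// g = 1 is log utility (full Kelly); g -> 0 approaches linear utility.
// Illustrative bet: win probability p = 0.6 at even odds (b = 1).

function crra(x: number, g: number): number {
  return g === 1 ? Math.log(x) : (Math.pow(x, 1 - g) - 1) / (1 - g);
}

// Grid-search the fraction f of bankroll that maximizes expected utility.
function optimalFraction(p: number, b: number, g: number): number {
  let bestF = 0;
  let bestU = -Infinity;
  for (let f = 0; f < 1; f += 0.0001) {
    const u = p * crra(1 + f * b, g) + (1 - p) * crra(1 - f, g);
    if (u > bestU) { bestU = u; bestF = f; }
  }
  return bestF;
}

const p = 0.6, b = 1;
console.log(optimalFraction(p, b, 1));    // ~0.200: full Kelly, p - (1-p)/b
console.log(optimalFraction(p, b, 0.5));  // ~0.385: between Kelly and all-in
console.log(optimalFraction(p, b, 0.05)); // ~0.999: nearly linear, bet ~everything
```

With log utility the search recovers the usual Kelly fraction p − (1 − p)/b = 0.2, and pushing the curvature toward linear pushes the optimum toward betting the whole bankroll, as described above.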
I think a reasonable way to deal with this scenario is: each of you plays a mixed strategy with the end result that on average, both players will get 40% (meaning that 20% of the time the players will fail to reach agreement and nobody will get anything). The nice things about this strategy are (A) it gives both players at least some money and (B) it satisfies a “meta-fairness” criterion that agents with more biased notions of fairness aren’t able to exploit the system to get more money out of it.
Just wanted to note that this bit reminds me of this post: https://www.lesswrong.com/posts/z2YwmzuT7nWx62Kfh/cooperating-with-agents-with-different-ideas-of-fairness
(For those who haven’t seen it.)
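The quoted comment doesn’t spell out the game, so here’s one concrete setup that reproduces its numbers; the specific demands (60 for “my notion of fair,” 40 as the concession) and the even split of any surplus are my assumptions, chosen to make the arithmetic come out to 40% each with a 20% failure rate:

```typescript
// Each player independently demands 60 (their own notion of a fair share)
// with probability x, or concedes to 40 otherwise. Demands summing over 100
// mean no agreement (nobody gets anything); any surplus is split evenly.

const x = Math.sqrt(0.2); // chosen so P(both demand 60) = x^2 = 0.2

function payoff(myDemand: number, theirDemand: number): number {
  if (myDemand + theirDemand > 100) return 0;            // no deal
  return myDemand + (100 - myDemand - theirDemand) / 2;  // demand + half the surplus
}

const demands = [60, 40];
const probs = [x, 1 - x];

let expected = 0;
let pFail = 0;
for (let i = 0; i < 2; i++) {
  for (let j = 0; j < 2; j++) {
    const pr = probs[i] * probs[j];
    expected += pr * payoff(demands[i], demands[j]);
    if (demands[i] + demands[j] > 100) pFail += pr;
  }
}

console.log(pFail.toFixed(2));    // 0.20 -- agreement fails 20% of the time
console.log(expected.toFixed(1)); // 40.0 -- each player's expected share
```

In this setup each player’s expected share works out to 50(1 − x²) = 40 exactly, matching the numbers in the quote.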
So then you get Alex felt bad about the fact that, from Alex’s perspective, Bailey keeps ETC.
What does ETC mean here? I thought maybe you meant “etc.” but I can’t get the sentence to make sense in my head when I read it that way.
Specifically, Charlie doesn’t seem like she’s reacting to any chains at all, just the object-level aspect of Alex pegging Bailey as a downer.
I sort of agree. But in cases where Charlie knows what happened, you might expect her evaluation of whether Alex was right to conclude that Bailey is a downer to depend on the full chain of events.
In other words, is it that Alex dislikes being made to feel judged? Or, is it that Alex views his own response (feeling judged) to be somehow un-virtuous?
Pretty sure it’s just that Alex dislikes feeling judged, rather than an evaluation of whether his own response was virtuous.
a multiple-choice field with a few specific options, most likely being: [Empty, Exploratory, My Best Guess, Authoritative]
Have you checked that these labels are good matches for a sample of existing posts?
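(For concreteness, the quoted proposal amounts to something like the following; the option names are from the quote, but the field and type names are my invention:)

```typescript
// The proposed multiple-choice field, as a TypeScript union type.
type EpistemicStatus = "Empty" | "Exploratory" | "My Best Guess" | "Authoritative";

interface Post {
  title: string;
  epistemicStatus: EpistemicStatus; // presumably "Empty" for posts that don't set it
}
```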
I suspect this difference in intuition is a deep enough disagreement that it makes it difficult to fully agree on moral values.
It’s not clear to me, from what’s written here, that you two even disagree at all. Kaj says, “suffering is bad.” You say, “useless suffering is bad.”
Are you sure Kaj wouldn’t also agree that suffering can sometimes be useful?
why do people think consciousness has anything to do with moral weight?
Is there anything that it seems to you likely does have to do with moral weight?
I feel pretty confused about these topics, but it’s hard for me to imagine that conscious experience wouldn’t at least be an input into judgments I would endorse about what’s valuable.
Even if the process of learning P_H is doing the work to turn it into a coherent probability distribution (removing irrationality and making things well-defined), the end result may find situations which the AI finds itself in too complex to be conceived.
I had trouble parsing the end of this sentence. Is the idea that the AI might get into situations that are too complex for the humans to understand?
It’s quite possible in this case that it takes all the money.
Did you mean to say that it’s quite possible that it takes half the money?
and logical correlations become irrelevant because their votes decrease net x-risk if and only if yours does
I don’t understand this part. What do you mean by “their votes decrease net x-risk if and only if yours does”, and why does that mean logical correlations don’t matter?
And how is this situation different from the general case of voting when some other voters are like-minded?
Just bumped up my monthly Patreon pledge from $50 to $100.
I’m not sure I understand—what is the claim or hypothesis that you are arguing against?
The point about the limits of knowledge is well taken (and also a familiar one around here, no?), but I’m not sure what that implies for honesty.
Surely you would agree that a person or statement can still be more or less honest?
Is the idea that there’s nothing much to be gained by trying to be especially honest—that there’s no low-hanging fruit there?
Now, maybe I just missed something, but I don’t remember David Chapman mentioning Less Wrong specifically. So I don’t take his criticisms, per se, to be attacks on rationality as defined by LW.
I imagine (perhaps incorrectly) that he would agree with some parts of LW common knowledge.
He just seems to insist that the true meaning of “rationality” is Vulcan-style rationality.
Your understanding seems to match what he says in these tweets:
Important: by “rationalists,” I do NOT primarily mean the LW-derived community. I’m pointing to a whole history going back to the Ancient Greeks, and whose most prototypical example is early-20th-century logical positivism.
I think that much of the best work of the LW-derived community is “meta-rational” as I define that. The book is supposed to explain why that is a good thing.
If Scrooge McDuck’s downtown Duckburg apartment rises in price, and Scrooge’s net worth rises equally
Just to clarify, is the scenario that A) Scrooge owns the apartment building and the value of it has gone up (thus increasing his net worth), or is it that B) he rents the apartment from some other landlord, and coincidentally both his net worth and (the NPV of) his rent have gone up by the same amount?
Edit: on second reading, I’m pretty sure you mean A.
Oh right. For loops check the condition before entering the body, just like while loops and unlike do-while loops. Thanks.
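For anyone else who got tripped up by this, a minimal illustration (TypeScript here, but the same holds in C, Java, etc.):

```typescript
// With an initially-false condition, for and while never run their bodies;
// do-while checks the condition only after the body, so it runs once.

const ran: string[] = [];

for (let i = 0; i < 0; i++) ran.push("for body");     // condition false up front: skipped
let j = 0;
while (j < 0) { ran.push("while body"); j++; }        // likewise skipped
let k = 0;
do { ran.push("do-while body"); k++; } while (k < 0); // body runs once, then stops

console.log(ran); // ["do-while body"]
```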
What’s wrong with the implementation Dacyn gave?
A reasonable challenge—or, rather, half of one; after all, what CFAR “is trying to accomplish” is of no consequence. What they have accomplished, of course, is of great consequence.
That’s fair. I include the “trying” part because it is some evidence about the value of activities that, to outsiders, don’t obviously directly cause the desired outcome.
(If someone says their goal is to cause X, and they do in fact cause X, but along the way they do some seemingly unrelated activity Y, that is some evidence that Y is necessary or useful for X, relative to the case where they do Y and also happen to cause X but don’t have causing X as a primary goal.
In other words, independently of how much someone is actually accomplishing X, the more they are trying to cause X, the more one should expect them to be attempting to filter their activities for accomplishing X. And the more they are actually accomplishing X, the more one should update on the filter being accurate.)