Programmer, rationalist, chess player, father, altruist.
cata
Funny, but I think it would be funnier if it were less unsubtly rude. The concept is already inherently edgy enough that adding extra insults is like putting frosting on ice cream.
After hanging out in the Polymarket zone for a while, it became clear to me that there are at least two pretty distinct groups of people—the +EV veterans who have fun trying to beat the market and make money, and the gamblers who like to play the game of betting on their opinions and seeing who makes bank. The latter group doesn’t really like it when the +EV people find a “clever” edge like this because it makes the game less fun and fair-seeming.
There is no third group that actually wants an accurate, up-to-date prediction of the CO2 level printed on the website as of the next day.
It seems to me that there are many different complaints being raised:
A) Experts in specific things were treated as general authorities.
A lot of what we get is adjacent expertise. So somebody who studies viruses, and maybe knows a lot about the protein structure of viruses, will opine about masks, right… They didn’t have expertise in those areas, and were in fact just on a par with me, or anybody else, right? But they had the, sometimes, arrogance that comes with believing you’re being asked about your area of expertise.
B) People have differing moral values about what is a good result.
We’re just talking about, what is a fair and reasonable way to prioritize different people over each other?
C) The arguments and decisions being made aren’t actually being made by the real experts, they are being made by pseudo-experts.
And I’m like, “Well, according to whom?” Right? Obviously in consequentialist terms, it’s good ethics. I happened to know the top expert in Kantian ethics, she thinks that’s a good idea. So, who the fuck are you?
I think these complaints are not incompatible with some people being experts at reasoning about ethics. It seems like what’s happening (I certainly have never observed “bioethics Twitter” so I am just guessing) is that random people are rationalizing their beliefs by appealing to random prestigious or powerful people and calling them the “experts” to whom you should defer all judgment, and then Julia and Matt are pushing back on that whole general dynamic.
You know, Polymarket really works and anyone can use it right now, including US citizens; the only really serious problem with it is that it costs a flat fee of roughly $50 to deposit and withdraw, and you have to be able to figure out how to send cryptocurrency.
Some comments:
As a random smart LW person, I actually rarely have interest in participating in a prediction market, because I have to work hard to have any edge on anything, and even if I have an edge I risk betting against someone who has true inside information. I would have to value my time less or enjoy modelling things more to be an active participant. It’s not like the stock market where you are incentivized to use it even if you don’t know anything special.
It’s hard to get people to participate in long-term markets, because the payoff just isn’t very good. I have zero desire to work hard on a model so that I can make an expected profit of 10% two years from now.
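To put a number on how weak that payoff is (a back-of-the-envelope sketch, not from the original comment): a fixed total return realized years out annualizes to much less than it looks.

```python
# Annualized return implied by a fixed total return paid out after n years.
def annualized(total_return: float, years: float) -> float:
    return (1 + total_return) ** (1 / years) - 1

# A 10% payoff two years from now is under 5% per year, before accounting
# for the risk and the opportunity cost of the locked-up capital.
print(f"{annualized(0.10, 2):.2%}")
```

Compare that to just holding an index fund, and the incentive to build a careful model evaporates.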
$25k is a joke—is that for real? That does not pay off someone for making a good model. It also doesn’t let smart money fix a wrong price even if they magically have a perfect model. You would have to organize some kind of fund with many traders working on many models who all shared info and went in on all of each others’ trades.
I think if you have a good system for predicting prices, it doesn’t make sense to add this tweeting dynamic. If you do, and the combined tweet+price system produces good predictions and makes money, then what will probably happen is that a ton of other people clone the idea and produce similar tweeters and/or predictors, until it’s much harder to produce good predictions with consumer-grade models. If you had kept your system to yourself, you could have gone on quietly extracting money from your good predictions for longer.
Naively, the system seems similar to “good predictor” human fund managers who advertise their services, where you can see the end state I’m describing: there are so many of them that most of them don’t make money anymore, and even if someone shows you good evidence of their good predictions in the past it’s no longer sufficient evidence to act on their advice in the present.
I agree—I think there are many communities which easily achieve a high degree of safety without “safetyism”, typically by being relatively homogeneous or having external sources of trust and goodwill among their participants. LW is an example.
The ultimate explanation for the low savings rate seems to be that people are myopic. In other words, people implicitly care more about having stuff now, rather than later. But that too can’t be the full picture.
What makes you think that this isn’t the full picture (or most of the full picture)?
I only skimmed the article you linked, but I don’t think I agree with your characterization of anonymity—it’s not on the axis of openness, but the axes of freedom and safety. If someone is already let in, they may choose to be anonymous if they think anonymity helps them express things that are outside the box of their existing reputation and identity, or if they think anonymity is a defense against hostile actors who would use their words and actions against them.
I assume the person you’re talking about who made $100K is Vitalik. Vitalik knows much more about making Ethereum contracts work than the average person, and details the very complicated series of steps he had to take to get everything worked out in his blog post. There probably aren’t very many people who can do all that successfully, and the people who can are probably busy becoming rich some other way.
I could have imagined this was true a month ago, but then I spent about 15 total hours learning about Ethereum financial widgets, which was fun, and wrote it up into this post, and now I totally understand Vitalik’s steps, understand many of the possible risks underlying them, and could have confidently done something similar myself. Although I am probably unusually capable even among the LW readership, I think many readers could have done this if they wanted to.
Similarly, I don’t know anything about perpetual futures, but I guarantee that I could understand perpetual futures very clearly by tomorrow if you offered me $20k (or a 20% shot at $100k) to do it.
Having to think hard for a week to clearly understand something complicated, with the expectation that there might be money on the other end*, is definitely a convincing practical explanation for why rationalists aren’t making a lot of money off of schemes like this, but it’s not a good reason why they shouldn’t. Of course, many rationalists may not have enough capital for it to matter much, but many do.
*It’s not like these are otherwise useless concepts to understand, either.
That’s fair—I might have been a little hyperbolic, and I don’t mean to say that no other people care about kids’ short-term well-being. I was more pointing at the fact that if you look for discussions or advice about parenting decisions (e.g. what school to go to, how to interact with them, what activities they should do day-to-day), the majority of the focus will typically be on medium- and long-term effects (e.g. educational outcomes, behavioral training, physical and cognitive development), while ignoring the obvious direct effects on the kid, much like the example in the post about the benefits of tennis.
Parenting advice is an interesting example that might shed some light on what’s going on here. It’s 100% clear that as a parent you are “supposed” to care entirely for your kid’s long-term well-being, ignoring the short term, and short term considerations are only important as a kind of practical “you can only push so hard” issue. The more you manage to successfully optimize your kid for the long term, the better a parent you are in the eyes of society, and that’s all there is to it.
Humans do tend to favor the short term in most domains, sometimes to what seems like a stupid degree. It seems to me that in general, many direct effects are short-term effects, and many second-order effects are postulated long-term effects. So maybe exhorting about second-order effects and ignoring direct effects are really an attempt to get people to pay an appropriate amount of heed* to the long term. I certainly see that in some of your examples.
* As Eliezer memorably wrote in his meta-honesty post:
Because any rule that’s not labeled “absolute, no exceptions” lacks weight in people’s minds. So you have to perform that the “Don’t kill” commandment is absolute and exceptionless (even though it totally isn’t), because that’s what it takes to get people to even hesitate. To stay their hands at least until the weight of duty is crushing them down. A rule that isn’t even absolute? People just disregard that whenever.
I wrote a little at the bottom of my post. IMO probably the main “catch” is that a lot of what you’re getting paid is in governance tokens that may not hold their value, and it’s expensive in gas to be constantly claiming and selling them as you receive them.
I didn’t understand the connection he was drawing between causal modelling and flow.
It sounded like he was really down on learning mere correlations, but in nature knowing correlations seems pretty good for being able to make predictions about the world. If you know that purple berries are more likely to be poisonous than red berries, you can start extracting value without needing to understand what the causal connection between being purple and being poisonous is.
I didn’t understand why he thought his conditions for flow (clear information, quick feedback, errors matter) were specifically conducive to making causal models, or distinguishing correlation from causation. Did anyone understand this? He didn’t elaborate at all.
It would be even easier than that. Suppose for the sake of simplicity you put your whole million dollars of Bitcoin into Compound. You could then borrow USD up to the collateralization ratio set by Compound, e.g. if the collateralization ratio is 150%, you could borrow $666k of USD.
When Bitcoin goes up 1000x, your collateral is now worth a billion dollars, but you have still only borrowed $666k (plus interest). You have way more collateral than you need. So you could just withdraw 99.9% (minus interest) of the collateral, if you wanted. You don’t even have to repay anything to do that.
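To make the arithmetic above concrete (a sketch assuming the 150% ratio used in the example; real Compound parameters vary per asset):

```python
# Borrowing against collateral at a minimum collateralization ratio.
def max_borrow(collateral: float, ratio: float) -> float:
    """Largest loan the protocol allows against `collateral`."""
    return collateral / ratio

def withdrawable(collateral: float, borrowed: float, ratio: float) -> float:
    """Collateral you can pull out while staying at or above the ratio."""
    return collateral - borrowed * ratio

loan = max_borrow(1_000_000, 1.5)        # ~$666,667 borrowable against $1M

# After a 1000x, the collateral is worth $1B but the loan is unchanged,
# so nearly all of the collateral is free to withdraw without repaying.
free = withdrawable(1_000_000_000, loan, 1.5)
print(round(loan), round(free))          # the $666k loan leaves ~$999M free
```

The "withdraw 99.9%" claim falls straight out: only $1M of the $1B collateral is still needed to back the original loan.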
Well, here’s a scenario: Suppose you have owned Bitcoin since 2012 and now you have a million dollars of Bitcoin, but when you sell them you will have to pay ~$250,000 in capital gains tax, since you live in California. You still endorse owning the Bitcoin because you think it’s headed for the moon, but you want to use that capital for some other stuff, like buying a Tesla.
In this case you could deposit $200k of Bitcoin into Compound, withdraw $100k of USDC, cash the USDC out for dollars at Coinbase, and buy your Tesla. Now you are still exposed to the upside on all your Bitcoin but you also have your Tesla.
In practice I actually don’t know what kind of taxable event “taking out a loan at Compound” is so I am not sure this was a correct way to minimize taxes. (On the other hand, even if it’s not, I bet people are doing it; the IRS actually looks if you sell Bitcoin on Coinbase, but I don’t think it looks at what you’re doing with Compound.) But at least it definitely preserved your exposure to Bitcoin. That’s a way to “put it to good use” even when it just sits there.
This is one example. As I mentioned in the post, I’m not sure what is driving the majority of the demand.
This post is fascinating to me because I have no idea what to say. I keep coming back to it, reading the new comments, starting to type, and giving up. It seems like the kind of thing where I should be able to think “what would I do” and generate some opinion, but I have no idea what I would do, or what should have been done.
I would be very interested to see more people reflecting on situations like this and thinking about what should have been done. This was obviously an unusual event, but not so unusual that I don’t expect future events to resemble it in many ways.
I guess I expect the edge to manifest in being able to look at simple, high-upside, good ideas being implemented by smart people, correctly distinguish them from uninspired hacks, and then take effective action on your beliefs. (Good ideas like Bitcoin or Ethereum themselves, or like CFMMs.)
As a random example, if you go look at harvest.finance right now, you can see that there’s a huge APY on investing in Uniswap liquidity pools associated with tokenized versions of public company stock, like AAPL. These tokens are designed by a new project called Mirror which I know nothing about. Do they work? Is the project credible? Is there demand for this product?
It’s possible that if you spent a few hours looking into it, you could put a decent estimate on the probability that it will succeed or crater, and if success looks likely, you can make a lot of EV on these pools.
In the long term, if Ethereum (or some replacement) can continue scaling up, there’s just going to be more and more stuff that is begging to be reimplemented as interoperable tokenized contracts, and it’s all going to have to go from zero to sixty. If someone makes the Ethereum version of, like, Ebay, or Twitter, and you can recognize that it’s any good and invest in it, that’s a big edge.
I know, right?! Also, check out this article “Ethereum Is a Dark Forest” if you haven’t seen it yet.
Phil once told me about a cosmic horror that he called a “generalized frontrunner.” Arbitrage bots typically look for specific types of transactions in the mempool (such as a DEX trade or an oracle update) and try to frontrun them according to a predetermined algorithm. Generalized frontrunners look for any transaction that they could profitably frontrun by copying it and replacing addresses with their own. They can even execute the transaction and copy profitable internal transactions generated by its execution trace.
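As a rough sketch of that idea (the transaction format and the `simulate` helper here are invented for illustration; a real bot would execute candidate transactions against actual chain state rather than read a mock field):

```python
# Toy model of a "generalized frontrunner": copy any pending transaction
# that would be profitable if our address were substituted for the sender's.

def simulate(tx):
    # Stand-in for executing the transaction against current chain state
    # and measuring the sender's profit; here we just read a mock field.
    return tx.get("profit_if_executed", 0.0)

def generalized_frontrun(mempool, my_address, gas_cost):
    for tx in mempool:
        copied = {**tx, "sender": my_address}   # replace addresses with ours
        if simulate(copied) > gas_cost:         # still profitable after gas?
            # Outbid the original so miners order our copy first.
            return {**copied, "gas_price": tx["gas_price"] + 1}
    return None
```

The unsettling part is that this needs no understanding of *what* any transaction does: profitability is discovered purely by simulation, so even a novel, clever trade can be stolen the moment it hits the mempool.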
Thanks! I literally didn’t know how any of these things worked a week ago so I’m glad if more knowledgeable people approve.
I am a pretty serious chess player and among other things, chess gave me a clearer perception of the direct cost in time and effort involved in succeeding at any given pursuit. You can look at any chess player’s tournament history and watch as they convert time spent into improved ability at an increasingly steep rate.
As a result, I can confidently think things like “I could possibly become a grandmaster, but I would have to dedicate my life to it for ten or twenty years as a full-time job, and that’s not worth it to me. On the other hand, I could probably become a national master with several more years of moderate work in evenings and weekends, and that sounds appealing.” and I now think of my skill in other fields in similar terms. As a result, I am less inclined to, e.g. randomly decide that I want to become a Linux kernel wizard, or learn a foreign language, or learn to draw really well, because I clearly perceive that those actions have a quite substantial cost.