Google AI PM; Foundation board member
Dave Orr
Re using ML on audio of a forced cough to detect COVID: my prior here is that nearly all impressive new results in the ML space fail to work in the real world. Translating from the lab to reality is just really hard in ML, especially once you are talking about things like audio, which vary a lot depending on the environment and mic.
All of which is to say someone should definitely build an app, but it’s far from a slam dunk that it’ll work.
The good news is that according to the paper, they are working with someone already:
“To that end, we have reached an agreement with a Fortune 100 company to demonstrate the value of our tool as part of their COVID-19 management practices.”
Most of the value here will be in the third world, I think—by the time this is ready to go we’ll have access to vaccines in first world countries.
I wouldn’t go that far. Instead I would say that it could be right but the evidence is much weaker than I thought it was 8 years ago.
A few things that convinced me:
A slatestarcodex series on mindset:
"A Pox on Growth Your Houses" (Scott is really good at titles) and the response
A large preregistered study on growth mindset maybe showed a small effect but nothing like what proponents claimed.
And generally my trust in the kinds of small n social science experiments that were the backbone of growth mindset research and similar things like priming is just much lower—my prior is now that those things are mostly noise, and so the evidence needs to be stronger to overcome that, where that wasn’t true 8 years ago.
That’s a fantastic one! I totally agree, and we do this as well.
If anyone knows how to use money to speed up vaccine delivery I’d love to know. I might be able to quickly allocate something like $5-20M but I have no idea who to work with to do it. CA would be easiest. Also easier if it’s in a poor community like the central valley but honestly any leads would help.
You could make that less parochial by rephrasing to something like:
There is a worldwide rise in nationalism and populism and corresponding rejection of globalism, leading to worse leaders. This rise is poorly understood by elites, which lessens hope that this trend is going away soon.
He gets a tax benefit due to timeshifting. If he puts stock into the foundation he gets the tax benefit immediately even though the foundation will pay out that money over time. In return he has given up some control and is legally obligated to give away 5% every year.
It’s definitely not the only way. Zuck’s equivalent is an LLC, which is less tax efficient but more flexible.
So actually the best outcome for the US is to ship with regular syringes, but then also ensure a ready supply of low dead space syringes so that those can actually be used?
The reinfection rates for the SA variant are indeed concerning. Do we have any data on whether previous infection prevents deaths or severe infection? The vaccines in general seem to do a great job of stopping the really bad outcomes regardless of how well they do on preventing all infections, so possibly something similar could be going on with the SA variant. Any data either way?
I think picking a weekday is better than a Sunday, because most of the influence will come from media coverage. A media cycle is easier to start during the week than on a weekend.
One thing we did when the kids were small was called rose/bud/thorn. Each of us says something good that happened that day (the rose), something bad (the thorn), and something we were looking forward to (the bud). Sort of a starter gratitude journaling exercise.
No idea if it did anything useful, of course. Parenting is like that.
The Kelly Criterion maximizes the growth of your bankroll over time. This is probably not actually the goal that you personally have for wealth, because of the nonlinearity of money. You (if you’re like everyone else) care much more about preserving wealth, once you have some, than you do about growing it.
Some of this might be loss aversion, but mostly this is right—going from $1M to $2M is nice but far from a doubling in your happiness or ability to do things; going from $1M to zero is a disaster. Kelly doesn’t take that into account, except in the purely mathematical way that if you literally go to zero you can’t make any more bets (which never happens).
For this reason, professional gamblers I know tend to bet half-Kelly to balance out bankroll preservation with growth. (Source: used to be a pro poker player.)
On the flip side, if you have another source of income, you can bet more aggressively. For instance, if you have a job that generates positive savings, you can count savings you haven't earned yet as part of your bankroll for Kelly purposes. This is a huge advantage pure pro gamblers don't have. You probably don't want to be too aggressive there, and how much to count will depend on the stability and/or fungibility of your income. A year or two of savings could be appropriate.
None of this should change your bottom line that you should take +EV longshot bets if you’ve been passing on them, just how much you should bet.
I agree with that, but I think that utility is not even log linear near zero.
The way pro gamblers do this is: figure out how big your edge is, then bet that much of your bankroll. So if you’re betting on a coin flip at even odds where the coin is actually weighted to come up heads 51% of the time, your edge is 2% (51% win probability − 49% loss probability) so you should bet 2% of your bankroll each round.
I guess whether this is easier or harder depends on how hard it is to calculate your edge. Obviously trivial in the “flip a coin” case but perhaps not in other situations.
At this point I will admit that my gambling days were focused on poker, and Kelly isn’t very useful for that.
But here’s the formula as I understand it: bet fraction = EV/odds, where odds are expressed as a multiple of one. So for the coinflip case we’re disagreeing about, EV is .02, odds are 1, so you bet .02 of your bankroll.
If instead you had a coinflip with a fair coin where you were paid $2 on a win and lose $1 on a loss, your EV is .5/flip, odds are 2, so bet 25%.
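Both examples above can be checked with a few lines of Python. This is just a sketch of the standard Kelly formula f = (bp − q)/b; the function name is mine, not from the original discussion:

```python
def kelly_fraction(p_win: float, odds: float) -> float:
    """Fraction of bankroll to bet per the Kelly criterion.

    p_win: probability of winning the bet.
    odds:  net payout as a multiple of the stake (even money = 1).
    """
    p_lose = 1.0 - p_win
    return (odds * p_win - p_lose) / odds

# Weighted coin at even odds: 51% to win, paid 1:1.
print(round(kelly_fraction(0.51, 1), 4))  # 0.02 -> bet 2% of bankroll

# Fair coin paid 2:1: win $2, lose $1.
print(kelly_fraction(0.50, 2))            # 0.25 -> bet 25%

# Half-Kelly, which the pros above prefer, just halves the fraction.
print(round(kelly_fraction(0.51, 1) / 2, 4))  # 0.01
```

Note that at even odds (odds = 1) the formula collapses to p_win − p_lose, which is why "bet your edge" works in the simple coinflip case.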
It’s too cumbersome and only addresses part of the issue. Kelly more or less assumes that you make a bet, it gets resolved, now you can make the next bet. But in poker, with multiple streets, you have to think about a sequence of bets based on some distribution of opponent actions and new information.
Also with Kelly you don’t usually have to think about how the size of your bet influences your likelihood to win, but in poker the amount that you bluff changes both the probability of the bluff succeeding (people call less when you bet more) and the amount you lose if you’re wrong. Or if you value bet (meaning you want to get called), then betting more means they call less but you win more when they do. Again, vanilla Kelly doesn’t really work.
I imagine it could be extended, but instead people have built more specialized frameworks for thinking about it that combine game theory with various stats/probability tools like Kelly.
The Math of Poker, written by a couple of friends of mine, might be a fun read if you’re interested. It probably won’t help you to become a better poker player, but the math is good fun.
I think some markets are basically efficient and very difficult to beat. The public stock market is one. I’m not convinced by the AI example basically due to priors—we’ve seen many many people claim to be able to beat the public markets without special information, with evidence that seems much more convincing than this, and they are on average wrong. So I don’t think at least this argument overcomes my priors.
However less liquid markets are for sure beatable. The prediction markets around the election are one. Crypto is another—I personally have done well not just investing in crypto but by co-founding a hedge fund that has actively traded crypto for 3 years, many trades per day, making a trading profit (earning alpha) on 1081/1093 days. (And the losing days were all very small, each well below a day’s average profits.)
I also sit on an investment committee for an endowment and see what returns can look like in private markets where it’s possible to have a high informational advantage and turn that into outsized returns.
So to me, the EMH is mostly true for highly liquid highly accessible markets. But for illiquid, less accessible, lower information markets, there is money to be made for people willing to put in the effort.
Whether it’s worth the opportunity cost is another question; it’s not like it’s hard to make money in lots of ways if you are motivated and smart. Crypto is a fun hobby for me, like poker used to be, and I like to make money from my hobbies. Not everyone wants to spend their free time looking for EV in weird places.
I wonder how it would be received if we applied the same reasoning to humans and animals.
Humans can (and in fact do) undergo a lot of suffering. If we could identify people who are likely to suffer greatly, should we put in sentience throttling so that they don’t feel it? Seems very Brave New World.
How about with animals? If we could somehow breed chickens that are identical to current ones except that they don’t feel pain or suffering, would that make factory farming ethical? Here the answer might be yes, though I’m not sure the animal rights crowd would agree.
If you buy from a retailer, you are paying in time as well as money. This is a good deal for people who have relatively more time than money. If you buy from a scalper, you are substituting money for the time component, which is good for people who value their time more highly.
Therefore scalpers are shifting supply from people who have more time to people who have more money. This is likely moving supply from middle class people to rich(er) people.
If you’re in the set of people with more time than money, which is most people, I can see being upset. Scalpers arguably substantially increase your time-to-PS5, because previously you weren’t competing with someone like me, who doesn’t have time to spare to track inventory and call around but has plenty of money. Scalping adds consumers to a pool they weren’t in before.
I think that you need to consider both precision and recall of your interview process. The standard interview process is optimized for precision—you want to be as sure as possible that the people you identify as good are actually good. This is in part because it’s very expensive to fix a hiring mistake, and also because the candidate pool is very bad. The good candidates get hired and keep jobs, and the bad candidates keep interviewing.
If you come up with a new process that has higher recall (can find Bob when the typical process doesn’t), unless you’ve invented something that dominates the typical process, you’re going to get a bunch of false positives and end up hiring people you think are Bobs but are actually bad.
TL;DR your post focuses on recall (avoiding false negatives) but in reality precision (avoiding false positives) is much more important because the candidate pool is mostly terrible.
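The base-rate effect above can be made concrete with Bayes' rule. This sketch (the numbers are illustrative assumptions, not from the original post) shows how the same interview process yields very different precision depending on how good the candidate pool is:

```python
def interview_precision(base_rate: float, sensitivity: float, specificity: float) -> float:
    """P(candidate is actually good | they passed the interview), via Bayes' rule.

    base_rate:   fraction of the candidate pool that is actually good.
    sensitivity: P(pass | good)  -- recall of the process.
    specificity: P(fail | bad)   -- how reliably bad candidates are screened out.
    """
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# A process that passes 80% of good candidates and rejects 90% of bad ones:
print(round(interview_precision(0.50, 0.80, 0.90), 2))  # 0.89 when half the pool is good
print(round(interview_precision(0.10, 0.80, 0.90), 2))  # 0.47 when only 10% are good
```

With a mostly-bad pool, even an accurate process hires nearly as many false Bobs as real ones, which is why precision dominates the design.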