ThomasJ
Almost always, the people who say “I am going to keep going until this works, and no matter what the challenges are I’m going to figure them out”, and mean it, go on to succeed. They are persistent long enough to give themselves a chance for luck to go their way.
I’ve seen this quote (and similar ones) before. I believe that this approach is extremely flawed, to the point of being anti-rationalist. In no particular order, my objections are:
It is necessarily restricted to the people Altman knows. As a member of the social, technological, and financial elite, Altman associates with people who have an extremely high base rate for being successful relative to the general population (even relative to the general American population).
The “and mean it” opens the door to a No True Scotsman fallacy. The person didn’t succeed even though they said they wouldn’t give up? They must have not really meant it.
It gives zero weight to the expected value of the work. There are lots of people whose implicit strategy is “No matter my financial challenges, I am never going to give up playing the lottery every week until I get rich. If I run out of money I am going to figure out how to overcome that challenge so I can continue to buy lottery tickets.” More seriously, there are lots of important unsolved problems that humanity has been working on for multiple lifetimes without success. I am literally willing to bet against the success of anyone who believes in Altman’s quote and works on deciding if P=NP, finding a polynomial time algorithm for integer factorization, or similar problems.
It gives zero weight to opportunity cost. If the person wasn’t banging their head against whatever they were working on, they could probably switch to a better problem. Recognizing this, Silicon Valley simultaneously glorifies “Not Giving Up”, and “The Pivot”. One explanation for this apparent contradiction is that the true work that SV wants people to not give up on is “generating returns for investors.”
In general, it is suspicious that Altman’s advice aligns so perfectly with the behavior you would want if you were an angel or VC. That is, you would want the team to work as hard as possible to generate a return without giving up, ignoring opportunity costs, while the investor maintains the option to continue to invest or not. Note that no investor would say, “I will invest as much money as necessary into this startup until it works, and no matter what the challenges are we will figure out how to raise more money for them.”
A rationalist approach would evaluate the likelihood of overcoming known challenges, the likelihood that an unknown challenge would cause a failure, the expected value of the venture, and the opportunity costs, and then periodically re-evaluate to decide whether to give up or not. Altman’s advice to explicitly not do this is self-deceptive, magical thinking.
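To make that re-evaluation concrete, here is a toy sketch in Python. This is my own illustration, not anything from the quote: the function names and all inputs (probabilities, payoff, costs) are hypothetical estimates a founder would have to supply and refresh each period.

```python
# Toy model of the periodic re-evaluation described above: continue only if
# another period of effort has positive expected value relative to the best
# alternative use of the same time and money.

def should_continue(p_overcome_known, p_no_fatal_unknown, payoff,
                    cost_per_period, opportunity_cost_per_period):
    """Return True if continuing beats the best alternative this period.

    p_overcome_known:   estimated chance of beating the challenges you can see
    p_no_fatal_unknown: estimated chance no unseen challenge kills the venture
    payoff:             value of the venture if it succeeds
    """
    p_success = p_overcome_known * p_no_fatal_unknown
    ev_continue = p_success * payoff - cost_per_period
    return ev_continue > opportunity_cost_per_period

# A 5% shot at a 100x payoff is worth another period of grinding...
print(should_continue(0.5, 0.1, 100, 1, 2))   # True
# ...but a 1% shot is not, once the opportunity cost is counted.
print(should_continue(0.1, 0.1, 100, 1, 2))   # False
```

The point of the sketch is simply that “give up or not” falls out of ordinary expected-value bookkeeping once you include opportunity cost, rather than from a precommitment made in advance.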
I do agree that it increases the variance of outcomes. I think it decreases the mean, but I’m less sure about that. Here’s one way I think it could work, if it does work: If some people are generally pessimistic about their chances of success, and this causes them to update their beliefs closer to reality, then Altman’s advice would help. That is, if some people give up too easily, it will help them, while the outside world (investors, the market, etc) will put a check on those who are overly optimistic. However, I think it’s still important to note that “not giving up” can lead not just to lack of success, but also to value destruction (Pets.com; Theranos; WeWork).
Thanks for the “Young Rationalists” link, I hadn’t read that before. I think there are a fair number of successful rationalists, but they mostly focus on doing their work rather than engaging with the rationalist community. One example of this is Cliff Asness—here’s an essay by him that takes a strongly rationalist view.
I think I mis-pasted the link. I have edited it, but it’s supposed to go to https://www.aqr.com/Insights/Perspectives/A-Gut-Punch
Despite being a webcomic, I think this is a funny, legitimate, and scathing critique of the philosophic life and, to some extent, the philosophy of rationality.
I feel like I have all the things you state are required to have a huge edge, and yet...my edge is not obvious to me. Most of the money-making opportunities in DeFi seem to involve at least one of:
Strategies that look, at least on the surface, like market manipulation
Launching products that are illegal in the US, at least without tons of regulatory work (exchanges, derivatives platforms, new tokens, etc)
Taking on significant crypto beta risk (i.e., if the crypto market goes down, my investment drops as much as any other crypto investor’s)
Yield farming does look attractive, and I plan to invest some stablecoins in the near future.
It seems like this is a single building version of a gated community / suburb? In “idealized” America (where by idealized I mean somewhat affluent, morally homogeneous within the neighborhood, reasonably safe, etc), all the stuff you’re describing already happens. Transportation for kids is provided by carpools or by the school, kids wander from house to house for meals and play, etc. Families get referrals for help (housekeeping, etc) from other families, or because there are a limited number of service providers in the area. In general, these aren’t the hard things about having kids.
In my experience, here are the hard things:
The early months / years are miserable. The kid wakes you up in the middle of the night and won’t go back to sleep and you don’t know why. You’re in a constant state of sleep deprivation. This happened to me even though I had a night nanny for the first few months (which was hugely helpful, but did not completely eliminate the problem). I got off easier than my friends whose problems were bad enough that they finally hired a “sleep coach” (yes, this is a thing).
Your kid is sick, and you need to take care of them. You could outsource this if you had live-in help, but in practice there is a biological imperative to make you want to do the caretaking yourself.
Your kid has physical or mental issues. This doesn’t necessarily mean anything like they’re in a wheelchair or have severe learning disabilities, it could mean something like attention issues or delayed fine motor skills.
The kid needs almost constant supervision, particularly in the early years. Again you can outsource this to a limited extent (e.g., with daycare) but as a parent you want to spend some time with them (because if not, why have the kid at all?)
Even when things are going smoothly, there are significant coordination costs. Do you and your partner both need to stay late at work? Figure out who’s going to pick up the child from school (and make sure the school has all the appropriate forms allowing that person to pick up), arrange childcare for the night (will you be home early enough to put your kid to bed?), etc.
You finally got home and you’re dead tired. Unfortunately at 3am your kid wakes you up because they had a bad dream. This happens more than once per week, for various reasons.
There’s a trade-off between living in the best place for your work and living in the best place for your kid. Would it be better for you to live in the heart of Manhattan (or wherever) for your job and career socializing? Probably yes. Is it the best place to raise kids? Probably no.
You can never again give 110%. You know those crunch periods when you had to work 80 hours a week? You can’t do that anymore. No one else can actually replace you as a parent for your own kid. Or rather, they can, but you have to be aware that you’re now actively putting on the trade of “sell relationship with child.”
+1, CLion is vastly superior to VS Code or emacs/vi in capabilities and ease of setup, particularly for C++ and Rust
What is the most evil AI that we could build, today?
But if I had to use the billion dollars on evil AI specifically, I’d use the billion dollars to start an AI-powered hedge fund and then deliberately engineer a global liquidity crisis.
How exactly would you do this? Lots of places market “AI powered” hedge funds, but (as someone in the finance industry) I haven’t heard much about AI beyond things like regularized regression actually giving significant benefit.
Even if you eventually grew your assets to $10B, how would you engineer a global liquidity crisis?
Didn’t this basically happen with LTCM? They had losses of $4B on $5B in assets and a borrow of $120B. The US government had to force coordination of the major banks to avoid blowing up the financial markets, but meltdown was avoided.
Edit: Don’t pyramid schemes do this all the time, unintentionally? Like, Madoff basically did this and then suddenly (unintentionally) defaulted.
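The leverage arithmetic implied by the LTCM numbers above, as a rough sketch (it ignores how the positions were actually marked and unwound, and uses only the figures already cited: $5B of capital, $120B borrowed, $4B of losses):

```python
# Rough leverage arithmetic for the LTCM figures cited above.
equity = 5.0                 # $B of capital
borrow = 120.0               # $B borrowed
positions = equity + borrow  # $125B of gross positions

leverage = positions / equity            # 25x gross leverage
print(leverage)                          # 25.0

loss = 4.0
equity_after = equity - loss             # $1B of equity left
leverage_after = (positions - loss) / equity_after
print(equity_after, leverage_after)      # 1.0 121.0
```

At 25x leverage, a loss of a few percent on gross positions nearly wipes out the equity, and the surviving leverage ratio explodes—which is why forced unwinds at that scale threaten market-wide liquidity.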
Above 99% certainty:
Run inference at reasonable latency (e.g. < 1 second for text completion) on a typical home gaming computer (i.e. one with a single high-powered GPU).
>75% confidence: No consistent strong play in a simple game of imperfect information (e.g. battleship) for which it has not been specifically trained.
>50% confidence: No consistent “correct” play in a simple game of imperfect information (e.g. battleship) for which it has not been specifically trained. “Correct” here means making only valid moves and no useless moves. For example, in battleship a useless move would be attacking the same grid coordinate twice.
>60% confidence: Bad long-term sequence memory, particularly when combined with non-memorization tasks. For example, suppose A=1, B=2, etc. What is the sum of the characters in a given page of text (~500 words)?
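To pin down the letter-sum task just described, here is the computation itself as a short Python sketch (my own illustration; the A=1, B=2, … mapping is from the prediction above). It is trivial for code, which is exactly what makes it a clean probe of a model's exact long-sequence bookkeeping.

```python
# Map A=1, B=2, ..., Z=26 and sum the letters of a text, ignoring
# everything that isn't a letter. A model must carry an exact running
# total across the whole page of text to match this.

def letter_sum(text):
    """Sum of alphabet positions of the letters in `text`, case-insensitive."""
    return sum(ord(c) - ord('a') + 1 for c in text.lower() if c.isalpha())

print(letter_sum("abc"))    # 6  (1 + 2 + 3)
print(letter_sum("Cab"))    # 6  (case-insensitive)
print(letter_sum("a b!"))   # 3  (non-letters ignored)
```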