Liron
Actually, the only time I know of them cashing in early was when they sold half their Coinbase shares at the direct listing, after holding for 7 years.
Their racket was to be the #1 crypto fund with the most assets under management ($7.6B total) so that they could collect the most management fees (probably about $1B total). It’s great business for a16z to be in the sector-leader AUM game even when the sector makes no logical sense.
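For reference, here’s the rough fee math behind that ~$1B estimate, assuming the industry-standard ~2%/year management fee and a ~10-year fund life (my assumptions, not disclosed a16z terms):

```latex
% Back-of-the-envelope lifetime management fees on $7.6B of AUM,
% assuming a standard ~2%/yr fee over a ~10-year fund life
% (assumed terms, not disclosed a16z figures).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
  \underbrace{\$7.6\,\text{B}}_{\text{AUM}}
  \times \underbrace{2\%/\text{yr}}_{\text{mgmt fee}}
  \times \underbrace{10\,\text{yr}}_{\text{fund life}}
  \approx \$1.5\,\text{B}
\]
\end{document}
```

Fees typically step down after the investment period, which pulls the realized total from that upper bound down toward the ~$1B ballpark.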
I’m just saying Marc’s reputation for publicly making logically flimsy arguments and not updating on evidence should be considered when he enters a new area of discourse.
I encourage you to look into his firm’s Web3 claims and the reasoning behind them. My sibling comment has one link that is particularly egregious and recent. Here’s another badly reasoned Web3 argument made by his partner (which implies Marc’s endorsement), and here’s the time his firm invested over $100M in an obvious Ponzi scheme.
My #1 and #2 are in a separate video Marc made after the post Zvi referred to, but yeah, they could fall under the “bizarrely poor arguments” Zvi is trying to explain.
My #3, and his firm’s various statements about Web3 over the last couple of years (like this recent gaslighting), are additional examples of bizarrely poor arguments in an unrelated field.
If we don’t come in with an a priori belief that Marc is an honest or capable reasoner, there’s less confusion for Zvi to explain.
My model is that Marc Andreessen just consistently makes badly-reasoned statements:
Last year being unable to coherently explain a single Web3 use case despite his firm investing $7.6B in the space
I’ve personally been using “AI Doom” as the topic identifier, since it’s clear and catchy and won’t be confused with smaller issues.
Great post! Agree with everything. You came at some points from a unique angle. I especially appreciate the insight that “most of the useful steering work of a system comes from the very last bits of glue code”.
Bravo.
Which 2+ outcomes from the list do you think are most likely to lead to your loss?
It seems from your link like CFAR has taken responsibility, taken corrective action, and stated how they’ll do everything in their power to avoid a similar abuse incident in the future.
I think in general the way to deal with abuse situations within an organization is to identify which authority should be taking appropriate disciplinary action regarding the abuser’s role and privileges. A failure to act there, like CFAR’s admitted process failure that they later corrected, would be concerning if we thought it was still happening.
If every abuse is being properly disciplined by the relevant organization, and the rate of abuse isn’t high compared to the base rate in the non-rationalist population, then the current situation isn’t a crisis—even if some instances of abuse unfortunately involve the perpetrator referencing rationality or EA concepts.
Great post! I agree with this analogy.
I think the fire stands for value creation. My Lean MVP Flowchart post advises always orienting your strategy around what it’ll take to double the size of your current value creation. Paul Graham’s Do Things That Don’t Scale is a coarse-grained version of this advice, pointing out that doubling a small fire is qualitatively different from doubling a large fire.
I guess that’s plausible, but then my main doom scenario would involve them getting leapfrogged by a different AI that has hit a rapid positive-feedback loop of amplifying its own consequentialist planning abilities.
My reasoning stems from believing that AI-space contains designs that can easily plan effective strategies to get the universe into virtually any configuration.
And they’re going to be low-complexity designs, because engineering stuff in the universe isn’t a hard problem from a complexity-theory perspective.
Why should the path from today to the first instantiation of such an algorithm be long?
So I think we can state properties of an unprecedented future that first-principles computer science can constrain, and historical trends can’t.
I think the mental model of needing “advances in chemistry” isn’t accurate about superintelligence. I think a ton of understanding of how to precisely engineer anything you want out of atoms just clicks from a tiny amount of observational data when you’re really good at reasoning.
I don’t know if LLM Ems can really be a significant factorizable part of the AI tech tree. If they have anything like today’s LLM limitations, they’re not as powerful as humans or ems. If they’re much more powerful than today’s LLMs, they’re likely to have powerful submodules that are qualitatively different from what we think of as LLMs.
I agree that rapid capability gain is a key part of the AI doom scenario.
During the Manhattan Project, Feynman prevented an accident by pointing out that labs were storing too much uranium too close together. We’re not just lucky that the accident was prevented; we’re also lucky that if the accident had happened, the nuclear chain reaction wouldn’t have fed on the atmosphere.
We similarly depend on luck whenever a new AI capability gain, such as LLM general-topic chatting, emerges. We’re lucky that it’s not a capability that can feed on itself rapidly. Maybe we’ll keep being lucky when new AI advances happen, and each time it’ll keep being more like past human economic progress or like past human software development. But there’s also a significant chance that it could instead be more like a slightly-worse-than-nuclear-weapon scenario.
We just keep taking next steps of unknown magnitude into an attractor of superintelligent AI. At some point our steps will trigger a rapid positive-feedback slide where each step is dealing with very powerful and complex things that we’re far from being able to understand. I just don’t see why there’s more than a 90% chance that this will proceed at a survivable pace.
My commentary on this grew into a separate post: Contra Hanson on AI Risk
Robin Hanson’s latest AI risk position statement
I tweeted my notes of Eliezer’s points with abridged clips.
FWIW I’ve never known a character of high integrity who I could imagine writing the phrase “your career in EA would be over with a few DMs”.