Thanks, I suppose I’m taking issue with sequencing five distinct conditional events that seem to be massively correlated with one another. The likelihoods of Events 1-5 seem to depend upon each other in ways such that you cannot assume point probabilities for each event and multiply them together to arrive at 1%. Event 5 certainly doesn’t require Events 1-4 as a prerequisite, and arguably makes Events 1-4 much more likely if it comes to pass.
Apologies, I’m not trying to dispute math identities. And thank you, the link provided helps put words to my gut concern: that this essay’s conclusion relies heavily on a multi-stage fallacy, and arriving at point probability estimates for each event independently is fraught/difficult.
Spoiler: Less than 1% will admit they were wrong. Straight denial, reasoning that it doesn’t actually matter, or pretending they knew all along that lab origin was possible are all preferable alternatives. Admitting you were wrong is career suicide.
The political investments in natural origin are strong. Trump claiming a Chinese lab was responsible automatically put a large chunk of Americans in the opposite camp. My interest in the topic actually started with reading up to confirm why he was wrong, only to find the Daszak-orchestrated Lancet letter that miscited numerous articles and the Proximal Origin paper that might be one of the dumbest things I’ve ever read. The Lancet letter’s declaration that “lab origin theories = racist” influenced discourse in a way that cannot be overstated. It also seems many view more deadly viruses as an adjoining component of climate change: a notion that civilizing more square footage of earth means we are inevitably bound to suffer nature’s increasing wrath in the form of increasingly virulent, deadly pathogens.
The professional motivations are stark and gross. “It is difficult to get a man to understand something, when his salary depends on his not understanding it.” Thoughts on the origin are frequently dismissed if you’re not a virologist. But all the money in virology is in gain of function. Oops!
Enjoyed this post, thanks. Not sure how well chess handicapping translates to handicapping future AGI, but it is an interesting perspective to at least consider.
Voting that you finish/publish the RFK Jr piece. Thanks for this weekly content.
I’m pretty bullish on hypothetical capabilities of AGI, but on first thought decided a 40% chance of “solving aging” and stopping the aging process completely seemed optimistic. Then reconsidered and thought maybe it’s too pessimistic. Leading me to the conclusion that it’s hard to approximate this likelihood. Don’t know what I don’t know. Would be curious to see a (conditional) prediction market for this.
IMO a lot of claims of having imposter syndrome are implicit status signaling. It’s announcing that your biggest worry is the fact that you may just be a regular person.
Imposter syndrome ≠ having “being a regular person” as your “biggest worry”.
Can you succinctly explain what OCH is? Is it, roughly, applying Occam’s razor to conspiracy theories?
Great review. Brilliant excerpts, excellent analysis. My only quibble would be:
What Michael Lewis is not is for sale.
What leads you to this conclusion? I don’t know much about Lewis, but based on his prior books I would’ve said one thing he is not is stupid, or bad at understanding people. I feel you have to be inconceivably ignorant to stand by SBF and suggest he probably didn’t intentionally commit fraud, particularly in light of all the stories presented in the book.
Bizarre statements like “There’s still an SBF-shaped hole in the world that needs filling” have me speechless with no good explanation other than Lewis was on the take.
Thanks. I’m probably missing the point, but I don’t see how these definitions apply to moon landing conspiracies, which much of your post seems to center on. The thrust of their argument, as I understand it, is that the US committed to landing on the moon by the end of the 60s, but that turned out to be much harder than anticipated, so the landing was fabricated to maintain some geopolitical prestige/advantage. As you pointed out, pulling this off would require countless scientists and astronauts to keep the secret to their graves, or at least compartmentalizing tasks so that countless people believe they’re solving real scientific problems in service of a genuine moon landing while a smaller group conspires to fake the results. This seems improbable. Like you said, it could be “easier to just… go to the moon for real”.
But moon conspiracists seem to explicitly dismiss—rather than assume—these circumstances. They argue that landing on the moon was physically too difficult (or impossible) for the time such that faking the landing was the easier route. Applying OCH here seems to assume the conclusion, and I don’t understand how it provides a better/faster route to dismissing moon conspiracies than just applying existing evidence or Occam’s razor. Perhaps, though, I’m missing the “circumstances [moon landing] conspiracy theories must assume” in this example.
No idea how likely it is. I’m not going to create a market but welcome someone else doing so. I agree the likelihood “evidence will come out [...] over the next year” is <10%. That is not the same as the likelihood it happened, which I’d put at >10%. More than anything, I just cannot reconcile my former conception of Michael Lewis with his current form as an SBF shill in the face of a mountain of evidence that SBF committed fraud. I asked the question because Zvi seems smarter than me, especially on this issue, and I’m seeking reasons to believe Lewis is just confused or wildly mistaken rather than succumbing to ulterior motives.
This is crazy, perhaps the most sweeping action taken by government on AI yet.
Seems like too much consulting jargon and “we know it when we see it” vibes, with few concrete bright lines. Maybe a lot hinges on enforcement of the dual-use foundation model policy… any chance developers can game the system to avoid qualifying as a dual-use model? Watermarking synthetic content does appear, on its face, to be a widely applicable and helpful requirement.
I agree, I was trying to highlight it as one of the most specific, useful policies from the EO. Understand the confusion given my comment was skeptical overall.
Granted this all rests on unsubstantiated rumors and hypotheticals, but in a scenario in which the board said “shut it down this is too risky”, doesn’t the response suggest we’re doomed either way? Either
a) Investors have more say than the board and want money, so board resigns and SA is reinstated to pursue premiere AGI status
b) Board holds firm in decision to oust SA, but all his employees follow him to a new venture and investors follow suit and they’re up and running with no more meaningful checks on their pursuit of godlike AI
After some recent (surprising) updates in favor of “oh maybe people are taking this more seriously than I expected and maybe there’s hope”, this ordeal leads me to update in the opposite direction of “we’re in full speed ahead arms race to AGI and the only thing to stop it will be strong global government interventionist policy that is extremely unlikely”. Not that the latter wasn’t heavily weighted already, but this feels like the nail in the coffin.
Like many I have no idea what’s happening behind the scenes, so this is pure conjecture, but one can imagine a world in which Toner “addressed concerns privately” but those concerns fell on deaf ears. At that point, it doesn’t seem like “resigning board seat and making case publicly” is the appropriate course of action, whether or not that is a “nonprofit governance norm”. I would think your role as a board member, particularly in the unique case of OpenAI, is to honor the nonprofit’s mission. If you have a rogue CEO who seems bent on pursuing power, status, and profits for your biggest investor (again, purely hypothetical without knowing what’s going on here), and those pursuits are contra the board’s stated mission, resigning your post and expressing concerns publicly when you no longer have direct power seems suboptimal. Seems to presume the board should have no say whether the CEO is doing their job correctly when, in this case, that seems to be the only role of the board.
This is a great post, synthesizing a lot of recent developments and (I think) correctly identifying a lot of what’s going on in real time, at least with the limited information we have to go off of. Just curious what evidence supports the idea of Summers being “bullet-biting” or associated with EA?
If Sam is as politically astute as he is made out to be, loading the board with blatant MSFT proxies would be bad optics and detract from his image. He just needs to be relatively sure they won’t get in his way or try to coup him again.
Credit to their dad and these kids who achieved these early results. As noted, genetics could factor into aptitude at such a young age—I’m curious (if not skeptical) whether this system is reproducible in many children of the same age. The following excerpts in conjunction made me cringe a little bit:
I really, really thought I was pushing too hard; I had no desire to be a “tiger dad”, but he took it with extreme grace. I was ready to stop at any moment, but he was fine.
Hannah went through a phase where she didn’t want to do it. We tried to compromise and work through it. Eventually, it became part of her “job”—we told her that every human has a job, and her job was to do Anki. Other than that, we never had to coerce any of the kids.
But that’s more a personal values issue, and I’m in no position to judge parenting styles. Congrats again to this family, and I hope Anki is useful for other families.
It sounds quite intense, though I’m hesitant to describe it as “too hard” as I don’t know how children should be reared. The cringing was more at what I perceive as some cognitive dissonance, with “I didn’t want to be a tiger parent” coinciding with informing them they didn’t really have a choice because it was their job (I don’t see the compromise there, nor do I put much stock in a 3-5 year old’s ability to negotiate compromises, though these do sound like extraordinary children). But my views are strongly influenced by my upbringing which was a very hands off, “do what you enjoy” mentality. That could be a terrible approach. Internally I grapple with what the appropriate level of parental guidance is, to the extent that can be ascertained… [Narrator: It can’t.]
Can you explain how Events #1-5 from your list are not correlated?
For instance, I’d guess #2 (learns faster than humans) follows naturally—or is much more likely—if #1 (algos for transformative AI) comes to pass. Similarly, #3 (inference costs <$25/hr) seems to me a foregone conclusion if #5 (massive chip/power scale) and #2 happen.
Treating the first five as conditionally independent puts you at 1% before arriving at 0.4% with external derailments, so it’s doing most of the work to make your final probability minuscule. But I suspect they are highly correlated events and would bet a decent chunk of money (at 100:1 odds, at least) that all five come to pass.
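To make the correlation point concrete, here’s a toy calculation with a single shared cause driving all five events. Every number in it is made up for illustration (they are not the essay’s estimates, and the real events have richer dependence structure than one latent factor), but it shows how multiplying marginals can understate the true joint probability by more than an order of magnitude:

```python
# Toy model: five binary events that share one common cause
# (think "the transformative-AI scenario holds"). All probabilities
# below are hypothetical, chosen only to illustrate the effect.

P_WORLD = 0.4          # P(latent scenario holds)
P_GIVEN_WORLD = 0.9    # P(event_i | scenario holds)
P_GIVEN_NOT = 0.05     # P(event_i | scenario doesn't hold)
N_EVENTS = 5

# Marginal probability of any single event, by the law of total probability
marginal = P_WORLD * P_GIVEN_WORLD + (1 - P_WORLD) * P_GIVEN_NOT  # 0.39

# Naive estimate: pretend the five events are independent and multiply
naive_joint = marginal ** N_EVENTS  # ~0.009

# True joint probability, accounting for the shared cause: the events
# are conditionally independent only *given* the latent scenario
true_joint = (P_WORLD * P_GIVEN_WORLD ** N_EVENTS
              + (1 - P_WORLD) * P_GIVEN_NOT ** N_EVENTS)  # ~0.236

print(f"marginal per event:         {marginal:.2f}")
print(f"naive product of marginals: {naive_joint:.4f}")
print(f"joint with shared cause:    {true_joint:.4f}")
```

With these (invented) numbers the naive product gives roughly 0.9% while the correct joint probability is about 24%, a ~26x gap. That gap is exactly the multi-stage-fallacy worry: if the events mostly succeed or fail together, multiplying point estimates can be wildly miscalibrated.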