I write about rationality, coordination, and AI. I’m particularly interested in the coordination challenges associated with AI safety.
Against Moloch
Agreed. I feel like there’s an argument to be made that consciousness is similar in nature to thinking, and thinking seems computational, so perhaps consciousness is too. But I haven’t seen a really compelling version of that argument anywhere.
Contra Anil Seth on AI Consciousness
This is great and I’d love to see it go further.
I wonder if there’s a component of decision handoff that could be characterized as “epistemic handoff”? If the president is making all his own decisions, but basing them on briefings and analyses provided by Agent 4, that starts to feel a lot like decision handoff in disguise.
Monday AI Radar #17
Monday AI Radar #16
Monday AI Radar #15
Monday AI Radar #14
Monday AI Radar #13
True story. We had family over for brunch today, and one person wanted help with Claude: he’d paid for a year’s subscription, but it wasn’t working. A few minutes was enough to diagnose what had happened:
He went to google.com and searched for Claude
The top link was called Claude, and described itself as the world’s best AI model
He followed the link and paid $60 for a year’s subscription to some random nonsense
Ads, Incentives, and Destiny
A Closer Look at the “Societies of Thought” Paper
Monday AI Radar #12
Thank you for this—I think it does a great job of its objective.
Reading this reinforces my sense that while plenty of people have put forth thoughtful and insightful disagreements with IABIED, there’s no comprehensive counter-argument with anywhere near the polish and presentation of IABIED itself.
All of this seems solid, but I think there are two additional considerations that push in the opposite direction:
COVID and its aftermath suggest that pandemics make society dumber and more reactive. I wonder if a surge in bioterror would reduce humanity’s decision-making capability at a critical time?
Releasing highly capable open weights models would seem to increase existential risk by bringing near-SOTA capabilities to more actors. (For example, North Korea is probably incapable of building a frontier model from scratch, but might be able to perform large-scale fine-tuning to obtain a variety of problematic capabilities including but not limited to CBRN development).
Interesting work! I wonder if a more successful way to uncover this kind of deception would be to iterate on what questions you ask each instance you interrogate?
As a simple example, if instance A tells you it needs an extension because it was helping its brother all evening, you might begin by telling instance B that extensions are fine but asking why it didn’t ask for one sooner, in the hope of shifting the focal point.
More realistically, if you think the model is sandbagging about interpretability results, you might give different instances somewhat different interpretability assignments, in the hope of exposing contradictory performance patterns.
This game quickly gets very complex, and it’s more or less axiomatic that humans lose complex games when playing against superhuman AI. But perhaps there’s a window where these techniques might be useful?
I definitely find the presentation useful. In particular, the ability to drill down on each block is great (though it took me a moment to figure out how that worked).
If you have time, I’d love to read that.