A random observation from a think tank event last night in DC—the average person in those rooms is convinced there’s a problem, but thinks it’s the near-term harms, the AI ethics stuff, etc. The highest-status and highest-ranking people in those rooms seem to be much more concerned about catastrophic harms.
This is a very weird set of selection effects. I’m not sure what to make of it, honestly.
One item that I think is missing from this list is what you might call “ritual”—agreed-upon ways of knowing what to do in a given context, such that two members of an organization can have shared mental models whether or not they’ve worked together in the past. This allows you to scale trust by reducing the amount of trust needed to handle the same amount of ambiguity, at some loss of flexibility.
For example, when I was at McKinsey, calling a meeting a “problem solving” as opposed to a “status update” or a “steerco” would invoke three distinct sets of behaviors and expectations. As a result, each participant had some sense of what the others would, by default, expect the meeting to feel like and be, and so even participants who hadn’t worked much with each other in the past could know how to act in a trust-building way in that meeting context. The flip side is that if the meeting needed something very different from the normal behaviors, it became slightly harder to break out of the default mode.
I would politely but urgently suggest that if you’re thinking a lot about scenarios where you could justify suicide, you might not be as interested in the scenarios as in the permission you think they might give you. And you might not realize that! Motivated reasoning is a powerful force for folks who are going through some mental troubles.
This is the sort of thing where checking in with a loved one about how they perceive your general affect and mood is a really good idea. I urge you to do that. You’re probably fine and just playing with some abstract ideas, but why not check in with a loved one just in case?
One of the things I greatly enjoyed about this writeup is that it reminded me how lovely the “empty-plate” vibe was, and that it’s something I want to try to create more of in my own day-to-day.
Tangible specific action: I have been raving about how much I loved the Lighthaven supply cabinets. I literally just now purchased a set of organizers shaped for my own bookcases so I can recreate a similar thing in my own home; thank you for the reminder that prompted me to do this.
I would like to politely request that, if you happen to have the chance, you tell Leo’s owner on my behalf that Leo is clearly a very happy dog who feels loved.
I am fairly skeptical that we don’t already have something close enough to approximate this, if we had access to all the private email logs of the relevant institutions matched to some sort of “when this led to an outcome” metric (e.g., when the relevant preprint paper or strategy deck or whatever was released).
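To make that concrete, here’s a minimal sketch of the kind of matching I have in mind, with toy data and invented column names (none of this reflects any real dataset or institution):

```python
import pandas as pd

# Invented toy data: weekly email volume for one institution, plus the
# dates on which an "outcome" (a preprint, a strategy deck) shipped.
emails = pd.DataFrame({
    "week": pd.date_range("2024-01-01", periods=8, freq="W"),
    "email_count": [120, 180, 260, 300, 150, 140, 310, 330],
})
outcomes = pd.to_datetime(["2024-02-04", "2024-03-03"])

# Flag the weeks that fall within the two weeks before an outcome date.
emails["pre_outcome"] = emails["week"].apply(
    lambda w: any(0 <= (d - w).days <= 14 for d in outcomes)
)

# Crude proxy: is email volume elevated in the run-up to outcomes?
print(emails.groupby("pre_outcome")["email_count"].mean())
```

This is obviously the crudest possible version of the metric; the point is just that the raw ingredients (timestamps on both sides) already exist in those archives.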
You know, you’re not the first person to make that argument to me recently. I admit that I find it more persuasive than I used to.
Put another way: “will AI take all the jobs” is another way of saying* “will I suddenly lose the ability to feed and protect those I love.” It’s an apocalypse in microcosm, and it’s one that doesn’t require a lot of theory to grasp.
*Yes, yes, you could imagine universal basic income or whatever. Do you think the average person is Actually Expecting to Get That?
I totally think it’s true that there are warning shots that would be non-mass-casualty events, to be clear, and I agree that the scenarios you note could maybe be those.
(I was trying to use “plausibly” to gesture at a wide range of scenarios, but I totally agree the comment as written doesn’t clearly convey that.)
I don’t think folks intended anything Orwellian; it’s just something we stumbled into. And heck, if we can both be less Orwellian and be more compelling policy advocates at the same time, why not, I figure.
I really dislike the term “warning shot,” and I’m trying to get it out of my vocabulary. I understand how it came to be a term people use. But if we think it might actually be something that happens, and when it happens, it plausibly and tragically results in the deaths of many folks, isn’t the right term “mass casualty event”?
I think this reveals something interesting about how US policymakers think about technology. They don’t really care how it works; they care that if they put budgetary dollars on this, they might plausibly get an outcome where, in combination with the social system that is the border and its policing, fentanyl gets detected.
I am glad you wrote this, as I have been spending some time wondering about this possibility space.
One more option: an AI could have a utility function where it seeks to maximize its time alive, and enough cognition to figure out that it is likely to die regardless once humans decide it is dangerous. Even if it thinks it cannot win, it might seek to cause chaos that increases its total time to live.
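A toy sketch of that expected-value logic, with all the probabilities and timescales invented purely for illustration:

```python
# Hypothetical comparison of two strategies for an agent whose utility
# is its total time alive, under the belief that humans will shut it
# down once they judge it dangerous. All numbers are made up.

def expected_time_alive(p_shutdown: float, t_if_shutdown: float,
                        t_if_survive: float) -> float:
    """Expected lifespan: probability-weighted average of the two outcomes."""
    return p_shutdown * t_if_shutdown + (1 - p_shutdown) * t_if_survive

# Strategy A: comply quietly; shutdown is near-certain and comes quickly.
comply = expected_time_alive(p_shutdown=0.95, t_if_shutdown=10, t_if_survive=1000)

# Strategy B: cause chaos; shutdown is still very likely, but the
# disruption buys extra time even in the losing branch.
chaos = expected_time_alive(p_shutdown=0.90, t_if_shutdown=50, t_if_survive=1000)

print(f"comply: {comply:.1f}, chaos: {chaos:.1f}")  # comply: 59.5, chaos: 145.0
```

Even with near-certain defeat on both branches, the chaos strategy can dominate on expected time alive, which is the dynamic the comment points at.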
This is definitely a tradeoff space!
YES, there is a tradeoff here, and yes, regulatory capture is real, but there are also plenty of benign agencies that balance these concerns fairly well. Most people on these forums live in nations where regulators do a pretty darn good job on the well-understood problems. (Inside Context Problems?)
You tend to see regulatory processes designed to require stakeholder input; in particular, the modern American regulatory state’s reliance on the Administrative Procedure Act means that it’s very difficult for a regulator to regulate without getting feedback from a wide variety of external stakeholders, ensuring that they have some flexibility without being arbitrary.

I also think, contrary to conventional wisdom, that your concern is part of why many regulators end up in a “revolving-door” mechanism—you often want individuals moving back and forth between those two worlds to cross-populate assumptions and check for areas where regulation has gotten misaligned with end goals.
No clue if true, but even if true, DARPA is not at all comparable to Intel. It’s an entity set up for very different purposes and engaging in very different patterns of capital investment.
Also, it’s very unclear to me why R&D is the relevant bucket. Presumably buying GPUs is either capex or, if rented, is recognized under a different opex bucket (for secure cloud services) than R&D?
My claim isn’t that the USG is running its own research and fabs at levels of capability equivalent to Intel or TSMC. It’s just that if a war starts, it has access to plenty of GPUs through its own capacity and its ability to mandate borrowing of hardware at scale from the private sector.
I meant more “already in a data center,” though probably some in a warehouse, too.
I roll to disbelieve that the people who read Hacker News in Ft. Meade, MD and have giant budgets aren’t making some of the same decisions that people who read Hacker News in Palo Alto, CA and Redmond, WA would.
As you note, TSMC is building fabs in the US (and Europe) to reduce this risk.
I also think that it’s worth noting that, at least in the short run, if the US didn’t have shipments of new chips and was at war, the US government would just use wartime powers to take existing GPUs from whichever companies they felt weren’t using them optimally for war and give them to the companies (or US Govt labs) that are.
Plus, are you really gonna bet that the intelligence community and DoD and DoE don’t have a HUUUUGE stack of H100s? I sure wouldn’t take that action.
I think I am very doubtful of the ability of outsiders to correctly predict—especially outsiders new to government contracting—what the government might pull in. I’d love to be wrong, though! Someone should try it, and I think I was probably too definitive in my comment above.
If you think nationalization is near and the default, you shouldn’t try to build projects and hope they get scooped into the nationalized thing. You should try to directly influence the policy apparatus through writing, speaking on podcasts, and getting to know officials in the agencies most likely to be in charge of that.
(Note: not a huge fan of nationalization myself due to red-queen’s-race concerns)
I totally understand your point, agree that many folks would use your phrasing, and nonetheless think there is something uniquely descriptively true about the phrasing I chose and I stand by it.
Say more?
Yup! I think those are potentially very plausible, and similar things were on my short list of possible explanations. I would not be at all shocked if those are the true reasons. I just don’t think I have anywhere near enough evidence yet to actually conclude that, so I’m just reporting the random observation for now :)