I went to CMU in Pittsburgh for grad school. The city itself is half the size it once was, and some of the surrounding towns have lost as much as 90% of their population. At some point, shifting economic geography just makes maintaining some places really tough. The question is, what could actually shift things a different way? Outside money is not a sustainable solution unless it’s building something that really makes sense.
It gave you $150 a semester, and that paid tuition for the Baby Boomer generation in New York. All you needed was an 85 on your Regents exams, and you had free college as long as you could make it to a campus on a regular basis.
For those under 35: by the early 2000s, NY started changing the format, difficulty level, and grading formulas for all the required Regents exams, making an 85 no longer meaningful as a distinguishing characteristic in most subjects. Each time they’d roll out a new exam, almost everyone would do terribly the first time it was given; then the next time, they’d adjust the grading so the scores went way up, making it nearly impossible to fail and very easy to get above 85.
As for letting the region die: as you say, it’s entirely possible to have a beautiful region that people mostly don’t go to. Ithaca is great, but Potsdam is not Cornell, and even then, Cornell’s weather and remoteness and culture are a tough sell for many people (it certainly wasn’t good for the mental health of many of the people I knew who went there, and I nearly chose to go there myself). This is something the world has never really had to grapple with before. In the past, when your region started to die, what was left pretty quickly got razed by invaders or destroyed in a fire or natural disaster, and then either it was gone, or people came in and rebuilt. We’ve mostly conquered those problems without coming up with a decent plan for how to renew aging infrastructure in a vibrant and productive city (e.g., Boston), let alone a dying community.
It’s very obviously better than 50% and worse than 20%, and the worst case scenario is 100%.
Plausibly, the best case scenario is also 100%.
This is very sensible but consider: The *funniest* way to solve this problem would be to find a jurisdiction, perhaps outside the USA, which will let Claude take the bar exam and legally recognize it as a lawyer.
This seems to be the natural response to all professional licensing concerns, but states don’t necessarily have to recognize one another’s professional licenses. I’m not sure how that works for lawyers in particular.
Biggest epistemic divide I’ve seen in a while.
Yep. Plus, I still see otherwise intelligent people do things like quoting the Tower of Hanoi paper as evidence that ‘models are bad at multistep reasoning.’
Fair enough, and I agree that’s a plausible scenario.
This also applies to all decision making by all elected political leaders, and is a big part of why we usually can’t seem to act on an issue until it becomes a crisis, often more than once. Most people don’t have the knowledge, interest, habits, or willingness to grapple with hypothetical harms deeply enough to properly evaluate them, so leaders who try to do so get punished for wasting resources on things that get deemed not real, or else punished for choosing an ineffective strategy if the harms happen anyway.
The step where you say that aligned ASI will want what humans want is, in my opinion, an unjustified leap. Any ASI, aligned or not, will naturally understand that humans don’t know what we want, not in detail, not in general, not in out-of-distribution hypothetical scenarios. An aligned ASI would, as you clearly understand, have to grapple with that fact, but I don’t think it would just acquiesce to current stated values at each moment. I also wouldn’t want it to.
I don’t know how much this helps (the problem is still there), but I hope that if we align ASI enough to avoid extinction in the short to medium term, we’ll have aligned it enough to solve this problem in the medium to long term. Because if not, I would argue that the kind of weirdness you’re pointing towards is still a kind of extinction and replacement.
That virtues are so readily thought of as fuzzy or flexible is an unfortunate consequence of our limited ability to properly anticipate and evaluate the consequences of our actions and the definitions of our words. IMO deontological rules aren’t a problem because they’re rigid; they’re a problem because they try to be succinct and thereby draw an inaccurate, crude boundary around the set of behaviors we would ideally want. If we choose to align AIs to virtues, I’d like to make sure they know that virtues are also rigid and unyielding, but fractally complex at their boundaries. It is each mind’s understanding of the virtues it seeks to uphold (along with the world it is operating in) that is fuzzy, which necessitates flexibility and caution in practice. “Don’t believe everything you think” is critical advice for everyone, and “Don’t optimize too hard without way more evidence than you think you need” is a subset of it, but a well-grounded virtue ethicist can incorporate it more easily into planning and review processes than a deontologist or a naive utilitarian/consequentialist can.
Edit to add: I do think, from a God’s eye view, consequentialism is in some deep sense ‘true’ as a final arbiter of what makes an action good or bad. But the problem that the complete set of results of an action is not computable in advance by any finite agent within the universe is inescapably damning if you want to rely on this kind of reasoning for each decision. We can try to approximate such computations when the decision is sufficiently important and none of our regular heuristics seem adequate. Otherwise, we use deontological rules as heuristics within known contexts, and virtues as different kinds of heuristics in a broader set of less known contexts. Strict adherence to deontological rules, or to deontological definitions of virtues, leads to horrible places out of distribution.
I would say that in some sense, the relevant sequence is “everything written by Scott Alexander.” Or at least, that was my takeaway as I was trying to think about why I didn’t come away with quite as much of this particular misunderstanding.
I think there is some value in the original bio anchors, if it’s done privately with full awareness of the limitations of the approach, but I think it’s a mistake to try to publish an estimate with that many caveats.
I’ve been in much smaller, much less contentious, vastly lower-stakes versions of that myself, and in more than one case I made the decision to just not publish a number. My main example was very mundane: in 2015, I was writing a report estimating the market size for carbon fiber composites in automotive applications in 2025. I was able (thanks to skills I learned here!) to explain to my bosses that there was no good way to estimate this that would actually be useful to anyone for making decisions, because different reasonable assumptions gave scenarios with answers varying by >3 OOMs. My solution was to explain that very fact in the intro of my report, and otherwise focus on a flowchart of possible pathways and how to respond to different hypothetical events.
This is also a big part of why I’ve been so impressed with the AI 2027 crew. They’ve been about as open as it is possible to be about the implications and limits of their approach, what they’re actually saying and doing, and why they’re expressing it the way they are. They have also been incredibly gracious with the large subset of people who constantly misinterpret, oversimplify, or otherwise ignore the things they actually say, and are working hard to communicate clearly despite that.
I’m glad you’re continuously updating based on new evidence. What are you seeing at the meta-level of sensitivity analysis? How volatile or chaotic are your estimates in response to each new data point? Should big swings in expected timelines be something we see coming a mile away, or can they happen in a moment with little warning?
Isn’t that kind of the point, though? A lot of gradual disempowerment is mundane, and useful on the margin, but leads to a repugnant conclusion in aggregate. The rest sounds abstract and unbelievable to people who reject the premise that various superhuman capabilities are possible.
Today Windows told me I have 18 optional driver updates. I don’t think I’ve ever gotten more than ~3 at once. I am assuming coding agents are involved, but can’t be sure.
I would venture that the problem is not the microwave, it’s that this is not a natural set of methods for humans. I expect my hypothetical circa 2045 household robots to handle a lot of cooking in ways that maximize efficiency beyond what this human would want to bother with.
Of course he isn’t safe! He’s not a tame AI. But he’s good. We hope. We don’t really know. But he thinks he’s good enough.
They think Opus 4.6 might approach ‘can fully do the job of a junior engineer at Anthropic’ if given proper scaffolding.
Worth keeping in mind, for those of us not exposed to the relevant communities in normal life on a regular basis, that “junior engineer at Anthropic” is someone making ~$350k a year in total compensation, who may be in their late 20s to mid 30s. (My numbers are based on a quick web search and may be completely wrong in either direction, in which case I definitely would like to be corrected. In any case, it’s someone very highly skilled.)
That’s true.
If you’re going to tell people it prevents a disease, the FDA is going to (find a way to) regulate it like a drug, even if it is also legally a food.
I think there’s a useful point here, though I’m not sure the framing makes it clear what you want readers to take away from it.
Relevant personal anecdote: Over the past few years I’ve had the pleasure of visiting 38 US national parks, and even small differences in accessibility seem to greatly alter the makeup and mindset of the visiting population. For example, Zion and Canyonlands are not so far apart or so different in absolute terms. But, in Zion, which is more readily reachable from major highways and cities, many of the guests show up with no plan or gear, seem to think it’s acceptable to carve games of tic tac toe into stones, and need signs warning them that squirrels will bite you if you try to feed and pet them. Canyonlands provides a lot less handholding, because most of the people who choose to go there do so more deliberately, with at least marginally more understanding of what they’re getting into. None of these are anywhere near the level of preparation and skill you’d need to survive in a real wilderness, of course, but I certainly found it striking how the visitors to two different parks in the same state have such different expectations/needs for how sanitized they want their experience of ‘nature’ to be.
No, I don’t think that is an accurate summary, but that’s on me for leaving out the key piece: I apply very different standards to myself vs others. If I am late, I know all the things I counterfactually could have done to instead be on time but didn’t. When I tell others the story of why I’m late, it usually feels like an excuse I don’t quite believe. When others are (occasionally) late, I too am curious to hear their stories.
When someone (in a friendship context) is chronically late, you learn to expect it and route around it, whether they have a story or not, and whether the story is entertaining or believable or not. It’s not a big deal because you’ve established that expectation. But I’m never going to ask that friend to drive me to the airport.
When someone (in a casual or friendly context) is actively talking about planning and time, and you know they’re being unrealistically optimistic but they don’t want to hear it, then from then on you know not to believe their stories about why they’re late. They’re late because they’re not interested in planning to be on time. The story is not evidence of the real ‘why’. Whether or not this is fine depends entirely on context. In some cultures, it’s expected to be late to things, sometimes even hours late, and being on time could actually be a problem because everyone else won’t be ready. In others, being early is fine but being late is unacceptable—a lot of structured social activities, like team sports or many kinds of classes, are like this. In some cases both are seen as bad—I’ve known a few people (all of German descent, TINACBNIEAC) who would literally drive to the corner and wait in their cars, ideally just out of sight, until 1-2 minutes before they were ‘supposed’ to arrive, so as to get to the door pretty much literally as the clock changed to the ‘right’ time.
When someone (in a business context) is chronically or unapologetically late, it’s potentially but not unambiguously some combination of rude, disrespectful, counterproductive, and wasteful. If it’s because they had back-to-back meetings and one ran over, or they needed to use the bathroom in between, or they’re having technical difficulties, or some urgent personal matter came up, no problem! But you’re supposed to take 10 seconds to send a message letting people know, and if you can but don’t, that’s a problem. If it’s some sort of (even inadvertent) power move, because they don’t care about your time, that might be something you just have to deal with from your boss or a client, but it is always frustrating.
There’s a wide range of different social contexts for this. I personally share the opinion expressed in your quote, but I also have been in environments where such an approach was actively socially or otherwise counterproductive.
Meant to add one more thing: a declining property tax base is a death spiral, but recent population decline suggests excess local capacity to supply water and electricity. Are towns in the region trying to attract data centers? Various kinds of energy-hungry cleantech projects? I know some are trying to do that in Quebec.