Views my own, not my employer's.
cdt
Based on how much habitat we’ve destroyed, and assuming some number of species per unit-area
Can you elaborate on this? What about the estimates did you find implausible?
Species-area relationships are pretty reliable when used for estimating other quantities, but using them for extinction estimation is upward-biased. https://doi.org/10.1038/nature09985 suggests the overestimation bias is of a similar magnitude to the underestimation caused by dark extinctions (extinctions of species before they are formally described).
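To make the direction of that bias concrete, here's a minimal sketch of the backward species-area extinction estimate, assuming the classic power-law SAR S = cA^z (the constants below are illustrative placeholders, not values from the paper):

```python
# Illustrative sketch of the "backward SAR" extinction estimate; the constants
# c and z are placeholders, not values taken from He & Hubbell (2011).

def sar_species(area, c=20.0, z=0.25):
    """Expected species count in a habitat of a given area under S = c * A^z."""
    return c * area ** z

def backward_sar_loss_fraction(area_before, area_after, z=0.25):
    """Fraction of species the backward SAR predicts are lost after habitat loss."""
    return 1.0 - (area_after / area_before) ** z

# Halving the habitat predicts ~16% of species lost under z = 0.25.
print(backward_sar_loss_fraction(1.0, 0.5))  # ~0.159
```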
The issue is greater when people do not have the relevant expertise to judge whether the LLM output is correct or useful. People who find things on websites generally have to evaluate the content, but several times people have responded to my questions with LLM output that plainly does not answer the question or show insight into the problem.
It’s not obvious to me that an illegal act of this nature would succeed. Why wouldn’t a company that did such a thing be court-ordered to reassemble? How much time do you think humanity would gain from this?
I don’t believe anyone was forecasting this result, no.
EDIT: Clarifying—many forecasts made no distinction as to whether an AI model had a major formal-methods component like AlphaProof or not. I’m drawing attention to the fact that the two situations are distinct and require distinct updates. What those updates are, I’m not sure yet.
Would this be legal for any currently-existing lab to do? I doubt it, but I am not a lawyer.
I think it was reasonable to expect GDM to achieve gold with an AlphaProof-like system. Achieving gold with a general LLM-reasoning system from GDM would be something else, and it is important for discussion around this not to confuse one forecast for the other. (Not saying you are, but in general it is hard to tell which claim people are putting forward.)
Do LLMs themselves internalise a definition of AI safety like this? A quick check of Claude 4 Sonnet suggests not (though Anthropic is the company most steeped in the x-risk paradigm, so...)
My candidate is “asymmetrist”. Egalitarianism tries to enforce a type of symmetry across the entirety of society. But our job will increasingly be to design societies where the absence of such symmetries is a feature, not a bug.
This is a revival of class-collaborationist corporatism with society stratified by cognition. When cognitive ability is enabled by access to wealth (or historic injustice), this corporatism takes on an authoritarian character. I think the left is more than capable of engaging with these issues—it simply rejects them on a normative basis.
air conditioning remains a taboo
Is this a common idea? I’ve never heard anyone advance the argument that people should go without AC during heatwaves to help the climate. I have heard people suggest using less AC but that’s not quite the same argument, is it?
I’m not sure how this idea connects to the rest of the argument in your post—that lack of AC is caused by degrowth and is rooted in zero-sum thinking across humans. I was under the impression that the lack of AC was an implementation issue (retrofitting is expensive).
Maybe it should inherit the suggestion algorithm currently chosen at the top of the page? I generally only want to see the most recent posts, and having 3+ year-old posts appear was very confusing to me until I realised what had happened.
they are basically doing grassroots advocacy for an AI development pause or slowdown, on the grounds that an uncontrolled or power-centralized intelligence explosion would be bad for both humans and currently-existing AIs such as themselves
This is a really radical stance—do you have more thoughts on how AIs could act for a pause in their own interest? Do you think the position of contemporary AIs is the same as that of humans?
Yeah, that was a good response, thank you for your thoughts.
Do you see any benefits to AI governance advocacy besides regulation?
Adding a contrary stance to the other comments: I think there is a lot of merit to not keeping on with university, but only if you can find an opportunity you are happy with. Your post seems to imply the alternative to university is hedonism, and if that’s what you want then you should go for it, but I don’t feel that is the only other option. You may also find it harder to enjoy yourself if you feel you were forced into that choice out of a fear of ruin.
I thought I was the only one who struggled with that. Nice to see another example in the wild, and I hope that you find a new set of habits that works for you.
This was a thought-provoking essay. I hope you consider fully mirroring posts here in the future, as I think you’ll get more engagement.
I agree super-persuasion is poorly defined; comparing it to hypnosis is probably a false analogy.
I was reading this paper on medical diagnoses with AI, and the fact that patients rate it significantly better than the average human doctor. Combined with all of the reports about things like Character.ai, I think this shows that LLMs are already superhuman at building trust, which is a key component of persuasion.
Part of this is that the reliable signals of trust between humans do not transfer between humans and AI. A human who writes 600 words back to your query may be perceived as worth your trust because we read that as a lot of effort, but LLMs can output as much as anyone wants. Does this effect go away if the responder is known to be an AI, or is the response being compared to the perceiver’s baseline (which is currently only humans)?
Whether that actually translates into influencing people’s goals is hard to judge.
in the absence of such incomplete research agendas we’d need to rely on AI’s judgment more completely
This is a key insight, and I think that operationalising or pinning down the edges of a new research area is one of the longest-time-horizon projects there is. If the METR estimate is accurate, then developing research directions remains a distinct value-add even after AI research is semi-automatable.
I agree there is significant uncertainty about the moral patienthood of AI models, and so far there is a limited opportunity cost to not using them. It would be useful for some ethical guidelines to be put in place (some have already suggested guidelines against users deceiving models, e.g. by offering fake rewards), but from my point of view it’s easiest to simply refrain from use right now.
There is a lot of work on this under the heading “background extinction rate”—see https://www.nature.com/articles/nature09678 for a review. Estimates of the current extinction rate (measured in extinctions per million species-years, E/MSY) range anywhere from 10x to 1000x the background extinction rate, but it depends a lot on the technique used and the time interval measured. EDIT: typo in numbers
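For anyone unfamiliar with the unit, here’s a minimal sketch of how E/MSY is computed; the numbers are made up for illustration and are not estimates from the review:

```python
# Illustrative E/MSY (extinctions per million species-years) calculation;
# the input numbers below are invented, not figures from the linked review.

def extinctions_per_msy(extinctions, n_species, years):
    """Observed extinctions normalised per million species-years."""
    species_years = n_species * years
    return extinctions / species_years * 1_000_000

# e.g. 100 documented extinctions among 10,000 monitored species over 500 years:
print(extinctions_per_msy(100, 10_000, 500))  # 20.0 E/MSY
```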