Spencer’s post isn’t loading for me and I don’t see any post about this on his Facebook feed—is the link right?
Specifically for ME/CFS and Long Covid, I recommend s4me.info. Pretty much all of the major studies on the mind-body methods already have threads there with discussions. The tl;dr is that they are extremely low-quality studies, in ways that will not surprise anyone familiar with the replication crisis, and these techniques very likely do not work for ME/CFS or LC.
This blog is good as well: https://mecfsscience.org/
Anthropic researchers estimate that Opus 4.5 provides a 2-3x speedup to their research, if I’m reading this correctly. This seems very important and I’m surprised I haven’t seen more discussion of it.
Twitter thread: https://x.com/HjalmarWijk/status/1993752035536331113
Unrolled/without login required: https://twitter-thread.com/t/1993752035536331113
@HjalmarWijk
Nov 26: Anthropic says in their system card that *all* their AI R&D evals are close to saturation, and report a median self-reported uplift of 2X (mean over 3X!) for power users. They provide very little evidence ruling out imminent dramatic AI R&D acceleration.
I personally suspect that their self-report uplift numbers are inflated and that agent time horizons are still limited. But if taken at face value, then even the most aggressive scenarios (e.g. AI 2027 or https://blog.redwoodresearch.org/p/whats-up-with-anthropic-predicting) would have underestimated progress.
I didn’t quote the whole thread, there’s more if you follow the link.
I recommend Deep Utopia for extensive discussion of this issue, if you haven’t already read it.
I agree with most of this, but I think you’re typical-minding when you assume that successionists are using this to resolve their own fear or sadness surrounding AI progress. I think instead, they mostly never seriously consider the downsides because of things like the progress heuristic. They never experience the fear or sadness you refer to in the first place. For them, it is not “painful to think about” as you describe.
Here is Eliezer’s post on this topic from 17 years ago for anyone interested: https://www.lesswrong.com/posts/3Jpchgy53D2gB5qdk/my-childhood-role-model
Anna Salamon’s comment and Eliezer’s reply to it are particularly relevant.
Searching the keyword “prompt engineering” (both on here and Google) may guide you to some helpful resources. Sorry I don’t have anything specific to link you to.
No massive advance (no GPT-5, or disappointing GPT-5)
Inversion: There was a substantial advance in frontier model AI in 2024.
Shouldn’t the inversion simply be “There was a massive advance”?
If you have Long COVID or ME/CFS, or want to learn more about them, I highly recommend https://s4me.info. The signal-to-noise ratio is much better than on any other forum I’ve found for those topics. The community is good at recognizing and critiquing low- vs high-quality studies.
As an example of the quality, this factsheet created by the community is quite good: https://s4me.info/docs/WhatIsMECFS-S4ME-Factsheet.pdf
[Question] What is the research speed multiplier of the most advanced current LLMs?
[Question] Avoiding “enlightenment” experiences while meditating for anxiety?
Did you and GPT-4 only output the moves, or did you also output the board state after each turn?
Unfortunately, without speaker labels the YouTube transcript is less useful unless you’re listening while reading.
Is there a transcript anywhere?
[Question] COVID contagiousness after negative tests?
[Question] What AI newsletters or substacks about AI do you recommend?
Another similar result was that AlphaFold was trained on its own high-confidence predictions for protein sequences with unknown structures:
The AlphaFold architecture is able to train to high accuracy using only supervised learning on PDB data, but we are able to enhance accuracy (Fig. 4a) using an approach similar to noisy student self-distillation [35]. In this procedure, we use a trained network to predict the structure of around 350,000 diverse sequences from Uniclust30 [36] and make a new dataset of predicted structures filtered to a high-confidence subset. We then train the same architecture again from scratch using a mixture of PDB data and this new dataset of predicted structures as the training data, in which the various training data augmentations such as cropping and MSA subsampling make it challenging for the network to recapitulate the previously predicted structures. This self-distillation procedure makes effective use of the unlabelled sequence data and considerably improves the accuracy of the resulting network.
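The control flow of that procedure (train a teacher, pseudo-label unlabeled data, keep a high-confidence subset, retrain from scratch on the mixture) can be sketched in a toy form. To be clear, everything below is a stand-in: the "model" is just a mean, "confidence" is faked, and none of this resembles AlphaFold's actual training code; only the loop structure matches the paper's description.

```python
# Toy sketch of noisy-student self-distillation as described in the quote.
# All functions here are illustrative placeholders, not AlphaFold's real pipeline.

def train(dataset):
    # Stand-in for model training: the "model" is just the mean of the labels.
    return sum(label for _, label in dataset) / len(dataset)

def predict(model, x):
    # Stand-in for inference: returns a prediction plus a confidence score.
    prediction = model + x * 0.01
    confidence = 0.95 if x % 2 == 0 else 0.5  # pretend even inputs are high-confidence
    return prediction, confidence

def self_distill(labeled, unlabeled, threshold=0.9):
    teacher = train(labeled)              # 1. train on the labeled (PDB-like) data only
    pseudo = []
    for x in unlabeled:                   # 2. predict on the unlabeled sequences
        y, conf = predict(teacher, x)
        if conf >= threshold:             #    filter to a high-confidence subset
            pseudo.append((x, y))
    student = train(labeled + pseudo)     # 3. retrain from scratch on the mixture
    return student, len(pseudo)

labeled = [(1, 1.0), (2, 2.0), (3, 3.0)]
unlabeled = [4, 5, 6, 7]
student, kept = self_distill(labeled, unlabeled)
print(kept)  # only the even inputs 4 and 6 pass the confidence filter -> 2
```

In the real version, the retraining step also applies augmentations (cropping, MSA subsampling) so the student can't simply memorize the teacher's predictions; that part is omitted here.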
I’m also dealing with chronic illness and can relate to everything you listed. I’ve been thinking that a discord server specifically for people with chronic illness in the rationality community might be helpful to make it easier for us to share notes and help each other. There are different discord servers for various conditions unaffiliated with the rationality community, but they tend to not have great epistemic standards and generally have a different approach than what I’m looking for. Do you have any interest in a discord server?
Have you read Friendship is Optimal? Is that outcome unappealing to you in a way which can’t be easily patched (e.g. removing the “become a pony” requirement)? Do you think it would be unappealing to almost everyone in ways that can’t be easily patched?