Just want to say I absolutely loved the story you wrote with Lintamande between Carissa and Altarrin. Thank you for that 🙏🏻
Idk if I’m the only one here, but I use LLMs for coding and I have disabled memory. This wasn’t really an educated move, but after having an AI completely hallucinate in one chat and get the problem or the code at hand totally wrong, I’m afraid its misunderstanding will contaminate its memory and mess up every new chat I start.
At least with no memory I can attempt to rewrite my prompt in a new window and hope for a better outcome. It forces me to repeat myself a lot, but I now have systems in place to briefly summarize what my app does, show the file structure, and explain the problem at hand.
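A workflow like that is easy to script. Here is a minimal sketch of the "context preamble" approach described above: prepend a fixed app summary and the current file tree to every fresh prompt. `APP_SUMMARY` and the example problem text are placeholders, not details from the comment:

```python
# Minimal sketch: assemble a reusable context preamble for fresh LLM chats.
# APP_SUMMARY and the example problem below are placeholders.
import os

APP_SUMMARY = "MyApp: a small Flask to-do app with a SQLite backend."  # placeholder

def file_tree(root="."):
    """Indented listing of files under `root`, skipping hidden directories."""
    lines = []
    root = os.path.abspath(root)
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = sorted(d for d in dirnames if not d.startswith("."))
        depth = os.path.relpath(dirpath, root).count(os.sep)
        if dirpath != root:
            depth += 1  # relpath of root itself is ".", which has no separator
        lines.append("  " * depth + os.path.basename(dirpath) + "/")
        for f in sorted(filenames):
            lines.append("  " * (depth + 1) + f)
    return "\n".join(lines)

def build_prompt(problem, root="."):
    """Fixed summary + current file structure + the problem at hand."""
    return (f"{APP_SUMMARY}\n\n"
            f"File structure:\n{file_tree(root)}\n\n"
            f"Problem:\n{problem}")

print(build_prompt("The /tasks endpoint returns 500 on empty lists."))
```

Since each chat starts from the same preamble, a hallucinated diagnosis in one window can't leak into the next.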
It doesn’t seem to be an uncommon experience that a model is given e.g. a piece of code with a bug in it and asked to find the bug, and then it keeps repeatedly claiming that it “found” the bug and offering revised code which doesn’t actually fix the problem. Or have anything to do with the problem, for that matter.
My personal insanity generator. Sigh.
Thanks for putting this on my radar :)
A few thoughts.
TLDR several reasons to be suspicious of PRT and its creator, but anecdotally it seems to ~work on me.
The negative first: The leading study in favor of PRT (Ashar et al., 2021) had 151 participants (a third received PRT, a third received standard pain therapy, and a third were a control group). The measured effect was very large (which to me is suspicious). Importantly, it has not been replicated yet.
(This is not to say it has failed to replicate.)

I'm now reading Gordon's book. As someone who's been in a semi-cult and who's also paid a lot of attention to wannabe gurus and other scammers in the fields of therapy, group therapy, pain relief, etc., I can say that this book has a lot of the signs. For instance, Gordon claims that he cures everyone. This is highly suspicious to me.
Generally speaking, the book has an aura of "this cures everything and everyone, now buy my programme" that other scammers in the field have. This is of course not to say that I'm convinced he's a fraud. But it's a dark orange flag.

On a more intuitive level, I just feel resistant to believing that telling your brain "this is fine" will make the pain go away. Isn't the brain a super complex machine that you can't just talk to? </rant>
The positive:
Because, as Ruby said, this has "Big if true" energy, I do want to give this a shot.
Your article definitely piqued my interest: I've had two types of chronic pain over the last year.
First, in my foot sole after playing a lot of tennis, which pushed me to stop. Even after I stopped playing, the (mild) pain lasted for months, flaring especially when walking.
Second, in the form of mild headaches I now get every day, pretty much since my first really intense migraine 3 months ago. I've gotten rid of the foot pain, but the headaches are still very real.

Since reading your post, whenever I get the start of a headache (usually a 2 or 3⁄10 intensity), I tell myself something like "all is fine, I'm ok, this is not painful, you can relax"… and it kind of works?
I’d say the headache intensity goes down to a 0.5 or 1⁄10.
So while I’m highly suspicious, I’m surprised it’s had some positive effect on my pain already.
I’d love to see more replication studies.
Super glad I landed on your post! Just ordered the game.
Alexey Guzey, walkthrough of his computer setup and productivity workflow.
Founder of New Science. Popular blogger (e.g., author of "Matthew Walker's 'Why We Sleep' Is Riddled with Scientific and Factual Errors").
It seems like Guzey has changed his mind about a bunch of things, including needing all those huge monitors.
Makes me think this video is no longer relevant.
Blogpost
So you're saying that for running, it's better to do a more intense (uphill), shorter-duration run than a less intense (flat-terrain), longer-duration run?
If I understand that correctly, it would imply that, for cardio, the rule is the reverse of the one for weights: "heavier" with "fewer reps"?
WRT cardio, besides rowing more, I also do more of my running up hills, as it substantially lowers impact and allows higher volume.
What do you mean by ‘impact’ in this context?
Peterson
Petersen*
After giving it some thought, I do see a lot of real-life situations where you get to such a place.
For instance-
I was recently watching The Vow, the documentary about the NXIVM cult (“nexium”).
In very broad strokes, one of the core fucked-up things the leader does is gaslight the members into thinking that pain is good: if you resist him, don't like what he says, etc., there is something deficient in you. After a while, even when he's not in the picture, so it would make sense for everyone to suffer less and cut each other some slack, people punish each other for being deficient or weak.
And now that I wrote it about NXIVM I imagine those dynamics are actually commonplace in everyday society too.
Thanks for pointing to your clarification. I find it a lot clearer than the OP.
Downvoted because there is no « disagree » button.
I strongly disagree with the framing that one could control their emotions (both from the EA quote and from OP). I’m also surprised that most comments don’t go against the post in that regard.
To be specific, I’m pointing to language like « should feel », « rational to feel » etc.
That was very interesting, thank you!
It was useful to me to read your footnote “I am autistic” at the beginning. It gave me better context and I expect I would have just been confused by the post otherwise.
I’d suggest adding it to the main body, or even starting with it as the first sentence.
A general intelligence may also be suppressed by an instinct firing off, as sometimes happens with humans. But that’s a feature of the wider mind the GI is embedded in, not of general intelligence itself.
I actually think you should count that as evidence against your claim that humans are General Intelligences.
Qualitatively speaking, human cognition is universally capable.
How would we know if this weren't the case? How can we test this claim?
My initial reaction here is to think “We don’t know what we don’t know”.
I think this is evidence that should increase our p(aliens), but not enough evidence to make the claim “either all are lying, or aliens are real”.
It's also evidence of hypotheses like "they are wrong but honest," "the instruments were buggy," "something about reality we don't understand that isn't aliens," etc.
Gotcha. Thanks for clarifying!
I am confused. Why does everyone else select the equilibrium temperature? Why would they push it to 100 in the next round? You never explain this.
I understand you may be starting from a theorem that I don't know. To me the obvious course of action would be something like: the temperature is way too high, so I'll lower it. Wouldn't others appreciate the temperature dropping and getting closer to their own preference of 30 degrees?
Are you saying that what you're describing makes sense, or that it's a weird (and meaningless?) consequence of Nash's theorem?
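For what it's worth, a toy simulation can show how such an equilibrium stays stable, under assumptions that may not match the OP's exact setup: each player sets a heater dial, the room temperature is the maximum of the dials (you can add heat but not remove it), everyone's per-round payoff is −|temp − 30|, and everyone plays "99, but 100 forever once anyone deviates." Under those assumptions, turning your own dial down doesn't cool the room at all, yet it triggers the punishment:

```python
# Toy folk-theorem sketch (assumed setup, possibly not the OP's exact game):
# players set heater dials; room temperature is the MAX of the dials, so one
# player lowering their dial cannot cool the room. Everyone prefers 30 degrees
# (per-round payoff -|temp - 30|). Strategy: play 99, but switch to 100 forever
# once anyone has deviated ("grim trigger" punishment).

def play(n_players, n_rounds, deviator=None):
    """Total payoff per player. If `deviator` is set, that player
    unilaterally turns their dial down to 30 in round 0."""
    totals = [0.0] * n_players
    punished = False
    for t in range(n_rounds):
        dials = [100.0 if punished else 99.0] * n_players
        if t == 0 and deviator is not None:
            dials[deviator] = 30.0
        temp = max(dials)                  # room temp: hottest dial wins
        for i in range(n_players):
            totals[i] -= abs(temp - 30.0)  # everyone wants 30 degrees
        if any(d != 99.0 for d in dials):
            punished = True                # any deviation triggers punishment
    return totals

conform = play(n_players=5, n_rounds=10)[0]             # 10 rounds at 99
deviate = play(n_players=5, n_rounds=10, deviator=0)[0]  # dial down once
print(conform, deviate)  # the deviator's room stays at 99, then 100s follow
assert deviate < conform  # so lowering your dial makes you strictly worse off
```

With "max" aggregation no one can improve things unilaterally, and the threatened "push it to 100" round is exactly what makes everyone-plays-99 self-enforcing; whether the OP's game has this structure is an assumption here.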
Hey! I appreciate you for making this.
I live alone in Sweden and I’ve been feeling very stressed about AI over the last few days.
It was a nice video to watch, and I entertained myself listening to you speak Finnish. Thanks!
This sounds like the streetlight effect, superficially? Just because your audience is intelligent and knowledgeable doesn't mean it's the right audience, and doesn't mean the stance that led you to them is correct.