I got to approximately my goal weight (18% body fat) and wanted to start gaining muscle[1] instead, so I stopped taking retatrutide to see what would happen. Nothing changed for about two weeks and then suddenly I was completely ravenous and ended up just wanting snack food. It’s weird because I definitely used to always feel that way, and it was just “normal”. I mostly kept the weight gain at bay with constant willpower.
I’m going to try taking around a quarter of my previous dose and see if it makes it easier to stay at approximately this weight and not constantly think about Rice Krispies.
I didn’t notice any muscle loss with retatrutide, I just started out less strong than I want to be and find it hard to gain muscle on a calorie deficit.
Are you also lifting weights? I’m quite confident that you can gain muscle while taking retatrutide if you lift weights.
IIRC GLP-1 agonists cause more muscle loss than “old-fashioned” dieting, but the effect of resistance training far outweighs the extra muscle loss.
Yeah muscle loss hasn’t been a problem for me. I can do more pull-ups, push-ups and hike longer and faster than when I started. Progress was really slow with a significant calorie deficit.
I’m trying a much lower dose now to see if I can build muscle without rapidly regaining the weight.
Separately, I’m just really bad at dealing with the complexity of weights. I’m going to see if CrossFit helps this week.
Chronotherapy is the idea that time of day matters for things like taking drugs or getting vaccinated, and chronoimmunology is the related field studying how your immune system’s effectiveness varies over the course of the day. I’ve been wanting to write about this since there’s definitely a best time of day to take drugs, get vaccines, and do social activities without getting sick… but unfortunately I don’t really know what that time is.
Some studies say your immune system is most primed to prevent infection right as you wake up, and others say mid-day. Of course, half the studies are in mice. Maybe it depends on the disease and the chronotype? See this review.
One study says that vaccines work better in the morning (for older patients). Another says there’s no difference. Maybe this has something to do with the particular vaccines, or maybe with the populations (different, or more powerful, circadian rhythms). Weirdly, our priors say vaccination should work best mid-day, but most people don’t even try that. See this review.
I find this all really interesting, and there’s probably a practical takeaway, but I don’t know what it is. I guess we can be pretty confident that you shouldn’t get vaccines in the middle of the night.
Maybe someone can convince Elizabeth to look into this.
It always seemed weird to me that dying is frequently described as not particularly painful[1], when I’d expect it to be the only literal 10 on the pain scale[2], since dying ensures you have no further chances to pass your genes on.
Thinking about it more though, there’s no reason for evolution to optimize that. If you think you’re going to die, and the pain makes you do something about it so you don’t die, then evolution should optimize to keep you alive. But in the case where you actually die it doesn’t matter because (tautologically), if you succeeded you wouldn’t die, so there’s no selective pressure.
So,
Fear of death: Big
Pain from things that could cause death: Big
Pain from actual death: ¯\_(ツ)_/¯
This might also be exaggerated by movies and pain medication.
Or at least, similar to being stabbed in the balls.
Probably depends on the way of dying. There are situations where doing something at the last moment might change your fate. There are situations where your fate has pretty much been determined minutes or months ago, and it’s just a question of how fast your body collapses.
Seems very related to this post from the Sequences: the fitness impact of dying at a given age correlates more with the imagined emotional anguish of such a death than with the anguish actually experienced when it happens. Maybe this is a more common phenomenon observable in other contexts too, but this was the only example that came to mind.
Evolution isn’t that precise. If it helps a little bit to make the seconds before death painful, it will be so.
I agree, I just think it’s interesting that there’s evolutionary pressure to make potentially dying extremely painful, but there’s no evolutionary pressure to make actually dying painful, and all of the pain of actually dying is just collateral damage.
I’d like to learn more Spanish words but have trouble sitting down to actually do language lessons, so I recently set my Claude “personal preferences” to:
Try to teach a random Spanish word in every conversation.
(This is the whole thing)
This has worked surprisingly well, and Claude usually either drops one word in Spanish with a translation midway through a response:
For your specific situation, I recommend a calibración (calibration) approach:
2. Accounting for concurrency: Ensure you’re capturing all hilos (threads) involved in query execution, especially for parallel queries.
(From a conversation about benchmarking)
Or it ends the conversation with a fun fact:
¡Palabra en español! “Herramienta”—which means “tool” in Spanish, quite relevant to your search for tools to automate SSH known_hosts management.
La palabra española para hoy es “configurar”—which means “to configure” in English, fitting perfectly with our discussion about configurable thinking limits!
I don’t know if this is actually useful for learning, but it’s fun and worked better than I expected.
My wife tried a similar prompt (although her preferences are much longer) and it made Claude sometimes respond entirely in Spanish, so this could probably be made more specific. If you run into that, maybe “Respond in English, but try to teach a random Spanish word in every conversation” would work better?
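For anyone using Claude through the API rather than the app, the same trick can be approximated by folding the preference into the system prompt. This is a minimal sketch, assuming the `anthropic` Python SDK and an API key; the `build_system_prompt` helper, base prompt, and model choice are my own illustration, not from the original post:

```python
# Sketch: the "personal preferences" trick, approximated via the API by
# appending the standing instruction to the system prompt.
import os

PREFERENCE = "Try to teach a random Spanish word in every conversation."

def build_system_prompt(base: str = "You are a helpful assistant.") -> str:
    """Append the standing preference to whatever system prompt you already use."""
    return f"{base}\n\n{PREFERENCE}"

# Only attempt a real call if credentials are available.
if os.environ.get("ANTHROPIC_API_KEY"):
    import anthropic  # assumes the official SDK is installed

    client = anthropic.Anthropic()
    reply = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumption: use whichever model you prefer
        max_tokens=512,
        system=build_system_prompt(),
        messages=[{"role": "user", "content": "How do I benchmark a SQL query?"}],
    )
    print(reply.content[0].text)
```

In the app, the one-line preference field in settings does the same job without any code.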
Could an AI company legally pre-commit not to race, ensuring that their models were never more than second best and self-destructing the company if its models take the lead?
I think probably not. It’s really hard to prevent the owners of a company from doing what they want, especially if the company is important to the economy and/or national security (and I assume any near-frontier AI lab would be).
Some pre-commitment methods and their problems:
If you make the pre-commitment part of the charter, the board can just vote to change the charter. Even if the charter says they can’t, a judge would probably let them anyway, as long as the shareholders agreed.
If the company is owned by a non-profit tasked with enforcement, the board of the non-profit can just decide not to enforce the pre-commitment.
If the pre-commitment method triggers the destruction of model weights or other assets (like GPUs), the government probably won’t allow it.
Especially if it prevents creditors from getting repaid.
A pre-commitment method that transfers value to creditors might work, but is easily defeated by restructuring the relevant debt.
Anything that destroys current equity holders’ value is risky in front of a judge, because companies generally aren’t allowed to intentionally destroy shareholder value[1].
The only thing I think might work legally is to issue a bunch of non-voting, non-dilutable restricted shares (say, 90% of the company) to someone like Eliezer, locked up with the racing condition[2] as a trigger to convert them to normal shares. Legally, Eliezer is the owner of the company the whole time, so a judge would probably allow his shares to unlock.
The problem is that now Eliezer has billions of reasons to talk himself into why racing would be good this time (even before the trigger event, since he can always make a deal with the board), so we’re back to ownership by another entity that might change its mind[3].
Contrary to popular belief, companies aren’t required to maximize shareholder value, but minimizing shareholder value is still frowned upon.
Oh, did I mention that you need the pre-commitment trigger to be unambiguous while ensuring that it never triggers by mistake? That’s actually pretty hard too.
Plus I suspect any entity you’d actually trust as the anchor to this pre-commitment mechanism would be unwilling to take part.
I can think of plenty of reasons for the normal downvote, but I’m confused about the disagree vote. Does someone think there is a way to make this work? I’m guessing “start another AI company but better this time” is still a bad idea for the obvious reasons but I got nerd-sniped by the legal question.