I couldn’t find the source so I didn’t want to include it, but for example I remember seeing that Joe Biden was treated with rapamycin for Covid.
Rapamycin costs $35: https://www.goodrx.com/sirolimus
The premise of this article seems to be that drugs that can increase lifespan indefinitely already exist, but all of the studies you cite are in yeast or mice, and I suspect they’re all looking at endpoints that don’t actually map to “living forever with no side effects” (e.g., improving some biomarker without increasing lifespan, or the mice dying of cancer instead of some other aging-related cause of death).
The societal concerns also seem overblown and straightforward to solve in market economies. We get new drugs all the time and they don’t cause a collapse of society, even though a lot of people don’t understand that drug research costs money; and if no one wants to do menial labor, we’ll just have to pay more until someone does (or use robots for that).
This is what I came to ask about. Randomizing based on health and then finding that the healthier group makes more despite other factors seems like it doesn’t really prove the thing the paper is claiming.
Although the fact that wages matched between the groups beforehand is pretty interesting.
I’m not sure if this is quite what you’re looking for, but one thing I do is store things I need for work and travel in consistent bags.
For example, my work laptop and my badge live in specific parts of a work-specific backpack, and I never leave the badge anywhere except in that specific pocket of the backpack or attached to my belt.
For travel, I keep a couple of things that are annoying to forget in my carry-on bag (travel-sized soap, conditioner, and toothpaste, an unopened toothbrush, a multi-country power adapter, and a spare swimsuit). A battery and anti-nausea meds live in a specific backpack.
There are trade-offs with some of these (it costs money to have an extra swimsuit and backpack, if you don’t already have them), but some are basically free: the anti-nausea meds and badge need to live somewhere, so why not in the backpack I’ll always have on me in the situation where I need them?
I’m not sure if you have the setup for this, but call centers tend to pay reasonably well, and some are online now and don’t care where you work from.
You could try using your partner’s connections to get a signing bonus or early payment of his first few paychecks, then use that to cover moving.
It might be worth getting him to where the work is with a cheap one-way flight and the cheapest hotel you can find (or stay with friends if possible), then follow later when you have enough saved for moving. Or do something similar in Arizona (find a job that’s too far to drive and stay in the cheapest motel you can find nearby until you’ve saved enough to move).
Some jobs provide transportation, room, and board, like cruise companies. If you can get one of those jobs, they’ll get you where you need to be and provide somewhere to live during the season. This includes both people on the ships and some people on land (i.e. they don’t expect employees to live year-round in Skagway, AK).
This is evidence against the claim that debating opinions which are not widely held, or even considered “conspiracy theories”, just gives them a platform and strengthens their credibility in the eyes of the public.
I’m not sure if this really applies here, since Lab Leak was never really treated as a crazy/fringe idea among rationalists. In fact, it looks like it was the majority opinion before the debate and ACX posts.
I think historically frying would have used olive oil or lard though.
Don’t forget the standard diet advice of avoiding “processed foods”. It’s unclear what exactly the boundary is, but I think “oil that has been cooking for weeks” probably counts.
Having 1.6 million identical twins seems like a pretty huge advantage though.
Just curious, but if you found a big group house you liked where everyone had kids, would you be interested? I guess it would have to be a pretty big house.
You should probably take reverse causation into account here. I doubt the effect of the school is nearly as strong as you think, since people who want finance jobs are drawn to the schools known for getting people finance jobs. Add to that the fact that the schools known for certain things are the outliers: if you go to a random state school, the students will have much more varied interests.
Any chance you can link to that discussion? I’m really curious.
When people talk about p(doom), they generally mean the extinction risk directly from AI going rogue. The way I see it, that extinction-level risk mostly comes from self-replicating AI. An AI that can design and build silicon chips (or some equivalent) can also build guns, but an AI designed to operate a gun doesn’t seem any more likely to be good at building silicon chips.
I do worry that AI in direct control of nuclear weapons would be an extinction risk, but for standard software engineering reasons (all software is terrible), not for AI-safety reasons. The good news is that I don’t really think there’s any good reason to put nuclear weapons directly in the hands of AI. The practical nuclear deterrent is submarines and they don’t need particularly fast reactions to be effective.
While military robots might be bad for other reasons, I don’t really see the path from this to doom. If AI powered weaponry doesn’t work as expected, it might kill some people, but it can’t repair or replicate itself or make long-term plans, so it’s not really an extinction risk.
I don’t think there’s anything misleading about that. Building AI that kills everyone means you never get to build the immortality-granting AI.
You could imagine a similar situation in medicine: I think that if we could engineer a virus that spreads rapidly among humans and rewrites our DNA to solve all of our health issues and make us smarter, that would be really good, and I might think it’s the most important thing for the world to be working on; but at the same time, I think the number of engineered super-pandemics should remain at zero until we’re very, very confident.
It’s worth noticing that MIRI has been working on AI safety research (trying to speed up safe AI) for decades and only recently got into politics.
You could argue that Eliezer and some other rationalists are slowing down AGI and that’s bad because they’re wrong about the risks, but that’s not a particularly controversial argument here (for example, see this recent highly-upvoted post). There are fewer (recent) posts about how great safe AGI would be, but I assume that’s because it’s really obvious.
I would be more worried about getting kicked out of parties because you think “the NRC is a good thing”.
More seriously, your opinion on this doesn’t sound very e/acc to me. Isn’t their position that we should accelerate AGI even if we know it will kill everyone, because boo government yay entropy? I think rationalists generally agree that speeding up the development of AGI (that doesn’t kill all of us) is extremely important, and I think a lot of us don’t think current AI is particularly dangerous.
To be fair, the one-in-a-million legislators who make it to the federal level probably are very good at politics. It’s kind of unreasonable to hold them to the standard of knowing (and demonstrating their knowledge of) things about economics or healthcare when their job is to win popularity contests by saying transparently ridiculous things.
I’m not downvoting because this has already been downvoted enough, but downvoting doesn’t mean you think the post has committed a logical fallacy. It means you want to see less of that kind of post on LessWrong. In this case, I would downvote because complaining about the voting system isn’t interesting or novel.
I realized after asking that my default prompt makes ChatGPT really verbose, so I changed the prompt to:
Identify types of human cells using the following marker genes. Identify one cell type for each row. Only provide the cell type name and no other commentary.
And it gave me:
Embryonic stem cells
Induced pluripotent stem cells
Endoderm
Granulosa cells
Oocytes
Pituitary gland cells
Germ cells
Leydig cells
Neurons
Meiotic cells
Sertoli cells
Neural progenitor cells
For number 9, it’s actually interesting that if I let it give commentary, it says:
CASC3, PGAP1, SLC6A16, CNTNAP4, NPHP1 - This set of genes does not point to a well-defined cell type but could suggest Neuronal Cells or specific types of Neural Precursors based on the presence of neural development and function genes.
Isn’t this just assuming the conclusion?