This is the source I found. It’s fairly old, so if you’ve found something that supersedes it I’d be interested.
An initial search doesn’t confirm whether or not mycoplasma age. Bacteria do age though; even seemingly-symmetrical divisions yield one “parent” bacterium that ages and dies.
If mycoplasma genuinely don’t, that would be fascinating and potentially yield valuable clues on the aging mechanism.
Minimal cell experiments (making cells with as small a genome as possible) have already been done successfully. This presumably removes transposons, and I have not heard that such cells had abnormally long lifespans.
One possibility is that there are at least two aging pathways-the effect of transposons, which evolution wasn’t able to eliminate, and an evolved aging pathway intended to eliminate older organisms so they don’t compete with their progeny (doing so while suffering ill health from transposon build-up would be less fit than dying and delegating reproduction to one’s less transposon-heavy offspring).
There is significant evidence that most organisms have evolved to eventually deliberately die, independent of problems like transposons that aren’t intentional on the level of the organism. Yamanaka factors can reverse some symptoms of aging, and appear to do so by activating a rejuvenation pathway. This makes perfect sense if the body ordinarily reserves that pathway for gamete production, while deliberately letting itself deteriorate. It is extremely confusing if aging is purely damage, however. Yamanaka factors don’t provide new information (other than the order to rejuvenate) or resources; a body that is doing its best to avoid aging wouldn’t seem to benefit from them, and could presumably evolve to produce them if evolution found this desirable. Other examples include the beneficial effects of removing old blood plasma (this appears to trick the body into thinking it is younger, which should work on a deliberately aging organism but not one that aged purely through damage), the fact that rat brain cells deteriorate as they perceive the brain gradually stiffening with age, but rejuvenate if their ability to detect stiffness is removed, and the fact that some species of octopus commit suicide after reproducing, but refrain from doing so if a particular gland is removed.
If both transposons and a deliberate aging pathway contribute to aging, it would be very interesting to see what happens in an organism with both transposon inactivation and Yamanaka factor treatment. Neither appears to create massive life extension on its own, but together they might do so, or at least point out worthwhile directions for further inquiry.
“Or maybe anti-aging is inherently interesting to some people who want to live to see flying cars...”
Maybe anti-aging is inherently interesting? Do you not expect some people to want to survive? The will to live is inherent in humanity for very obvious evolutionary reasons. Moreover, anyone whose quality of life is positive has reason to want to live so long as that is the case. There are religious people who want to die so as to attain an afterlife, but unless you are hoping for Heaven/Nirvana/72 Virgins/whatever, or your current quality of life is negative, anti-aging should be inherently interesting to you.
“and no rational critique would dissuade them.”
If something is inherently interesting, people will want it unless there is a cost that exceeds the benefit. If there is such a cost, such a rational critique will in fact dissuade rational people. This seems like a cheap attempt to make transhumanists seem unwilling to listen to reason without actually making a case to that effect.
“In short, the best approach would be to rebuild your tree from scratch. This is why having kids is more efficient than just having more time on earth.”
More efficient for what purpose? Even if we assume you are correct that experience is a negative to career success (not what is typically observed, to put it mildly), what are you hoping to attain with your career that is better served by dying and hoping your children will carry on the work? It can’t be making money for you-you do not benefit from money when you’re dead! It can’t be making money for your children; you’re as dismissive of their survival as of your own. It sounds like you want money for your genetic lineage, but why? Normally people value their wellbeing and that of their family; all of you dying does not serve this. You can’t even claim to be following some underlying evolutionary principle, as the survival of you and your children will preserve your genes better than letting them be diluted down over generations.
“Even if birth rates went down to replacement rate tomorrow, improvements in longevity would result in more people being on the planet at any given time.”
Correct. On the other hand, while overpopulation is a potential concern with longevity, it is worth taking five minutes to consider the problem rather than simply electing to die. Potential solutions include interplanetary colonization, mind uploading, better birth control or simply handing off the problem to a friendly AI. All of these are technically challenging, but so is life extension. It does not make sense to assume that a world capable of life extension must be forever incapable of finding a solution to overpopulation. To assert that this question must necessarily make life extension harmful is to assert that we know that no such solution can be found, quite the extraordinary claim. The milder claim that this is a concern worth addressing is by contrast valid, but that’s not a reason to abandon life extension, merely one to develop population solutions in tandem, if we can.
“Arguably, one of reasons young people are frustrated with modern politics is that boomers are still very much in the driver’s seat. ”
Easy enough to mandate political retirement at a particular age. Disenfranchisement is better than death. To quote Eliezer’s short story Three Worlds Collide, “Only youth can Administrate. That is the pact of immortality.”
“...we’ll need more senior care. This may become a costly burden on future generations.”
Potentially. Or a population that spends more time healthy and able to work and less time slowly decaying in retirement might have a lighter burden on future generations. Or perhaps a growing, potentially-automated economy will obviate the question entirely. This is much like the overpopulation question in that it conflates desirability with prudence. Desirability is whether or not we consider a thing beneficial as such; whether or not we’d want it in the absence of countervailing costs. Prudence is whether or not we consider a thing worthwhile on net even counting the costs. You point out, correctly, that overpopulation and a strained senior care system are potential risks that may need to be addressed if we want to make life extension prudent. That does not mean that it is not desirable, nor that we should immediately view the costs it could impose as impossible to mitigate.
“We may also have to consider assisted suicide for people who would be dead if it weren’t for technology. Should we keep them alive because we can?”
Do these people want to die? Are we out of resources to sustain them with? If the answers are no and no, why should we kill them? If one or more of the answers is yes, that’s a concern, but one better answered by seeking to improve their quality of life or acquire more resources, at least if we value human wellbeing. And if we don’t, why are we bothering to stay alive ourselves, or avoid killing willy-nilly?
Ultimately, it is human nature to value survival. We cannot always survive, we may sacrifice ourselves for others we care for if we cannot both survive, and some people even choose death out of misery or religious faith. Yet where it is possible, it is better to make life worth living than to give up and die. Where it is possible, it is better to save everyone rather than sacrificing our lives. Where it is possible, it is better to oppose aging like we would any other injury, and while I cannot claim that life is better than Heaven, you did not bring up afterlives, so it seems unlikely that they are factoring into your reasoning. Unless you assert that the natural order of things was divinely, benevolently ordained, there is no reason to think that death by aging is somehow better than any other threat to life, be it disease, injury, war, poverty or the like.
Would you use those same reasons to argue for Covid?
This is also true for many people not in that age range. “Many people in a group will try to make life harder for those around them” isn’t much of an argument for incarceration. If it were, who would you permit to be free?
That might work. Maybe have the adversarial network try to distinguish GPT-3 text from human text? That said, GPT-3 is already trying to predict humanlike text continuations, so there’s a decent chance that having a separate GAN layer wouldn’t help. It’s probably worth doing the experiment though; traditional GANs work by improving the discriminator as well as the desired categorizer, so there’s a chance it could work here too.
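The discriminator half of that experiment can be sketched very simply. The following is a toy illustration only (the texts, feature choice and logistic-regression stand-in for a neural discriminator are all my assumptions, not anything from the comment above): train a classifier to output the probability that a text is human-written, which is exactly the signal a GAN-style setup would feed back to the generator.

```python
import math
import string

def featurize(text):
    # Crude bag-of-letters features: relative frequency of each letter.
    # A real discriminator would use far richer features; this is a toy.
    text = text.lower()
    counts = [text.count(c) for c in string.ascii_lowercase]
    total = sum(counts) or 1
    return [c / total for c in counts]

def train_discriminator(human_texts, model_texts, epochs=2000, lr=1.0):
    # Logistic-regression "discriminator": label 1 = human, 0 = generated.
    data = ([(featurize(t), 1.0) for t in human_texts]
            + [(featurize(t), 0.0) for t in model_texts])
    w, b = [0.0] * 26, 0.0
    for _ in range(epochs):
        for x, y in data:
            logit = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-logit))
            g = p - y  # gradient of log loss with respect to the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def human_probability(text, w, b):
    x = featurize(text)
    logit = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-logit))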
You say vulnerable, low-income people “must put themselves at risk to stay alive”, then propose not letting them do so? A lockdown, by itself, does not give the poor any money. If you wish to prevent them from working risky jobs to support themselves, you must either offer them some other form of support or assert that they have other, better options (“homelessness, malnourishment, etc.”?), but are making the wrong decision by working and thus ought to be prevented from doing so. Being denied options is only protection if one is making the wrong decision.
Do you think these people ought to be homeless and malnourished? If so, that’s a hard case to make morally or practically. If not, you should offer an alternative, rather than simply banning what you yourself state is their only path to avoiding this.
“We hold all Earth to plunder, all time and space as well. Too wonder-stale to wonder at each new miracle.”-Rudyard Kipling
This is a genuine concern, and this may be particularly high-variance advice. However, a focus on avoiding mistakes over trying new “superstrategies” might also help some people with akrasia. It’s easier to do what you know than seek some special trick. Personally, at least, I find akrasia is worst when it comes from not knowing what to do next. And while taking fewer actions in general is usually a bad idea, trying to avoid mistakes could also be used for “the next time I’m about to sit around and do nothing, instead I’ll clean/program/reach out to a friend.” This doesn’t sound like it has to be about necessarily doing less.
Consider a charity providing malaria nets. Somebody has to make the nets. Somebody has to distribute them. These people need to eat, and would prefer to have shelter, goods, services and the like. That means that you need to convince people to give food, shelter, etc. to the net makers. If you give them money, they can simply buy their food.
This of course raises the question of why you can’t simply ask other people to support the charity directly. But consider someone providing a service to the charity workers: even if they care passionately about fighting malaria, they do not want to run out of resources themselves! If you make food, and give it all to the netweavers, how can you get your own needs met? What happens when you need medical care, and the doctor in turn would love to treat a supporter of the anti-malaria fight, but wants to make sure he can get his car fixed?
In a nutshell, we want to make sure there will be resources available to us when we need them. Money allows us to keep track of those resources: if everyone treats money as valuable, we can be confident of having access to as many resources as our savings will buy at market rates. If we decide instead to have everyone be “generous” and give in the hopes that others will give to them in turn, it becomes impossible to keep track of who needs to do how much work or who can take how many resources without creating a shortage. You can’t even solve that problem by having everyone decide to work hard and consume little; doing too much can be as harmful as doing too little, as resources get foregone. And of course, that’s with everyone cooperating. If someone decides to defect in such a system, they can take and take while providing nothing in return. Thus, it is much easier to manage resources with money, despite it being “not real”, even in the case of charity. Giving money to a charity is a commitment to consume less (or to give up the right to consume as much as you possibly could, whether or not your actual current spending changes), freeing up resources that are then directed to the charity.
By that definition nothing is zero sum. “Zero sum” doesn’t mean that literally all possible outcomes have equal total utility; it means that one person’s gain is invariably another person’s loss.
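The distinction can be made concrete with a pair of toy payoff tables (the games and numbers below are my own illustrations, not from the exchange above): a game is zero-sum when every outcome's payoffs add to the same constant, so one player's gain is exactly the other's loss.

```python
def is_zero_sum(payoffs):
    """payoffs: dict mapping outcome -> (player_a_payoff, player_b_payoff).

    True when every outcome sums to the same constant (conventionally
    zero); such constant-sum games are strategically equivalent to
    zero-sum ones.
    """
    totals = {a + b for a, b in payoffs.values()}
    return len(totals) == 1

# Matching Pennies: whatever one player wins, the other loses.
matching_pennies = {
    ("heads", "heads"): (1, -1),
    ("heads", "tails"): (-1, 1),
    ("tails", "heads"): (-1, 1),
    ("tails", "tails"): (1, -1),
}

# A voluntary trade: some outcomes leave both players better off,
# so total utility varies across outcomes and the game is not zero-sum.
trade = {
    ("sell", "buy"): (2, 3),
    ("no-deal", "no-deal"): (0, 0),
}
```

Note that `trade` fails the test not because all its outcomes are equal, but because its outcomes differ in *total* payoff, which is the point being made above.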
“But Petrov was not a launch authority. The decision to launch or not was not up to him, it was up to the Politburo of the Soviet Union.”
This is obviously true in terms of Soviet policy, but it sounds like you’re making a moral claim. That the Politburo was morally entitled to decide whether or not to launch, and that no one else had that right. This is extremely questionable, to put it mildly.
“We have to remember that when he chose to lie about the detection, by calling it a computer glitch when he didn’t know for certain that it was one, Petrov was defecting against the system.”
Indeed. But we do not cooperate in prisoners’ dilemmas “just because”; we cooperate because doing so leads to higher utility. Petrov’s defection led to a better outcome for every single person on the planet; assuming this was wrong because it was defection is an example of the non-central fallacy.
“Is that the sort of behavior we really want to lionize?”
If you will not honor literally saving the world, what will you honor? If we wanted to make a case against Petrov, we could say that by demonstrably not retaliating, he weakened deterrence (but deterrence would have helped no one if he had launched), or that the Soviets might have preferred destroying the world to dying alone, and thus might be upset with a missileer unwilling to strike. But it’s hard to condemn him for a decision that predictably saved the West, and had a significant chance (which did in fact occur) of saving the Soviet Union.
This seems wrong.
The second law of thermodynamics isn’t magic; it’s simply the fact that when you have categories with many possible states that fit in them, and categories with only a few states that count, jumping randomly from state to state will tend to put you in the larger categories. Hence melting-arrange atoms randomly and it’s more likely that you’ll end up in a jumble than in one of the few arrangements that permit solidity. Hence heat equalizing-the kinetic energy of thermal motion can spread out in many ways, but remain concentrated in only a few; thus it tends to spread out. You can call that the universe hating order if you like, but it’s a well-understood process that operates purely through small targets being harder to hit; not through a force actively pushing us towards chaos, making particles zig when they otherwise would have zagged so as to create more disorder.
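The counting argument above can be sketched as a toy simulation (assumptions all mine: a hundred two-state “particles”, random single flips standing in for thermal jostling; this is an illustration of the statistics, not real thermodynamics):

```python
import random
from math import comb

N = 100  # two-state "particles"

# Count microstates per macrostate, where the macrostate is the number
# of particles in state 1. The mixed middle categories are
# astronomically larger than the ordered extremes: comb(100, 50) is
# roughly 1e29, while comb(100, 0) is exactly 1.
middle, extreme = comb(N, N // 2), comb(N, 0)

# Start fully ordered and apply random single-particle flips. Nothing
# pushes the system toward disorder; it drifts there because almost
# every state it can randomly land in belongs to a huge middle category.
random.seed(0)
state = [0] * N
for _ in range(10_000):
    state[random.randrange(N)] ^= 1  # flip one randomly chosen particle

ones = sum(state)  # lands near N/2 with overwhelming probability
```

Returning to the fully ordered macrostate is not forbidden, merely a target so small that random motion essentially never hits it, which is the whole content of the second law.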
This being the case, claiming that life exists for the purpose of wasting energy seems absurd. Evolution appears to explain the existence of life, and it is not an entropic process. Positing anything else being behind it requires evidence, something about life that evolution doesn’t explain and entropy-driven life would. Also, remember, entropy doesn’t think ahead. It is purely the difficulty of hitting small targets; a bullet isn’t going to ‘decide’ to swerve into a bull’s eye as part of a plan to miss more later! It would be very strange if this could somehow mold us into fearing both death and immortality as part of a plan to gather as much energy as we could, then waste it through our deaths.
This seems like academics seeking to be edgy much more than a coherent explanation of biology.
As for transhumanism being overly interested in good or evil, what would you suggest we do instead? It’s rather self-defeating to suggest that losing interest in goodness would be a good idea.
So enlightenment is defragmentation, just like we do with hard drives?
That makes a fair bit of sense. And what are your thoughts on work days? I get my work for my job done, but advice on improving productivity on chores and future planning would be appreciated. Also good point on pica!
Very interesting dichotomy! Definitely seems worth trying. I’m confused about the reading/screen time/video games distinction though. Why would reading seem appealing but being in front of a screen not? Watching TV is essentially identical to reading, right? You’re taking in a preset story either way. Admittedly you can read faster than TV characters can talk, so maybe that makes it more rewarding?
Also, while playing more video games while recovering and fewer while resting makes sense (they’re an easy activity while low on energy, and thus will take up much of a recovery day, but less of a rest day), “just following my gut” can still lead to plenty of gaming. Does this mean that I should still play some on a rest day, just less? That I almost never have enough energy to rest instead of recover? That I’m too into gaming and this is skewing my gut such that a good rest day rule would be “follow your gut, except playing fewer/no games today”?
First off, you probably want to figure out if your nihilism is due to philosophy or depression. Would you normally enjoy and value things, but the idea of finite life gets in the way? Or would you have difficulty seeing a point to things even if you were suddenly granted immortality and the heat death of the universe was averted?
Either way, it’s difficult to give a definitive solution, as different things work for different people. That said, if the problem seems to be philosophy, it might be worth noting that the satisfaction found in a good moment isn’t a function of anything that comes after it. If you enjoy something, or you help someone you love, or you do anything else that seems valuable to you, the fact of that moment is unchangeable. If the stars die in heaven, that cannot change the fact that you enjoyed something. Another possible solution would be trying to simply not think about it. I know that sounds horribly dismissive, but it’s not meant to be. In my own life there have been philosophical (and in my case religious) issues that I never managed to think my way out of… but when I stopped focusing on the problem it went away. I managed this only after getting a job that let my brain say “okay, worry about God later, we need to get this task done first!” If you think it would work for you, finding an activity that demands attention might help (if you feel that your brain will let you shift your attention; if not this might just be overly stressful).
If the problem seems to be depression, adrafinil and/or modafinil are extremely helpful for some people. Conventional treatments exist too of course (therapy and/or anti-depressants); I don’t know anyone who has benefited from therapy (at least not that they’ve told me), but one of my friends had night and day improvement with an anti-depressant (sadly I don’t remember which one; if you like I can check with her). Another aspect of overcoming depression is having friends in the moment and a plan for the future, not a plan you feel you should follow, but one you actively want to. I don’t know your circumstances, but insofar as you can prioritize socialization and work for the future, that might help.
As for the actual question of self-improvement, people vary wildly. An old friend of mine found huge improvements in her life due to scheduling; I do markedly better without it. The best advice I can offer (and this very well might not help; drop it if it seems useless or harmful) is three things:
Don’t do what you think you should do, do what you actually want to (if there isn’t anything that you want, maybe don’t force yourself to find something too quickly either). People find motivation in pursuing goals they actually find worthwhile, but following a goal that sounds good but doesn’t actually excite you is a recipe for burnout.
Make actionable plans-if there’s something you want to do, try to break it down into steps that are small enough, familiar enough or straightforward enough that you can execute the plan without feeling out of your depth. Personally, at least, I find there’s a striking “oh, that’s how I do that” feeling when a plan is made sufficiently explicit, a sense that I’m no longer blundering around in a fog.
Finally, and perhaps most importantly, don’t eliminate yourself. That is, don’t abandon a goal because it looks difficult; make someone else eliminate you. This is essential because many tasks look impossible from the outside, especially if you are depressed. It’s almost the mirror image of the planning fallacy-when people commit to doing something, it’s all too easy to envision everything going right and not account for setbacks. But before you actually take the plunge, so to speak, it’s easy to just assume you can’t do anything, which is simply not true.
“To understand anatomy, dissect cadavers.” That’s less a deliberate study of an edge case, and more due to the fact that we can’t ethically dissect living people!
At the risk of appearing defective, isn’t this the sort of action one would only want to take in a coordinated manner? If it turns out that use of such delivery services tends to force restaurants out of business, then certainly one would prefer a world where we don’t use those services and still have the restaurants-you can’t order take out from a place that doesn’t exist anymore! But deciding unilaterally to boycott delivery imposes a cost without any benefit-whether I choose to use delivery or not will not make the difference. This looks like a classic tragedy of the commons, where it is best to coordinate cooperation, but cooperating without that coordination is a pure loss.
Interesting article. It argues that the AI learned spam clicking from human replays, then needed its APM cap raised to prevent spam clicking from eating up all of its APM budget and inhibiting learning. Therefore, it was permitted to use inhumanly high burst APM, and with all its clicks potentially effective actions instead of spam, its effective actions per minute (EPM, actions not counting spam clicks) are going to outclass human pros to the point of breaking the game and rendering actual strategy redundant.
Except that if it’s spamming, those clicks aren’t effective actions, and if those clicks are effective actions, it’s not spamming. To the extent AlphaStar spams, its superhuman APM is misleading, and the match is fairer than it might otherwise appear. To the extent that it’s using high burst EPM instead, that can potentially turn the game into a micro match rather than the strategy match that people are more interested in. But that isn’t a question of spam clicking.
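To make the APM/EPM distinction concrete (all numbers here are illustrative, not taken from the article or the matches):

```python
def epm(total_actions, spam_clicks, minutes):
    # Effective actions per minute: discard spam clicks before dividing.
    return (total_actions - spam_clicks) / minutes

# A heavy spam-clicker and a spam-free agent can show the same raw APM
# while differing wildly in effective output (made-up numbers):
human_pro = epm(total_actions=500, spam_clicks=300, minutes=1)  # 200 EPM
agent = epm(total_actions=500, spam_clicks=0, minutes=1)        # 500 EPM
```

This is why raw APM comparisons between the agent and human pros can cut either way: identical APM figures hide the spam fraction, which is exactly the quantity under dispute.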
Of course, if it started spam clicking, needed the APM cap raised, converted its spam into actual high EPM and DeepMind didn’t lower the cap afterwards, then the article’s objection holds true. But that didn’t sound like what it was arguing (though perhaps I misunderstood it). Indeed, it seems to argue the reverse, that spam clicking was so ingrained that the AI never broke the habit.