Why will they die? Say robots are swapping parts with lab-grown organs. The patient lives forevermore in a sealed lab. The robots are driven by AI models that learn from all patients.
What can kill them? What can kill them if engineers take straightforward and obvious precautions (redundant power sources, redundant AI models, redundant lab-grown organs made in different ways)?
If you accept that current engineers could at least paper-design a system that is unlikely to fail, when did they die? Their heart cannot stop, because there are 3+ parallel systems that serve that role. Their immune system can’t fail, for the same reason. Strokes don’t do much damage, because the robots react in seconds.
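The redundancy intuition is easy to quantify with a toy calculation. The per-year failure probabilities here are invented for illustration, and the independence assumption is the load-bearing one:

```python
# Toy reliability estimate for N independent redundant systems.
# All failure probabilities below are invented for illustration.

def p_all_fail(p_single: float, n_redundant: int) -> float:
    """Probability that every one of n independent backups fails
    in the same year (total loss of that organ function)."""
    return p_single ** n_redundant

# Suppose one artificial heart has a hypothetical 2% chance of
# failing in a given year.
p_one = 0.02
for n in (1, 2, 3):
    print(f"{n} parallel system(s): yearly total-failure chance = "
          f"{p_all_fail(p_one, n):.6f}")

# With 3 parallel systems the yearly chance is 0.02**3 = 8e-06,
# roughly one total failure per 125,000 patient-years -- but only
# if failures are independent, which common-cause events (power
# loss, a shared software bug) can break.
```

The independence caveat is why the precautions above include redundant power and organs made in different ways, not just multiple copies of one design.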
A. Did they die when they entered the biolab, as they cannot experience the world directly until there is a major technology advance?
B. Did they die when 10 percent of their brain rotted by age 110 and procedures to add neural stem cells and brain implants mean more of their cognition is artificial?
C. Did they die because they hit age 200 and have forgotten most of what happened when they could touch grass?
I can see many ways the above could fail and patients could die, but I can’t see a failure mode that is likely to kill all of them. Some people could live to 250+, and will if this tech becomes possible, without any pre-birth genetic edits.
A valid counterargument would be a grounded reason this will always fail. Today, for example, if you tried to do this, the death rate would be 100 percent, because “eventually the patient dies from the loss of an organ function that current science knows exists,” or “eventually the human workers doing this make a single fatal mistake,” or “eventually something weird and unexpected happens, like swollen lymph ducts that there is no treatment for, and they die.”
But when you imagine “robots driven by AI models that have learned to grow new bodies from scratch,” it’s hard to see a valid counterargument. Mistakes will be made, but generally only once, and eventually a subset of the original cohort reaches 250.
“Say robots are swapping parts with lab-grown organs.”
There’s a pretty big gap from swapping non-cognitive organs to swapping whatever goes wrong as brains age. But take that as solved—there are machines that replace or regenerate degradation on the information-content level. Which means they can distinguish between “bad” change due to damage and aging and “good” change due to learning and experiencing. I’m skeptical that this will happen before biological humans are obsolete.
Even so, the most likely (IMO) way to die is if the robots find something better to do. Either caring for more profitable entities, or creating bad art, or creating new entities that the robots love more (or love the same, but are way cheaper because they’re not biological and don’t degrade in these weird ways). In the same way that human professional caregivers give a lot more attention to their children than to patients.
They die when the brain stops being conscious, retrieving and forming memories, etc. They die irrevocably when the patterns in the brain are vanishingly unlikely to ever be re-instantiated and powered up.
Ok, but “humans who are alive now, or will be born in the future without genetic edits, are all doomed to die of aging” and “humans will not be worth keeping alive once the tech base is adequate to keep them alive 250+ years” are wildly different claims.
This also kinda simplifies. If you think a technological singularity is an inevitable emergent future event that no choice by humans can do more than delay, then this simplifies to “every human alive now will die soon after the Singularity, which will likely happen somewhere between 2028 and 2060”.
Which seems to be the view of many posters here. (I think the Singularity will happen, but I am not confident it will be as deadly to humans as others model.)
Either way, it seems you would believe that humans will die by 2060 and that aging doesn’t matter for many of us.
My argument is fractal, though. It’s not “there will be lots of investment and progress toward immortality, then it will be ignored because nobody noticed that it’s not worth it (to those who control it)”. It’s “at each step, the hard problem of brain repair will be … hard, and not solved, because replacement works so well already that continuation (of others; most of us would continue the self if we could) isn’t worth everything”. I do strongly expect that a lot of early death due to “simple” organ failure, cancer, heart disease, etc. will be reduced or fully eliminated. I don’t expect that to add up to true immortality, and the last bit of cascade failures involving brain degradation may well never be solved.
Ok, this reminds me of a pro-atheist argument. You concede that straightforward tech advances and automation can probably fix everything but brain degradation. Now hypothetically suppose someone demonstrated a method of partial brain repair (neural stem cells, gene edits to turn off aging mechanisms, replacing non-neuron support cells with de-aged replacements made by de-aging and mutation-correcting a single pluripotent stem cell and then differentiating it). So that cuts the problem in half, as at least half the cells in the brain are motile and replaceable.
Then of course there are the implants. Theoretically they can replace any function, and there are some successful experiments in rats.
And ok, say someone’s memories are in implants and their brain continues to degrade. Do you think there is some measurable cognitive capacity you can’t restore? When I think of this problem, I think of a VR world made with generative AI, with continuous cognitive tests for a variety of functions injected into the narrative. As a patient starts to perform poorly on some tests, more implants are installed and new structures are grown with stem cells (their brain might take on an alien shape and be several times its present size to fit all these modifications). This happens until the scores on all tests reach a target baseline.
It’s a continuous process. Because they keep reflecting on their original life and personality as the process happens, at all times the patient retains their original personality, human-level cognitive capacity, and most declarative memories, though there will be errors that get corrected by checking records.
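The test-and-repair loop described above can be sketched as a toy simulation. Every score, decay rate, and threshold below is invented purely to show the control-loop shape:

```python
import random

# Hypothetical test-and-repair loop: cognitive scores drift down,
# and any score below baseline triggers an intervention that restores it.

BASELINE = 0.90   # invented target score for each embedded cognitive test
DECAY = 0.01      # invented maximum per-year loss without intervention

def run_year(scores):
    """Degrade each capacity a little, then 'install an implant' for any
    test that fell below baseline. Returns the tests that were repaired."""
    repaired = []
    for test in scores:
        scores[test] -= DECAY * random.random()
        if scores[test] < BASELINE:
            scores[test] = BASELINE   # stand-in for implants / stem-cell grafts
            repaired.append(test)
    return repaired

random.seed(0)
scores = {"recall": 1.0, "spatial": 1.0, "language": 1.0}
for age in range(100, 150):
    for test in run_year(scores):
        print(f"age {age}: repaired {test}")
```

No score can sit below baseline for more than a year, which is the point of embedding the tests in the VR narrative: degradation is caught as it happens rather than after a collapse.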
So I mean... at age 100, probably every memory from before age 50 is a copy made by recall. Nothing is original. At age 150, the same goes for the (50, 100) interval. And so on for eternity.
So what I challenge you to find is some definition of death that lets the 250-year-old be dead but doesn’t let you define a 100-year-old... or 50-year-old... as deceased.
I will note that my own view is that it’s a continuous process: it’s possible for someone to be partially dead. With enough technology you can prevent someone from being completely dead for at least a billion years. I just think of it as a number in the interval [0, 1]. 0 means their body was incinerated and any journals burned; 1.0 is today; yesterday is 0.99999... I think humans “die” over time, regardless of still breathing, because a small amount of information is always being lost. The loss only stops with neural implants and backed-up files.
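One way to make that interval picture concrete is a toy decay model. The loss rate here is invented, and the exponential shape is just the simplest assumption for a steady fractional loss per year:

```python
import math

# Toy model of the [0, 1] "aliveness" number described above:
# the fraction of a person's original information still recoverable,
# assuming a constant (invented) fractional loss rate per year.

def aliveness(years_elapsed: float, loss_rate: float = 1e-5) -> float:
    return math.exp(-loss_rate * years_elapsed)

print(aliveness(0))                     # today: exactly 1.0
print(aliveness(1))                     # a year out: just under 1.0
print(aliveness(100, loss_rate=0.0))    # with full backups, the loss stops
```

Setting the loss rate to zero is the model's version of "the loss only stops with neural implants and backed-up files."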
“So that cuts the problem in half, as at least half the cells in the brain are motile and replaceable.”
I don’t think so. It may solve for 90% of the body’s mass, or even a large percentage of neurons, without making very much progress on the hard part of maintaining cognitive ability and continuity. I (and we) don’t know enough detail of what makes human brains work to have any clue whether it’s actually solvable in existing brains, which haven’t already developed with monitoring and electronic access.
And with that, I think I’ll bow out. Thanks for the discussion—I’ll read further posts and rebuttals, but probably won’t reply.
Ok. Just one note: I did address memory later in that same comment. You can grow new brain structures, digitally connect them, and over time they will learn the traits of the dying “original” networks they are mimicking. Note that we do this all the time with ANNs.
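The ANN analogy is essentially knowledge distillation: a new network learns to reproduce the behavior of an existing one by matching its outputs. A minimal sketch, using tiny linear “networks” purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Original" network: a fixed mapping we can still query (the dying tissue).
W_teacher = rng.normal(size=(4, 2))
def teacher(x):
    return x @ W_teacher

# "Replacement" network: starts blank and learns by matching the
# teacher's outputs on probe inputs, never seeing W_teacher directly.
W_student = np.zeros((4, 2))
for _ in range(2000):
    x = rng.normal(size=(32, 4))             # random probe inputs
    err = x @ W_student - teacher(x)         # mismatch with the original
    W_student -= 0.01 * x.T @ err / len(x)   # gradient step on squared error

# After training, the replacement reproduces the original's behavior.
print(np.max(np.abs(W_student - W_teacher)))
```

Doing the same for a biological network whose outputs you can only partially observe is, of course, the actual hard part.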
Another meta comment: I am in the position of someone explaining how you could use a big steam engine made of brass to reach 60 mph in a train. I don’t know of better techniques either. I am saying, “well, you could bolt the patient’s skull to a fixed point, expose the brain, and add additional structures to copy and augment it to restore lost capabilities.”
I don’t believe such a crude solution will be necessary; I just don’t know anything better with today’s tech base, and I am saying that this will work eventually. Your belief that “death always wins” would be like people in 1910 believing “aircraft will always crash”. Technically true, but the rate matters: with methodical refinement, midair refueling, and component replacement, you can make an aircraft fly for centuries or longer before it crashes.
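“Technically true but the rate matters” is easy to make quantitative. With invented per-year fatal-failure rates and an independence assumption:

```python
# Why the failure *rate* matters, not whether failure is possible.
# The per-year risks below are invented for illustration.

def p_survive(p_fail_per_year: float, years: int) -> float:
    """Chance of lasting `years` years with no fatal failure,
    assuming independent per-year risk."""
    return (1 - p_fail_per_year) ** years

for p in (0.1, 0.01, 0.001):
    print(f"per-year risk {p}: P(reach 250 years) = {p_survive(p, 250):.6f}")

# 0.9**250 is astronomically small, while 0.999**250 is about 0.78.
# Same "eventually it fails" logic, wildly different outcomes --
# exactly the 1910-aircraft point.
```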