This is obviously the one realistic way for most of us to see the far-ish future. I have put extensive thought into this myself, focusing on methods closer to what is practical right now, and I do think even the relatively crude ‘machine learning’ methods in use today could lead to massively better outcomes.
Think about the problem a little: if you could toss every scientific paper into a machine of arbitrary intelligence (not so much intelligence that it can simulate the whole universe particle by particle; it’s just super-smart, not a deity) and ask it to cure someone’s cancer, it probably couldn’t do it. There are too many holes in the data, too many errors, contradictory findings, and vague language in human-published papers. The machine would have no adequate model of the relationship between actions taken by its waldos and outcomes.
I actually think that to get to a reliable “cure” for what ails us humans, you would unfortunately have to start over from first principles. You would probably need on the order of millions of waldos: basic bioscience test cells, plus a vast library of samples and automated compound-synthesis labs. What the machine would be doing is building a model by recreating all of the basic science in a more rigorous manner than humans ever did.
Want to know what happens when you mix peptide <A23> with ribosome enzyme <P40>? Predict the outcome with your existing model. Test it. Analyze the bindings. Fold the result back into the model.
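Here is a minimal sketch of that predict/test/update loop. Everything in it is a stand-in I made up for illustration: `PlaceholderModel` is a trivial running-average model and `run_waldo_experiment` fakes the wet-lab measurement, since the real versions would be the whole point of the project.

```python
# A toy predict -> test -> update loop. PlaceholderModel and
# run_waldo_experiment are stand-ins invented for this sketch; the real
# versions would be the learned biology model and the robotic test cell.
import random


class PlaceholderModel:
    """Trivial stand-in model: predicts the mean of everything seen so far."""

    def __init__(self):
        self.observations = []

    def predict(self, reagent_pair):
        if not self.observations:
            return 0.5  # uninformed prior guess
        return sum(self.observations) / len(self.observations)

    def update(self, reagent_pair, measured):
        self.observations.append(measured)  # fold the new result back in


def run_waldo_experiment(reagent_pair):
    """Stand-in for the robot actually mixing the reagents and measuring binding."""
    return random.random()


def run_cycle(model, experiment_queue):
    for pair in experiment_queue:
        predicted = model.predict(pair)        # predict with the existing model
        measured = run_waldo_experiment(pair)  # physically test it
        model.update(pair, measured)           # analyze and learn from the result
        print(f"{pair[0]} + {pair[1]}: predicted {predicted:.2f}, measured {measured:.2f}")


model = PlaceholderModel()
run_cycle(model, [("peptide_A23", "ribosome_enzyme_P40"),
                  ("peptide_A23", "ribosome_enzyme_P41")])
```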
One advantage of using robotic waldos instead of human lab techs is reliable replication. You would have an exact log of every step taken, plus the sensor data from the robot as each step is completed. Replication is a matter of replaying the same sequence with the same robot type in a different lab.
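A sketch of what one replayable log record might look like; the field names and the JSON-lines format are my assumptions, not any real lab-automation schema:

```python
# Hypothetical append-only experiment log: every robot action gets one record,
# so a second lab with the same robot type can replay the exact sequence.
# Field names and the JSON-lines format are invented for this sketch.
import json
import time
from dataclasses import dataclass, field, asdict


@dataclass
class StepRecord:
    robot_model: str        # replay requires the same hardware type
    action: str             # e.g. "pipette", "incubate", "centrifuge"
    parameters: dict        # volumes, temperatures, durations, etc.
    sensor_readings: dict   # what the robot actually measured during the step
    timestamp: float = field(default_factory=time.time)


def append_step(log_path, step):
    """Append one step as a line of JSON; the whole file is the replication recipe."""
    with open(log_path, "a") as f:
        f.write(json.dumps(asdict(step)) + "\n")


def load_protocol(log_path):
    """Read the log back so another lab can replay it step by step."""
    with open(log_path) as f:
        return [StepRecord(**json.loads(line)) for line in f]
```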
So the basic idea is that you build up a model of higher-level outcomes primarily from your data on lower-level experiments. That is ultimately how you reach the point where you can design a custom molecule or peptide to address a specific problem: with the model, you can predict the probable outcomes of that peptide interacting with the patient, which reduces how many candidates you actually have to try.
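As a toy illustration of that pruning step, here is a hypothetical pre-screen that ranks candidate molecules by the model’s predicted benefit and predicted harm before anything is synthesized. The scoring functions, thresholds, and molecule names are placeholders I invented.

```python
# Hypothetical pre-screening of candidate molecules with the learned model.
# predict_efficacy and predict_toxicity stand in for the model built up from
# lower-level experiments; the thresholds and top_k are arbitrary choices.
def shortlist(candidates, predict_efficacy, predict_toxicity,
              min_efficacy=0.7, max_toxicity=0.2, top_k=10):
    """Keep only candidates the model expects to work, then take the best few."""
    scored = []
    for mol in candidates:
        efficacy = predict_efficacy(mol)
        toxicity = predict_toxicity(mol)
        if efficacy >= min_efficacy and toxicity <= max_toxicity:
            scored.append((efficacy - toxicity, mol))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [mol for _, mol in scored[:top_k]]  # only these go to the waldos


# Toy usage with made-up scoring functions:
picks = shortlist(["mol_a", "mol_b", "mol_c"],
                  predict_efficacy=lambda m: 0.9 if m != "mol_c" else 0.3,
                  predict_toxicity=lambda m: 0.1)
```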
You would also need a vast array of testing apparatus: not lab animals, but synthetic organs 3D-printed from human cells, and entire mockup human bodies (everything but the brain) made of separate organs in separate life-support containers. When you hit an unexpected interaction or problem, you isolate it with binary search. If a drug causes liver problems, you don’t shrug your shoulders; you send a sample of the liver cells and the drug back to the basic-science wing and keep subdividing the set of liver proteins in half until you discover the exact molecule the drug interferes with, and the exact binding site. That result informs your search for future drugs.
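That subdividing step is literally a binary search over candidate targets. A rough sketch, assuming a single culprit protein and a hypothetical `assay_shows_interference(drug, pool)` test that reports whether the drug disrupts anything in a given pool of proteins (the real version would be a wet-lab assay run by the waldos):

```python
# Hypothetical binary search for the single protein a drug interferes with.
# assay_shows_interference(drug, pool) is assumed to return True if the drug
# disrupts anything in that pool; each call would be a fresh wet-lab assay.
def find_interference_target(drug, proteins, assay_shows_interference):
    """Narrow a list of candidate proteins down to the one the drug hits."""
    pool = list(proteins)
    while len(pool) > 1:
        half = len(pool) // 2
        left, right = pool[:half], pool[half:]
        # Test one half; if the interference shows up there, keep that half,
        # otherwise the culprit must be in the other half.
        pool = left if assay_shows_interference(drug, left) else right
    return pool[0] if pool else None
```

This only works cleanly if one protein is responsible; an interaction involving several proteins at once would need a more careful search strategy.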
In summary, the advantages, even with merely existing techniques, include:
a. Ability to do mass-scale experimentation.
b. No need for data to meet some arbitrary threshold of significance before you can use it. If there is a tiny relationship between two variables, a floating-point neural weight can hold it, even if the relationship is very small.
c. Replicability of experiments.
d. A model of biology more complex than any human mind can hold.
e. Ability to use advanced planners, similar to what was demonstrated in AlphaGo, to make intelligent guesses for drug discovery.
f. Ability to design a drug just in time to help a specific patient.
I want to elaborate on (f) a little. We should be weighing the risks and rewards properly: if a patient is actively dying, then the closer they are to their predicted death, the more risk we should take in an attempt to save them. The closer death is, the more dramatic the intervention. Not only will this periodically save people’s lives, it also allows for rapid progress. Suppose the person is dying from an infection, and the AI agent recommends a new antiviral molecule that works against a sample of the person’s infection in the lab. The molecule gets rapidly synthesized automatically and delivered by an injection robot within hours of its design. And then the patient suddenly develops severe liver failure and dies.
The NEXT time this happens, the agent knows the liver is a problem. It has made copies of the patient’s liver from their corpse (and of course frozen their brain), investigated the failure, and found a variant of the molecule that works. So the next patient is treated successfully.
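One way to make the risk weighting above concrete is to let the acceptable failure probability of an intervention grow as the predicted time to death shrinks. The 30-day horizon and the linear curve below are arbitrary choices of mine for illustration, not a real triage policy.

```python
# Illustrative only: the tolerated risk of a fatal side effect rises as
# predicted survival falls. The 30-day horizon and linear shape are arbitrary.
def acceptable_risk(predicted_days_to_death, baseline_risk=0.001, horizon_days=30.0):
    """Return the maximum tolerated probability that the intervention itself kills."""
    if predicted_days_to_death >= horizon_days:
        return baseline_risk  # patient is stable: stay conservative
    urgency = 1.0 - predicted_days_to_death / horizon_days
    # Interpolate from the conservative baseline toward near-total risk tolerance,
    # because at the limit almost any intervention beats certain death.
    return baseline_risk + urgency * (1.0 - baseline_risk)


# Example: a patient predicted to die in 2 days tolerates roughly 93% risk,
# while one predicted to die in 25 days tolerates about 17%.
print(acceptable_risk(2.0), acceptable_risk(25.0))
```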
Thanks for sharing your vision. I will be working on a chapter of a book on the same topic, and I hope to incorporate your ideas into it, with proper attribution, if you don’t mind.