As in, the number and type of neurotransmitter receptors embedded in each synapse.
This isn’t “disappointing”; it was expected. The initial wiring layout is random, though there’s some pruning that occurs in early brain development.
The reason you had limited instruction in shooting a weapon was probably a related problem I observed.
The military spends lavish sums on expensive capital equipment and human resources, but it seems to pinch pennies on the small stuff. For example, I recall being assigned numerous times to various cleanup details, and I noticed we never had any shortage of manpower—often 10+ people—but there would be an acute shortage of mops, cleaning rags, and chemicals.
Similarly, we all had rifles, but live ammunition to train with was in very short supply. I would mentally compute how backwards this was. It costs the government several hundred dollars in pay and benefits to have each one of us standing around for a day, yet they were pinching pennies on ammo that cost maybe 10 cents a round.
I don’t know what causes these backwards situations, where you would be drowning in expensive equipment and people yet critically short of cheap, basic supplies, but I’ve seen many references to the problem.
Prediction: Harry has stolen a march on Quirrelmort. I predict that between the time Professor McGonagall unlocked his Time-Turner and Quirrelmort entered the room, he had already used the device to visit the library’s restricted section.
At least, I hope so: I really want to learn how “spell creation” is done, per EY’s interpretation. That will tell us a lot about what magic actually is and what can be done to achieve Real Ultimate Power.
Furthermore, this would be fully rational. Harry’s analysis of what to do next should have already made it abundantly clear that he needs to obtain more information, and the restricted section obviously has stuff that might be helpful. And why start on a task now when you can start on it 6 hours ago?
You missed the boat completely. Not modding down because this is an easy cognitive error to make, and I just hit you with a wall of text that does need better editing.
I just said that the model of “basic research” is WRONG. You can’t throw billions at individual groups, each nibbling away at a tiny piece of the puzzle doing basic research, and expect to get a working device that fixes the real problems.
You’ll get rafts of “papers” that each try to inform the world about some tiny element of how things work, but fail miserably in their mission for a bunch of reasons.
Instead you need targeted, GOAL-oriented research, and a game plan to win. When groups learn things, they need to update a wiki or some other information management tool with what they have found out and how confident they are that it is correct—not hide their actual discovery in a huge jargon-laden paper with 50 references at the end.
The designer had a specific design goal: “thou shalt replicate adequately well under the following environmental conditions”...
Given the complex, intricate mechanisms that humans seem to have that allow for this, the “designer” did a pretty good job.
Cognitive biases boost replication under the environmental conditions they were meant for, and they save on the brainpower required.
So yes, I agree with you. If the human brain system were an engineered product, it clearly meets all of the system requirements the client (mother nature) asked for. It clearly passes the testing. The fact that it internally takes a lot of shortcuts and isn’t capable of optimal performance in some alien environment (cities, virtual spaces, tribes larger than a few hundred people) doesn’t make it a bad solution.
Another key factor you need to understand in order to appreciate nature is the constraints it is operating under. We can imagine a self-replicating system that has intelligence of comparable complexity and flexibility to humans and that makes decisions that are optimal to a few decimal places. But does such a system exist inside the design space accessible to Earth biology? Probably not.
The simple reason for this is 3 billion years of version lock-in. All life on Earth uses a particular code-space, where every possible codon in DNA maps to a specific amino acid (or a stop signal). With 3 bases per codon, there are 4^3 possibilities, and all of them are already assigned. In order for a new amino acid to be added to the code, existing codons would have to be repurposed, or an organism’s entire architecture would need to be extended to codons of 4 (or more) bases.
We can easily design a state machine that translates XXX → _XXX, remapping an organism’s code to a new coding scheme. However, such a machine would be incredibly complicated at the biological level—it would be a huge complex of proteins and various RNA tools, and it would only be needed once in a particular organism’s history. Evolution is under no selective pressure to evolve such a machine, and the probability of it occurring by chance is just too small.
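To make the lock-in concrete, here is a tiny Python sketch (purely illustrative, not real biochemistry): it enumerates the 4^3 = 64 possible codons, all of which are already spoken for, and shows what a hypothetical XXX → _XXX remapping to a four-base code would look like as a mechanical rewrite. The rewrite itself is trivial; building the biological machinery that performs it, while simultaneously rewriting every reader of the code, is not.

    from itertools import product

    BASES = "ACGU"  # RNA bases; DNA would use T in place of U

    # All 4^3 = 64 possible codons -- every one is already assigned
    # (61 to amino acids, 3 as stop signals), so the code is "full".
    codons = ["".join(c) for c in product(BASES, repeat=3)]
    print(len(codons))  # 64

    # A hypothetical remapping machine: prepend a fixed base to every codon,
    # turning the 3-base code into a 4-base code with room for new meanings.
    def remap_to_four_base(gene, pad="A"):
        return [pad + codon for codon in gene]

    old_gene = ["AUG", "GCU", "UAA"]      # toy "gene"
    print(remap_to_four_base(old_gene))   # ['AAUG', 'AGCU', 'AUAA']
    # Four-base codons would give 4^4 = 256 slots, but every reader of the code
    # (ribosomes, tRNAs, and so on) would have to change at the same time.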
To summarize, everything that can ever be designed by evolution has to be made of amino acids from a particular set, or created as a derivative product by machinery made of these amino acids.
An organism without cognitive biases would probably need a much more powerful brain. Nature cannot build such a brain with the parts available.
The theory goes: plagues that are especially deadly must spread through the body extremely quickly. Otherwise, they give the immune system time for B cells to produce antibodies. Yet if the plague spreads quickly, it has a short incubation period, which means hosts die before spreading it. Ebola is thought to fit in this part of the ecology, and this is one reason why the virus is rare.
A virus that spread itself like the flu but also killed like Ebola would be pushed by evolution away from these properties, because it would kill off its hosts too quickly.
Another factor is that some of the better viruses for evading the immune system (HIV) depend on being able to randomly recombine and change the pattern for their outer shells.
If you designed a virus that had a tough outer coating, targeted the cells and receptors needed to kill the host, and had some kind of sophisticated clock mechanism to force a long incubation period, you would be forced to give it genes coding for complex error-correcting proteins so that each new generation of the virus would have a low chance of containing a mutation. This would in turn prevent it from evolving, allowing the immune system (and synthetic antibodies) to target it easily.
So, you’d have to deliberately make it able to adjust its own outer coat randomly, but not any other components.
Such a virus is not something evolution is likely to ever create (for one, it would drive its hosts extinct, and for another, evolution doesn’t work like this: evolution as an algorithm finds the highest point on the NEAREST hill in the solution space, not the peak of a theoretical mountain that towers over the solution space).
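To illustrate that last point, here is a toy Python hill-climber on an invented two-peak landscape (not a biological model): greedy local search from a given starting point finds the nearest peak and stops, even when a much higher peak exists elsewhere.

    import math

    def fitness(x):
        # Invented landscape: a low hill near x=2 and a much taller one near x=8.
        return math.exp(-(x - 2) ** 2) + 3 * math.exp(-(x - 8) ** 2)

    def hill_climb(x, step=0.05):
        # Evolution-as-local-search: only accept small moves that improve fitness.
        while True:
            best = max((x - step, x, x + step), key=fitness)
            if best == x:
                return x
            x = best

    print(round(hill_climb(1.0), 2))  # ~2.0: stuck on the nearest, lower hill
    print(round(hill_climb(7.0), 2))  # ~8.0: only a start near the tall hill finds it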
Net result: with very sophisticated bioscience, a killer pathogen combining these qualities could be created. However, you are correct that there is a reason you don’t see them in nature.
This isn’t true. Viruses are subject to evolutionary pressure even inside a single patient. They don’t replicate perfectly (partly because they have to be small and simple, and don’t have very good control of the cellular environment they are inside, being invaders and all) and so variants of the particle compete with one another. Because of this, features that might be desired in a bioweapon but are not needed in order for the virus to replicate can get lost.
For instance, a bioweapon virus might contain genes for botulism toxin in order to kill the host. However, copying this gene every generation would diminish the particle’s ability to replicate, and so variants of the particle that are missing the gene would have a small evolutionary advantage. After just a few patients, the wild version of the virus might have lost this feature.
An example of a nasty trick that would make for a relatively easy to produce and deploy bio-weapon: http://www.plospathogens.org/article/info%3Adoi%2F10.1371%2Fjournal.ppat.1001257
Inhaled prions have extremely long incubation times (years), so it would be possible for an attacker to expose huge numbers of people unknowingly to them. The disease it causes is slow and insidious, and as of today, there is no way to detect it until post-mortem. There’s no treatment, either. I’m not certain of the procedure for making prions in massive quantities in the laboratory, but since they are self-replicating if placed in a medium containing the respective protein, they probably could be mass-produced.
On the bright side, the disease would not be self-replicating in the wild, so it would not be an existential risk—merely a very nasty way to cause mass casualties. Also, this method has never been tested on humans, so it might not be very effective; one can hope that terrorists will stick with bombs.
Let’s talk actual hardware.
Here’s a practical, autonomous kill system that is possibly feasible with current technology: a network of drone helicopters armed with rifles and sensors that can detect the muzzle flashes, sound, and in some cases projectiles of an AK-47 being fired.
Sort of like this aircraft: http://en.wikipedia.org/wiki/Autonomous_Rotorcraft_Sniper_System
Combined with sensors based on this patent: http://www.google.com/patents/US5686889
http://en.wikipedia.org/wiki/Gunfire_locator
The hardware and software would be optimized for detecting AK-47 fire, though it would be able to detect most firearms. Some of these sensors work best if multiple platforms armed with the same sensor are spread out in space, so there would need to be several of these drones hovering overhead for maximum effectiveness.
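As a rough illustration of why the platforms need to be spread out: acoustic gunfire locators generally work from time differences of arrival (TDOA) at separated sensors. Here is a minimal Python sketch with an invented geometry and idealized physics (not any fielded system): it recovers a shooter’s position from the pairwise time differences by brute-force search.

    import itertools, math

    SPEED_OF_SOUND = 343.0  # m/s, roughly

    # Hypothetical drone positions (x, y) in meters -- deliberately spread out.
    sensors = [(0.0, 0.0), (120.0, 0.0), (0.0, 150.0), (100.0, 130.0)]
    true_shooter = (63.0, 41.0)

    def arrival_times(src):
        return [math.dist(src, s) / SPEED_OF_SOUND for s in sensors]

    measured = arrival_times(true_shooter)

    def tdoa_error(guess):
        # Compare pairwise time differences; the (unknown) firing time cancels out.
        t = arrival_times(guess)
        return sum(((t[i] - t[j]) - (measured[i] - measured[j])) ** 2
                   for i, j in itertools.combinations(range(len(sensors)), 2))

    # Brute-force search over a 1 m grid; a real system would use least squares.
    best = min(((x, y) for x in range(200) for y in range(200)), key=tdoa_error)
    print(best)  # (63, 41)

With only two sensors the time differences pin the shooter to a curve rather than a point, which is why the swarm works better when several drones carry the same sensor.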
How would this system be used? Whenever a group of soldiers leaves the post, they would all have to wear blue force trackers that clearly mark them as friendly. When they are at risk of attack, a swarm of drones follows them overhead. If someone fires at them, the following autonomous kill decision is made:
if (SystemIsArmed && EventSmallArmsFire && NearestBlueForceTrackerMeters > X && ProbableErrorMeters < Y) ShootBack();
Sure, a system like this might make mistakes. However, here’s the state-of-the-art method used today:
http://www.youtube.com/watch?list=PL75DEC9EEB25A0DF0&feature=player_detailpage&v=uZ2SWWDt8Wg
This same YouTube channel has dozens of similar combat videos. An autonomous killing drone system would save soldiers’ lives and kill fewer civilians. (Drawbacks include high cost to develop and maintain.)
Other, more advanced systems are also at least conceivable. Ground robots that could storm a building, killing anyone carrying a weapon or matching specific faces? The current method is to blow the entire building to pieces. Even if the robots made frequent errors, they might be more effective than bombing the building.
From reading Radical Abundance:
Drexler believes that not only are stable gears possible, but that every component of a modern, macroscale assembly line can be shrunk to the nanoscale. He believes this because his calculations, and some experiments, show that this works.
He believes that “Nanomachines made of stiff materials can be engineered to employ familiar kinds of moving parts, using bearings that slide, gears that mesh, and springs that stretch and compress (along with latching mechanisms, planetary gears, constant-speed couplings, four-bar linkages, chain drives, conveyor belts . . .).”
The power to do this comes from 2 sources. First of all, the “feedstock” to a nanoassembly factory always consists of the element in question bonded to other atoms, such that it’s an energetically favorable reaction to bond that element to something else. Specifically, if you were building up a part made of covalently bonded carbon (diamond), the atomic intermediate proposed by Drexler is the carbon dimer (C—C). See http://e-drexler.com/d/05/00/DC10C-mechanosynthesis.pdf
Carbon dimers are unstable, and the carbon in question would rather bond to “graphene-, nanotube-, and diamond-like solids”.
The paper I linked shows a proposed tool.
Second, electrostatic motors would be powered by plain old DC current. These would provide the driving energy to turn all the mechanical components of an MNT assembly system. Here’s the first example of someone getting one to work that I found by googling: http://www.nanowerk.com/spotlight/spotid=19251.php
The control circuitry and sensors for the equipment would be powered the same way.
An actual MNT factory would work like the following. A tool-tip like the one in the paper I linked would be part of just one machine inside this factory. The factory would have hundreds or thousands of separate “assembly lines” that would each pass molecules from station to station, and at each station a single step is performed on the molecule. Once the molecules are “finished”, these assembly lines converge onto assembly stations. These “assembly stations” are dealing with molecules that now have hundreds of atoms in them. Nanoscale robot arms (notice we’ve already gone up 100x in scale; the robot arms are therefore much bigger and thicker than the machinery in the previous steps, and are integrated systems with guidance circuitry, sensors, and everything you see in large industrial robots today) grab parts from assembly lines and place them into larger assemblies. These larger assemblies move down bigger assembly lines, with parts from hundreds of smaller sub-lines being added to them.
There are several more increases in scale, with the parts growing larger and larger. Some of these steps are programmable. The robots will follow a pattern that can be changed, so what they produce varies. However, the base assembly lines will not be programmable.
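A quick back-of-the-envelope, using the rough 100x-per-stage figure from above and an assumed nanometer-to-decimeter span, shows why only a handful of these convergent-assembly layers are needed:

    import math

    part_size_m = 1e-9       # ~nanometer molecular parts at the bottom
    product_size_m = 0.1     # ~10 cm finished product (assumed for illustration)
    scale_per_stage = 100    # each assembly layer handles parts ~100x larger

    stages = math.ceil(math.log(product_size_m / part_size_m, scale_per_stage))
    print(stages)  # 4 layers from molecular parts to a hand-sized product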
In principle, this kind of “assembly line” could produce entire sub-assemblies that are identical to the sub-assemblies in this nanoscale factory. Microscale robot arms would grab these sub-assemblies and slot them into place to produce “expansion wings” of the same nanoscale factory, or produce a whole new one.
This is also how the technology would be able to produce things that it cannot already make. When the technology is mature, if someone loads a blueprint into a working MNT replication system, and that blueprint requires parts that the current system cannot manufacture, the system would be able to look up in a library the blueprints for the assembly line that does produce those parts, and automatically translate library instructions to instructions the robots in the factory will follow. Basically, before it could produce the product someone ordered, it would have to build another small factory that can produce the product. A mature, fully developed system is only a “universal replicator” because it can produce the machinery to produce the machinery to make anything.
Please note that this is many, many, many generations of technology away. I’m describing a factory the size and complexity of the biggest factories in the world today, and the “tool tip” that is described in the paper I linked is just one teensy part that might theoretically go onto the tip of one of the smallest and simplest machines in that factory.
Also note that this kind of factory must be in a perfect vacuum. The tiniest contaminant will gum it up and it will seize up.
Another constraint to note is this. In Nanosystems, Drexler computes that a mechanical system that is 10 million times smaller operates about 10 million times faster. There’s a bunch of math to justify this, but basically, scale matters, and for a mechanical system, the operating rate scales accordingly. Biological enzymes are about this quick.
This means that an MNT factory, if it used convergent assembly, could produce large, macroscale products at 10 million times the rate that a current factory can produce them. Or it could, if every single bonding step that forms a stable bond from unstable intermediates didn’t release heat. That heat production is what Drexler thinks will act to “throttle” MNT factories, such that the rate at which you can get heat out will determine how fast the factory can run. Yes, water cooling was proposed :)
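To see why heat rather than mechanics sets the ceiling, here is an illustrative Python estimate; the bond energy is roughly right for carbon–carbon bonds, but the throughput figure is purely an assumption:

    EV_TO_J = 1.602e-19
    bond_energy_ev = 3.6            # ~energy released per C-C bond formed
    atoms_per_gram_carbon = 5.0e22  # Avogadro's number / 12 g/mol, roughly

    grams_per_second = 0.01         # assumed output: 10 mg of diamond per second
    bonds_per_second = grams_per_second * atoms_per_gram_carbon * 2  # ~2 bonds/atom, crudely

    heat_watts = bonds_per_second * bond_energy_ev * EV_TO_J
    print(round(heat_watts))  # ~577 W of waste heat just to make 10 mg/s of product

Scale that up to factory-like throughput and the cooling system, not the arm speed, is what limits the output rate.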
One final note: biological proteins are only being investigated as a bootstrap. The eventual goal will use no biological components at all, and will not resemble biology in any way. You can mentally compare it to how silk and wood were used to make the first airplanes.
The latter. I’ve read of limited successes in other fields of research (no one is publicly trying to make something like this) that indicate it’s just barely possible, maybe, with some luck.
One nasty thing is that the virus doesn’t have to be safe. It just has to work, and it’s not a problem if it permanently damages the people it doesn’t kill. So, creating a weapon like this is fundamentally much easier than trying to create, say, a treatment for cancer using similar methods.
Shouldn’t that be spelled “canon bait”? Heh.
The problem with your example is that sterilization is a side effect that would take a very long time to actually cause armageddon, and if civilization can produce GMO foods they can PROBABLY reverse whatever leads to the sterility.
A nuclear exchange that wipes out the key parts of western civilization could happen in 1 hour, and the war could be over in 1 day. If someone were to release a huge swarm of killer nano-machines, it might take months to years to eat the biosphere (again, the machinery I am talking about is NOT self replicating) but developing a countermeasure could take decades.
Another factor to keep in mind is that human biotech advances are very slow and incremental, due to extremely heavy regulation. There is a reluctance to take risks, yet if the risks were taken, far more rapid advances would be likely. If a significant proportion of the population were sterile, this reluctance to take risks would be enormously reduced, and rapid advances could happen.
One apocryphal story I heard from a professor at Texas Tech Medical School was that most chemotherapy agents in use today were developed during the postwar period, when regulation was almost nonexistent and a researcher could go straight from the laboratory to the bedside of a cancer patient.
Hearing stories like this, and realizing that in the United States, something like 1.6 million people die every year ANYWAY, I think that regulations should be greatly reduced, but that’s for another discussion.
How about existing examples of specific technologies where the risks outweigh the benefits? And perhaps a statement or two as to why this is.
Let me start with the obvious one—nuclear weapons. Nuclear weapons can be released in a matter of minutes, in theory, and at one time there were enough of them ready to fire to wipe out all the urban areas of Western civilization. Also, there is no real defense.
With this said, it is unclear to me what the real probabilities of this risk were. If an accidental release had occurred, and a whole city in the Eastern or Western bloc had been lost, would this likely have led to an escalating series of exchanges resulting in every available weapon being fired? Also, was there ever a point at which policymakers with legitimate power considered an all-out attack to destroy the other side, in essence hoping to exterminate all military forces and infrastructure while “only” losing some cities?
And, despite this apparent existential risk, relatively few people were killed in global wars between nuclear armed nations after their development.
The reason nuclear weapons are such a risk is that human civilization is concentrated in nice neat target clusters, limits on control systems make interception of incoming reentry vehicles difficult and unreliable, and due to tech limitations, anti-ICBM defenses are much more complex and resource-intensive than the missiles themselves.
Molecular nanotechnology, if used to build “killer nanobots” that combust organic molecules with oxygen and so essentially incinerate all life in the biosphere a few molecules at a time, would presumably pose a similar problem. Destructive, weaponized bots would not need to be self-replicating, and could be orders of magnitude simpler than a device that could somehow hunt down and disable the weaponized robots.
And of course, malicious AI. Pretty much unless the ecological niches that a malicious AI would occupy (aka all the major computing systems and automated factories) are already occupied by friendly AIs, we’re all screwed.
The way biological nanotechnology (aka the body you are using to read this) solves this problem is that it bonds the molecule being “worked on” to a larger, more stable molecule. This means that instead of a whole box of legos shaking around everywhere, as you put it, it’s a single lego shaking around bonded to a tool (the tool is composed of more legos, true, but it’s made of a LOT of legos connected in a way that makes it fairly stable). The tool is able to grab the other lego you want to stick to the first one, and is able to press the two together in a way that gives the bonding reaction a low energetic barrier. The tool is shaped such that other side-reactions won’t “fit” very easily.
Anyways, a series of these reactions, and eventually you have the final product, a nice finished assembly that is glued together pretty strongly. In the final step you break the final product loose from the tool, analogous to ejecting a cast product from a mold. Check it out: http://en.wikipedia.org/wiki/Pyruvate_dehydrogenase
Note a key difference here between biological nanotech (life) and the way you described it in the OP. You need a specific toolset to create a specific final product. You CANNOT make any old molecule. However, you can build these tools from peptide chains, so if you did want another molecule you might be able to code up a new set of tools to make it. (and possibly build those tools using the tools you already have)
Another key factor here is that the machine that does this would operate inside an alien environment compared to existing life—it would operate in a clean vacuum, possibly at low temperatures, and would use extremely stiff subunits made of covalently bonded silicon or carbon. The idea here is to make your “lego” analogy manageable. All the “legos” in the box are glued tightly to one another (low temperature, strong covalent bonds) except for the ones you are actually playing with. No extraneous legos are allowed to enter the box (vacuum chamber).
If you want to bond a blue lego to a red lego, you force the two together in a way that controls which way they are oriented during the bonding. Check it out: http://www.youtube.com/watch?v=mY5192g1gQg
Current organic chemical synthesis DOES operate as a box of shaking legos, and this is exactly why it is very difficult to get lego models that come out without the pieces mis-bonded. http://en.wikipedia.org/wiki/Thalidomide
As for your “Schrödinger equations are impractical to compute”: what this means is that the Lego Engineers (sorry, nanotech engineers) of the future will not be able to solve any problem in a computer alone; they’ll have to build prototypes and test them the hard way, just as it is today.
Also, this is one place where AI comes in. The universe doesn’t have any trouble modeling the energetics of a large network of atoms. If we have trouble doing the same, even using gigantic computers made of many, many of these same atoms, then maybe the problem is that we are doing it in a hugely inefficient way. An entity smarter than humans might find a way to re-formulate the math for many orders of magnitude more efficient calculations, or it might find a way to build a computer that more efficiently uses the atoms it is composed of.
These people’s objections are not entirely unfounded. It’s true that there is little evidence the brain exploits QM effects (which is not to say that it is completely certain it does not). However, if you try to pencil in real numbers for the hardware requirements for a whole brain emulation, they are quite absurd. Assumptions differ, but it is possible that to build a computational system with sufficient nodes to emulate all 100 trillion synapses would cost hundreds of billions to over a trillion dollars if you had to use today’s hardware to do it.
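For concreteness, here is the kind of pencilled-in estimate I mean, as a short Python calculation. Every input is an assumption (a fairly detailed synapse model and roughly today’s cluster pricing), and the answer swings by orders of magnitude as you change them:

    synapses = 1e14              # ~100 trillion synapses
    flops_per_synapse = 1e6      # assumed cost of a detailed synapse model, FLOP/s each
    dollars_per_gflops = 1.0     # assumed price of sustained compute today

    total_flops = synapses * flops_per_synapse       # 1e20 FLOP/s
    cost = (total_flops / 1e9) * dollars_per_gflops  # convert to GFLOP/s, then dollars
    print(f"${cost:.1e}")  # ~$1e11: on the order of a hundred billion dollars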
The point is: you can simplify people’s arguments to “I’m not worried about the imminent existence of AI because we cannot build the hardware to run one”. The fact that a detail about their argument is wrong doesn’t change the conclusion.
How would you build such an AI? Most or all proposals for developing a super-human AI require extensive feedback between the AI and the environment. A machine cannot iteratively learn how to become super-intelligent if it has no way of testing improvements to itself against the real universe and feedback from its operators, can it?
I’ll allow that if an extremely computationally expensive simulation of the real world were used, it is at least possible to imagine that the AI could iteratively make itself smarter by using the simulation to test improvements.
However, this poses a problem. At some point N years from today, it is predicted that we will have sufficiently advanced computer hardware to support a super-intelligent AI. (N can be negative for those who believe that day is in the past.) So we need X amount of computational power (I think the Whole Brain Emulation roadmap can give you a guesstimate for X).
Well, to also simulate enough of the universe to a sufficient level of detail for the AI to learn against it, we need Y amount of computational power. Y is a big number, and most likely bigger than X. Thus, there will be years (decades? centuries?) during which X is available to a sufficiently well-funded group, but X+Y is not.
It’s entirely reasonable to suppose that we will have to deal with AI (and survive them...) before we ever have the ability to create this kind of box.
The method I described WILL work. The laws of physics say it will. Small scale experiments show it working. It isn’t that complicated to understand. Bad mRNA present = cell dies. All tumors, no matter what, have bad mRNAs, wherever they happen to be found in the body.
But it has to be developed and refined, with huge resources put into each element of the problem.
Here, specifically, is the difference between my proposed method and the current ‘state of the art’. OK, so the NIH holds a big meeting. They draw a massive flow chart. Teams 1, 2, 3 - your expertise is in immunology. Find a coating that will evade the immune system and can encapsulate a large enough device. Million-dollar prize to the first team that succeeds. Here are the specific criteria for success.
Team 4 - for some reason, healthy cells are dying when too many copies of the prototype device are injected. A million dollars if you can find a solution to this problem within 6 months.
Team 5 - we need alternate chemotherapy agents to attach to this device.
Team 6 - we need a manufacturing method.
Once a goal is identified and a team is assigned, they are allocated resources within a week. Rather than awarding funds in a penny-pinching way, the overall effort has a huge budget, and equipment is purchased or loaned between groups as needed. The teams would be working in massive integrated laboratories located across the country, with multiple teams in each laboratory for cross-trading of skills and ideas.
And so on and so forth. The current model is “OK, so you want to research whether near-infrared lasers will work on tumor cells. You have this lengthy list of paper credentials, and lasers and cancer sound like buzzwords we like to hear. Also, your buddies all rubber-stamped your idea during review. Here are your funds; hope to see a paper in 2 years”...
No one ever considers “How likely is this actually to be better than the high-frequency radiation we already have? How much time is this really going to buy a patient even if it is a better method?”
The fact is, I’ve looked at the list of all ongoing research at several major institutions, and they are usually nearly all projects of similarly questionable long-term utility. Sure, maybe a miracle will happen and someone will discover an easy and cheap method that works incredibly well that no one ever thought would work.
But a molecular machine, composed of mostly organic protein based parts, that detects bad mRNAs and kills the cell is an idea that WILL work. It DOES work in rats. More importantly, it is a method that can potentially hunt down tumor cells of any type, no matter where they are hiding, no matter how many metastases are present.
Anyone using rational thought would realize that this is an idea that actually is nearly certain to work (well, in the long run, not saying a big research project might not hit a few showstoppers along the way).
And there is money going to this idea—but it’s having to compete with 1000 other methods that don’t have the potential to actually kill every tumor cell in a patient and cure them.
What stops you from making a change that is addictive or self-amplifying? For example, suppose a subtle tweak makes you less averse to making another subtle tweak in the same direction. A few thousand iterations later and your network is trashed. http://lesswrong.com/lw/ase/schelling_fences_on_slippery_slopes/
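A toy simulation of that failure mode (the numbers are invented purely to show the compounding, not to model a real mind):

    import random

    random.seed(0)
    aversion = 1.0   # reluctance to make the next tweak (1.0 = baseline)
    drift = 0.0      # cumulative change to some trait or value

    for step in range(5000):
        if random.random() < 1.0 / (1.0 + aversion):  # tweaking gets easier as aversion falls
            drift += 0.01        # another "subtle" tweak in the same direction...
            aversion *= 0.999    # ...which also makes you slightly less averse next time
    print(round(aversion, 3), round(drift, 1))
    # aversion falls steadily while the drift keeps compounding -- with no Schelling
    # fence, thousands of tiny self-approved steps add up to a trashed network.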
It seems to me that the only safe way to do this would be to only permit other uploaded entities to make the edits, working in teams, with careful observation and testing of results. Older versions of yourself might be team members.
Also, the hardware design would need to be extremely well thought out, so that it is not possible for someone to Blue Pill attack you without your knowledge, or directly overwrite your neural structures with someone else’s patterns. The hardware would have to be designed with security permissions inherently baked in; here’s a blog post where Drexler discusses this:
It’s easy to point fingers at a very sick subset of scientific endeavors—biomedical research. The reasons it is messed up and not very productive are myriad. Fake and non-reproducible results that waste everyone’s time are one facet of the problem. The big one I observed was that trying to make a useful tool to solve a real problem with the human body is NOT something that the traditional model can handle very well. The human body is so immensely complex. This means that “easy” solutions are not going to work. You can’t repair a jet engine by putting sawdust in the engine oil or some other cheap trick, can you? Why would you think a very small molecule that can interact with any one of tens of thousands of proteins in an unpredictable manner could fix anything either? (or a beam of radiation, or chopping out an entire sub-system and replacing it with a shoddy substitute made by cannibalizing something else, or delivering crude electric shocks to a huge region. I’ve just named nearly every trick in the arsenal)
Most biomedical research is slanted towards this “cheap trick” solution, however. The reason is that the model encourages it. University research teams usually consist of a principal investigator and a small cadre of graduate students, with a relatively small budget. They are under a deadline to come up with something—anything—useful within a few years, and the failures don’t receive tenure and are fired. Pharmaceutical research teams also want a quick and cheap solution, generally, for a similar reason. Most of the low-hanging fruit—small-molecule drugs that are safe and effective—has already been plucked, and in any case there is a limit to the problems in biological systems that can actually be fixed with small molecules. If a complex machine is broken, you usually need to shut it off and replace major components. You are not going to be able to spray on some magic oil and fix the fault.
For example, how might you plausibly cure cancer? Well, what do cancer cells share in common? Markers on the outside of the cells? Nope—if there were, the immune system would usually detect them. Are the cells always making some foreign protein? Nope, same problem. All tumors share mutated genes, and thus have mRNAs present in the cells that you can detect.
So how might you exploit this? Somehow you have to build a tool that can get into cells near the tumor, detect the ones with these faulty mRNAs, and kill them. Also, this tool needs to not affect healthy cells.
If you break down the components of the tool, you realize it would have to be quite complex, with many sub-elements that have to be developed. You cannot solve this problem with 10 people and a few million dollars. You probably need many interrelated teams, all of whom are tasked with developing separate components of the tool. (with prizes if they succeed, and multiple teams working on each component using a different method to minimize risks)
No one is going to magically publish a working paper in Nature tomorrow where they have succeeded in such an effort overnight. Yet, this is basically what the current system expects. Somehow someone is going to cure cancer tomorrow without there being an actual integrated plan, with the billions of dollars in resources needed, and a sound game plan that minimizes risk and rewards individual successes.
Professors I have pointed this out to say that no central agency can possibly “know” what a successful cancer cure might look like. The current system just funds anyone who wants to try anything, assuming they pass review and have the right credentials. Thus a large variety of things are tried. I don’t see it. I don’t think there is a valid solution to cancer that can be found by a small team just trying things with a million or two dollars’ worth of equipment, supplies, and personnel.
Growing replacement organs is a similar endeavor. Small teams have managed to show that it is viable—but they cannot actually solve the serious problems, because they lack the resources to go about it in a systematic, likely-to-succeed way. While Wake Forest demonstrated years ago that they can make a small heart that beats, there isn’t a huge team of thousands systematically attacking each element of the problem that has to be solved to make full-scale replacement hearts.
One final note: this ultimately points to a gross misapplication of resources. Our society spends billions to kill a few Muslims who MIGHT kill some people violently. It spends billions to incarcerate for life millions of people who individually MIGHT commit some murders. It spends billions on nursing homes and end-of-life care to statistically extend the lives of millions by a matter of months.
Yet real solutions to problems that kill nearly everyone, for certain, are not worth the money to solve them in a systematic way.
The reason for this is a lack of rationality. Human beings emotionally fear extremely rare causes of death much more than extremely likely, “natural” causes. They fear the idea of a few disgruntled Muslims or a criminal who was let out of prison murdering them far more than they fear their heart suddenly failing or their tissues developing a tumor when they are old.