Perhaps the foreknowledge could be considered analogous to the extra intelligence an AI would have.
Even with current knowledge, we’re not really omniscient about what happened in the past. I know the Romans were good record-keepers (I’m not sure how good), but I would be surprised if there weren’t at least some errors and omissions in the small details.
If we did have omniscient knowledge, I would suggest that the situation might be most analogous after several years, when you could still make predictions with high accuracy but could never be quite sure whether something you’ve done has slightly changed the dates or other details of specific events.
“Yamagishi (1997) showed that subjects judged a disease as more dangerous when it was described as killing 1,286 people out of every 10,000, versus a disease that was 24.14% likely to be fatal. Apparently the mental image of a thousand dead bodies is much more alarming, compared to a single person who’s more likely to survive than not.”
I’m not sure this is necessarily due to the mental image. My initial thoughts on reading this were that “1,286 people out of every 10,000” carries connotations implying that at least 10,000 people have been affected, since it would be strange to say that otherwise (you would say “out of 1,000” or a different convenient denominator). The 24.14% figure does not contain this information.
It’s still not valid reasoning, since they used diseases that affect far more than 10,000 people. I’m just saying that I think the underlying basis in this example may be different.
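For concreteness, the arithmetic behind the comparison (straight from the numbers in the quote):

$$\frac{1286}{10000} = 12.86\% \approx \frac{24.14\%}{2}$$

So the frequency-framed disease was actually only about half as lethal, which is what makes the judgment a bias rather than a defensible reading of the numbers.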
I would add that proposing “categories in which solutions might fall” might itself be subject to priming/anchoring within category-space. =)
So for example, we could consider systematized ways to map the idea-space—I imagine that certain categories could be easily missed, and/or are very likely to show up for specific classes of problems.
Yeah, that’s what I was thinking of when I suggested that it might be done systematically. I hope that a pre-written list wouldn’t be necessary though, since I think such a list would also cause priming unless it were completely exhaustive.
Also, a separate idea I just thought of—although making a list as you suggest is a big step forward in generating ideas, I would speculate that every idea is still primed in some way, even if only by your previous thoughts. (For example, if I forget a thought that I wanted to consider more, often thinking about what I was thinking about right before will let me produce that thought again, even though the previous thoughts were ostensibly unrelated. Similarly, a category like “jobs” will tend to elicit different initial thoughts in different people, which would then prime their next thoughts, etc.) So I’m suggesting that making a list could perhaps be interpreted as “preparing many unrelated ways of priming yourself beforehand so that when you exhaust one search you can re-prime yourself from another starting point.” And then you would cover a much larger region of idea-space as a result—although I’m not sure how this different interpretation might help in the search for more ideas.
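If it helps, that interpretation maps fairly directly onto random-restart local search. Here is a minimal sketch of the analogy (the score and neighbor functions are illustrative stand-ins, not a claim about how cognition works):

```python
# "Re-priming" as random-restart local search: each prepared category is a
# fresh starting point once the current train of thought is exhausted.

def hill_climb(score, start, neighbors):
    """Follow improving neighbors until stuck (one primed train of thought)."""
    current = start
    while True:
        best = max(neighbors(current), key=score, default=current)
        if score(best) <= score(current):
            return current  # this line of thought is exhausted
        current = best

def search_with_restarts(score, starting_points, neighbors):
    """Restart from each prepared category and keep the best idea found."""
    return max((hill_climb(score, s, neighbors) for s in starting_points),
               key=score)

# Toy landscape with two peaks; only the second restart finds the higher one.
score = lambda x: max(10 - abs(x - 5), 25 - abs(x - 30))
print(search_with_restarts(score, [0, 40], lambda x: [x - 1, x + 1]))  # -> 30
```

The point of the analogy is just that the list doesn’t generate ideas by itself; it guarantees a stock of distant starting points, so that one exhausted search doesn’t end the whole process.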
I think that you can derive a strong argument from “your definitions aren’t specific enough,” if the theory allows for more than one interpretation (which should arise as a result of nonspecific definitions being used). The “specific” criticism could come from looking at your answer for the first part of the essay, suggesting an analysis that is sufficiently different from yours that they differ in key points, and then supporting this alternative analysis using the theory as well. Or, to phrase the question another way: is there one clear answer to the first part, or does the theory allow for multiple courses of action with non-trivial differences?
So although the original problem arose from the definitions, the actual criticism would be along the lines of “the theory is not specific/developed enough to prescribe a unique course of action in all circumstances.”
Not really what the topic is about, but I think it’s always important to remember that there is a countering factor if the timescale is long enough (years): your cognitive abilities will decline with age, so decisions made earlier may benefit from being made while those abilities are higher.
I imagine that there might be some age at which your (always-increasing, but perhaps subject to diminishing returns) experience and your (always-decreasing, after around 20 or 25) cognitive ability cancel, resulting in a peak in decision-making ability ceteris paribus.
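As a toy illustration of that cancellation (every functional form and rate here is an assumption, not data):

```python
# Toy model: decision quality = experience (diminishing returns) x fluid
# cognitive ability (assumed linear decline after 25). Purely illustrative.
import math

def experience(age):
    # Grows like the log of years of adult experience: diminishing returns.
    return math.log1p(max(age - 18, 0))

def fluid_ability(age):
    # Assumed ~0.5% decline per year after age 25; the rate is a guess.
    return 1.0 if age <= 25 else max(1.0 - 0.005 * (age - 25), 0.0)

def decision_quality(age):
    return experience(age) * fluid_ability(age)

peak = max(range(18, 91), key=decision_quality)
print(peak)  # ~60 with these made-up parameters
```

With these particular guesses the product peaks around age 60, but the location of the peak is entirely an artifact of the assumed rates; the qualitative point is only that a peak exists whenever one factor rises ever more slowly while the other steadily falls.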
now ‘the flora and fauna’ will be poisoned if they try to take your offer anytime soon.
All the biological material will be cycled back into the ecosystem, most of it quite soon, despite the presence of toxins (formaldehyde-eating bacteria, etc.). His statement is correct in the sense that if you are cryopreserved, the net amount of carbon, nitrogen, phosphorus, etc. in the biosphere will be slightly lower than it otherwise would have been.
Not attacking your position, just pointing that out.
I’m surprised that nobody seems to have brought up any mental benefits of speaking more than one language. I’m not sure how strong the evidence is, but there has definitely been research that claims to point in that direction.
Of course (I think I should have pointed that out in my first post), but physics/math/etc also take a long time to learn properly, so the time required becomes much less relevant.
I do not mean that one should necessarily learn a new language instead of learning math—although I might say that if you already know a lot of math (enough to get significant benefit to your thought processes), it might be useful to spend some time learning something that trains different aspects of mental processing (like learning a language). If I had to speculate on what any specific benefits might be, I would suggest that it comes from having more than one independent lens through which you interpret the world/more than one basis for your thought processes, and the benefit to mental flexibility you can get from switching between them (I’m not sure if I am properly communicating what I mean by that though).
The traditional way of inserting a gene into the genome is to use a retrovirus with its DNA replaced. Most such viruses (at least, those that have been used) integrate randomly, meaning that every time a new cell is modified there is a small but nonzero chance of knocking out a gene that is important for controlling cancer. On a cellular level, the most likely outcome of this is cell death, as the rest of the cell’s anticancer mechanisms shut the cell down. But of course, that doesn’t happen every time.
Site-specific viruses (i.e. ones that always integrate at the same, safe genomic location) are currently being developed, and it’s hoped that these will solve the problem.
However, there’s actually another related problem. If you want to make major changes to the cell (like reprogramming it into a stem cell), the cell’s anticancer mechanisms will detect that as well, so in order to make those changes you have to at least temporarily shut off some of those mechanisms. That carries a risk of cancer as well.
About the topic of this thread—generally, the ability to survive specific extreme environments (especially one that affects everything in the cell such as changes in water content or temperature) is a specialized adaptation. I would not be surprised if there are global differences in the genomes of these species, e.g. most proteins are much more hydrophilic, or there is a system of specialized chaperones (=proteins that refold other proteins or help prevent them from misfolding) plus the adaptations in proteins that allow the chaperones to act on them, and further systems to repair damage the chaperones don’t prevent. It is unlikely that only a few genes would be involved, and unless a case can be made for evolutionary conservation of the adapted genes to humans, we wouldn’t have most of them (in fact, any genome-wide changes would mean that we would have to adapt our own proteins in new ways, just because we don’t share all of them with the species in question).

Cold temperature is actually a special case here, because it slows down everything and thus reduces the amount of “equivalent normal-temperature time” that has passed. It’s still difficult (and of course none of these are impossible), but I don’t think it’s likely that small-scale gene therapy would be sufficient.
Blood clotting is not caused by red blood cells but by platelets. RBCs do get caught up as the clot spreads around them and then act as part of the barrier, but removing them too quickly would actually increase ischemia, because they’re what carry the oxygen.
(By the way, I hope that the cryoprotectant solutions contain high concentrations of dissolved oxygen. Not nearly as good as having the actual RBCs, but you can increase the amount (supersaturation) by keeping it under pressure.)
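Rough numbers, assuming Henry’s law and a 25 °C constant for O2 of about 770 L·atm/mol (figures from memory, so treat as ballpark):

$$C = \frac{p_{\mathrm{O_2}}}{k_H} \approx \frac{1\ \mathrm{atm}}{770\ \mathrm{L\cdot atm/mol}} \approx 1.3\ \mathrm{mmol/L} \approx 42\ \mathrm{mg/L}$$

That’s about five times what you get from air (p_O2 ≈ 0.21 atm gives roughly 9 mg/L), and the dissolved amount scales roughly linearly with pressure, so a few atmospheres of pure O2 buys a correspondingly larger reserve. Low perfusion temperatures help too, since gas solubility increases as water cools.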
Anyways, given that perfusion is already taking place (and this is removing all of the components of the blood including the platelets), the other option is to disable the blood clotting cascade, for example by administration of anticoagulants such as warfarin. I don’t know if this is already done. You would also have access to more “extreme” types of anticoagulation, chemicals (or higher doses) that aren’t on the medical market because the effects are normally too strong.
I suppose another option would be to suggest that the patient start taking anticoagulants before death. I’m not sure whether that would have legal implications though.
See the third paragraph of Coagulation—the diagram of the blood clotting cascade is on the right. I’ve never heard of rouleaux having a role in blood clotting—a quick PubMed search turned up this case study, but it was due to mutations in the protein fibrinogen.
I don’t think it has any legal implications; at least Best’s article doesn’t mention any.
I was thinking that since the drugs are dangerous (even more so if you’re already in a weakened condition), it would be viewed as attempting to hasten their death, especially if someone overdosed either deliberately or accidentally.
I already read it. That quote doesn’t say anything about rouleaux or clotting; it just describes one of the mechanisms (other than clotting) by which brain ischemia occurs. Can you be more specific?
As long as you recognize that clotting is a different process. =)
It’s been a few years since I studied this, but as far as I know, the physiological significance of rouleaux (including whether they block blood vessels) is unknown—don’t forget that they’re in equilibrium with the non-rouleaux form, although cold temperatures will slow down that equilibrium and possibly cause the problems you’re referring to.
I would have been much more convinced by data from a controlled experiment. A lot of things could cut off flow, as you pointed out, and there are a lot of things going wrong in a dying person. I’m actually not sure why he brought rouleaux into it—my understanding is that we already know the RBCs clump and that this blocks capillaries.
In any case, the main point I was trying to make was that reducing the number of RBCs in the brain is probably not the best way to go, unless we can figure out an alternative way to supply oxygen. Destroying the RBCs and letting the hemoglobin travel freely would probably help, but that would set off all sorts of damaging physiological responses as well.
Historically, most drugs have been identified by high-throughput screening, i.e. you purify an enzyme of interest and test billions of different chemicals against it for the desired effect. You then test for an effect in cell culture (compared to healthy cells), or you can screen directly against the cancer cells. Once you have that evidence, you test whether it has effects in mice, and only after that can you test anything in humans.
It’s possible to propose a single chemical and get it right by chance, but testing a single chemical is cheap. In an already-equipped lab, the initial cell culture data will probably take a few weeks and under a thousand dollars, and after that you will have people willing to help and/or fund you. The lack of even this initial evidence is generally a good reason to believe that something doesn’t work.
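A minimal sketch of that funnel structure (the stage names, predicates, and data layout are illustrative assumptions, not any real pipeline):

```python
# Illustrative screening funnel: each successive stage is far more expensive
# per compound, so every stage must cut the candidate pool drastically.

def screen(compounds, stages):
    """Run candidates through successive pass/fail stages in order."""
    survivors = list(compounds)
    for name, passes in stages:
        survivors = [c for c in survivors if passes(c)]
        print(f"{name}: {len(survivors)} candidates remain")
    return survivors

# Hypothetical stage predicates; a real assay would replace each lambda.
stages = [
    ("enzyme assay", lambda c: c["inhibits_target"]),
    ("cell culture", lambda c: c["kills_cancer_cells"]
                               and not c["kills_healthy_cells"]),
    ("mouse model",  lambda c: c["shrinks_tumors_in_mice"]),
]
```

Human trials would be a fourth stage that only the survivors of all of the above ever reach.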
With regards to hypotheses, a lot of the early drugs were identified by chance—there’s a description at History of cancer chemotherapy. Most of the current interest is in targeted therapy, i.e. intended to act against specific proteins involved in various types of cancer, and the starting point is the identification of that protein. Chemo drugs are a bit different since they’re a very broad class (they target rapidly dividing cells in general, which is also what causes the toxicity), and the metabolic networks they affect are generally well-known, so the initial hypotheses tend to be about new ways that you can intervene in those networks. There are other approaches to the various steps as well, e.g. structure-based drug design has had some success, but not yet enough to replace the screens.
Bleach will control (kill) most bacteria, but since cancer cells are very similar to your own cells, the prior is very low unless there is a specific reason to think that it will target one of those differences. For example, something that is just corrosive will probably affect the different cell types equally. Another thing is that since it’s a charged molecule, it can’t actually enter the cell on its own unless it rips apart the cell membrane, in which case that’s probably the main mechanism of toxicity.
Also, I wouldn’t be surprised if it had been tested. The most likely outcome would be that it failed at an early step in the testing process (along with a large number of other chemicals), and nobody had any reason to publish it or think that anyone would ever actually decide that it might work.
The Equator passes through South America, actually. I think that there is a perception of the world’s land area being divided in two by the Equator, but most of the world’s land area is in the Northern Hemisphere (about 2⁄3, more if you don’t count Antarctica).
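Rough figures from memory (so approximate): total land is about 149 million km², of which roughly 100 million km² lies in the Northern Hemisphere, and Antarctica is about 14 million km². That gives

$$\frac{100}{149} \approx 67\%, \qquad \frac{100}{149 - 14} \approx 74\%$$

which is where the “about 2/3, more without Antarctica” comes from.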
Edit: My apologies (see next comment).
I’m sorry—I suppose I’m probably missing something, but I can’t think of any other possible way to interpret this question. I agree that it is far more probable to see a sequence equally containing both heads and tails than one containing only heads, but it seems like you are asking for the relative probabilities of two highly specific sequences of the same length. Could someone please explain?
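To spell out the distinction with standard fair-coin arithmetic: any two specific length-10 sequences are exactly equiprobable,

$$P(\mathrm{HHHHHHHHHH}) = P(\mathrm{HTTHHTHTTH}) = \left(\tfrac{1}{2}\right)^{10} = \tfrac{1}{1024},$$

whereas the class of length-10 sequences with exactly five heads has $\binom{10}{5} = 252$ members, so seeing “some balanced sequence” is 252 times as likely as seeing the one all-heads sequence.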