Wei Dai’s first link was a doc of medical guidelines written by people with medical expertise (though not explicitly for civilians; I would expect legal risk to deter medical professionals from writing guidelines for civilian use). That link is now dead, but archived here.
It included the South Korean guidelines:
According to the Korea Biomedical Review, the South Korean COVID-19 Central Clinical Task Force guidelines are as follows:
1. If patients are young, healthy, and have mild symptoms without underlying conditions, doctors can observe them without antiviral treatment;
2. If more than 10 days have passed since the onset of the illness and the symptoms are mild, physicians do not have to start an antiviral medication;
3. However, if patients are old or have underlying conditions with serious symptoms, physicians should consider an antiviral treatment. If they decide to use the antiviral therapy, they should start the administration as soon as possible:
… chloroquine 500mg orally per day.
4. As chloroquine is not available in Korea, doctors could consider hydroxychloroquine 400mg orally per day (Hydroxychloroquine is an analog of chloroquine used against malaria, autoimmune disorders, etc. It is widely available as well).
5. The treatment is suitable for 7–10 days, which can be shortened or extended depending on clinical progress.
Notably, the guidelines mention other antivirals as further lines of defense, including anti-HIV drugs.
My current strategy is to follow these guidelines (with hydroxychloroquine + zinc) if medical treatment is unavailable, there’s strong evidence that the illness is COVID-19, and serious COVID-19 symptoms are present. I’ll also have activated charcoal on hand to help mitigate accidental overdoses. I’m trying my best to familiarize myself with the risks involved so that I can make good decisions if the situation calls for it. Of course, my primary strategy is prevention in the first place.
BTW, the Google doc appears to have been taken down due to a TOS violation.
You can buy hydroxychloroquine here still (as of March 20th): https://fixhiv.com/shop/coronavirus-drugs/hcqs-400-hydroxychloroquine-400-mg/ which imports it from India. This site also lets you easily buy a prescription for it, FWIW.
Check for G6PD deficiency before taking chloroquine (this can be done through the 23andMe interface), as chloroquine can cause haemolysis in people with the deficiency. Apparently this is not an issue with hydroxychloroquine: https://www.ncbi.nlm.nih.gov/pubmed/28556555
Just because something is dangerous in overdose doesn’t mean that medical supervision is needed: consider acetaminophen, or even water. The relevant concern with chloroquine is that the therapeutic dose is close to the lethal dose, and the dosing is complicated.
Hydroxychloroquine is 40% less toxic while still being effective, according to this article: https://www.nature.com/articles/s41421-020-0156-0
Medical supervision may not be available if current trends continue, so we must carefully weigh the options available to us.
That sounds right to me.
From what I can tell, it looks like the main danger is with a live vaccine, where the vaccine can give the disease to a large number of people (biggest actual disaster seems to have been the Cutter incident, which infected 40,000 people with polio).
I assume that the trial is also there to catch potential black swan issues.
IIRC the COVID-19 vaccines on trial are not live, so the case for doing the 14-month watch was not as strong as I expected. Certainly worth considering more carefully at least.
This is more mathematically justified than you seem to think. Posets are topological spaces and categories, and every space is weak homotopy equivalent to a poset space, which explains why the intuition works so well.
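For concreteness, here is a sketch of the two standard structures on a poset $(P, \le)$ that the comment alludes to: the Alexandrov topology, whose open sets are the upward-closed subsets, and the thin category whose arrows record the order relation.

```latex
% A poset (P, <=) as a topological space: the Alexandrov topology.
% A subset U of P is declared open iff it is upward closed:
U \text{ open} \iff \forall x \in U,\ \forall y \in P:\ x \le y \implies y \in U

% The same poset as a category:
%   objects: the elements of P
%   morphisms: a unique arrow x -> y whenever x <= y
%   composition is transitivity of <=; identities come from reflexivity.
\mathrm{Hom}(x, y) =
\begin{cases}
\{\ast\} & \text{if } x \le y,\\
\varnothing & \text{otherwise.}
\end{cases}
```

The weak-homotopy-equivalence claim is the content of McCord-style results relating (finite) spaces and posets; the construction above is just the dictionary that makes both readings of “poset” precise.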
the traditional presentation of category theory is perfectly adapted to its original purpose
I think this is too generous. The traditional way of conceptualizing a given math subject is usually just a minor modification of the original conceptualization. There’s a good reason for this, which is that updating the already known conceptualization across a community is a really hard coordination problem—but this also means that the presentation of subjects has very little optimization pressure towards being more usable.
I’m planning to go with ACS, a lesser-known cryonics organization that has been around longer than Alcor and CI. The price for a full suspension is $155,000, which is in between the CI and Alcor prices.
They don’t actually run their own facilities; instead, they contract with other organizations (currently CI) to store the vitrified bodies. For performing suspensions, they seem to have their own procedure, and you can additionally choose to have them contract other organizations such as Suspended Animation Inc. (the one Alcor uses).
Since they contract, they have increased flexibility, which seems quite valuable. In particular, it hedges against organizational incompetence, of which both Alcor and CI seem to have their fair share. It’s harder to find info about the competence of ACS themselves, but the fact that they’ve been around a long time bodes slightly well.
They also sponsor cryonics research, which is really cool.
Anyway, I’d really appreciate having more people analyze them as a cryonics option before I commit to them!
There’s an upper limit to how much worse it can be, given that you are shedding copies of your genome in public all the time.
Yes, lol :)
I noticed after playing a bunch of games of a mafia-type game with some rationalists that when people made edgy jokes about being in the mob or whatever, they were more likely to end up actually being in the mob.
What schedule are you going to be posting these on? I’ve been eagerly looking forward to the next installment!
[Note: potential info hazard, but probably good to read if you already read the question.]
[Epistemic status: this stuff is all super speculative due to the nature of the scenarios involved. Based on my understanding of physics, neuroscience, and consciousness, I haven’t seen anything that would rule this possibility out.]
All I want to know is, is this stuff just being pulled out of his butt? Like, an extremely unlikely hypothetical that nonetheless carries huge negative utility? I’d be okay with that, as I’m not a utilitarian. Or have these scenarios actually been considered plausible by AI theorists?
FWIW, I’ve thought about this a lot and independently came up with and considered all the scenarios mentioned in the Turchin excerpt. It used to really really freak me out, and I believed it on a gut level. Avoiding this kind of outcome was my main motivation for actually getting the insurance for cryonics (the part I was previously cryocrastinating on). However, I now believe that QI is not an s-Risk and don’t feel personally worried about the possibility anymore.
One thing to note is that this is a potential problem in any sufficiently large universe, and doesn’t depend on a many-worlds style interpretation being correct. Tegmark has a list of various multiverses, which are different and affect what scenarios we might face. I do believe in many-worlds (as a broad category of interpretations) though.
Lots of the comments here seem confused about how this works, so I’ll recap. If I’m at the point of death where I’m still conscious, the next moment I’ll experience will be (in expectation) whatever conscious state has the highest probability mass in the multiverse among states that are valid next conscious moments after the previous one. Note that this next conscious moment is not necessarily in the future of the previous moment. If the multiverse contains no such moments, then we would just die the normal way. If the multiverse includes lots of humans doing ancestor simulations, you could potentially end up in one of those, etc. The key is that out of all conscious beings in the multiverse who feel like this just happened to them, those are (tautologically) the ones having the subjective experience of the next valid conscious moment. And it’s valid to care about these potential beings; AFAICT this is the same reason I care about my future selves (who do not exist yet) in the normal sense.
Regarding cryonics: it seems like the best way to preserve a significant amount of information about my last conscious moment. To whatever extent that information is lost, a civilization that cares could optimize for the likelihood of producing a valid next conscious moment. I think this is the main actionable thing you can do here. Of course, it only passes the buck to the future, since there is still the inevitable heat death of the universe to contend with.
Another scenario, which seems especially plausible for sudden deaths, is Aranyosi’s: the highest-probability-mass next conscious moment will be one based on the moment from a few seconds before death, but with a “false” memory of having survived. This has relatively high probability because people sometimes report this kind of experience after a close call. But this again simply passes the buck to the future, where you’re most likely to die from a gradual decline.
However, I think the most likely situation by far is the one common to death by aging, illness, or the heat death of the universe. At the last moment of consciousness, the only next conscious moments left will be in highly improbable worlds. But which world you are most likely to “wake up” in is still determined by Occam’s razor. People seem to imagine that these improbable worlds will be ones where your consciousness remains in a state similar to the one you died in, but I think this is wrong.
Think carefully about what actually has to happen to support a conscious experience. Some minimal set of neurons would need to be kept functional—but beyond that, we should expect entropy to affect everything that is not causally upstream of the functionality of this set of neurons. Since strokes happen often, and don’t always cause loss of consciousness, we can expect them to eventually occur in every region of the brain that is non-essential for consciousness. Because people can experience damage to their sensory neurons without losing consciousness, we can expect the ability to experience physical pain to decay. Emotional pain doesn’t seem to be qualitatively that different from physical pain (e.g., it is also mitigated by NSAIDs), so I expect this will be true for pain in general.
So most of your body and most of your mind will still decay as normal; only the neuronal circuitry absolutely essential for inducing a valid next conscious moment (and whatever supports it, perhaps blood circulation) will miraculously survive. Anesthesia works by globally reducing synaptic activity, so the initial stages of this would likely feel like going under anesthesia, but where you never quite go out. Because anesthetics stop pain (this remains true when they are applied locally), and because by default we do not experience pain, I’m now pretty sure that, given QI being real, infinite agony is very unlikely.
Yeah, I think the engineer intuition is the bottleneck I’m pointing at here.
This rings really true with my own experiences; glad to see it written up so clearly!
I think that lots of meditation stuff (in particular The Mind Illuminated) is pointing at something like this. One of the goals is to train all of your subminds to pay attention to the same thing, which leads to increasing your ability to have an intention shared across subminds (which feels related to Romeo’s post). Anyway, I think it’s really great to have multiple different frames for approaching this kind of goal!
I think people make decisions based on accurate models of other people all the time. I think of Newcomb’s problem as the limiting case where Omega has extremely accurate predictions, but that the solution is still relevant even when “Omega” is only 60% likely to guess correctly. A fun illustration of a computer program capable of predicting (most) humans this accurately is the Aaronson oracle.
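For anyone curious, here is a minimal sketch of the kind of frequency-based predictor the Aaronson oracle uses. The class name, the binary alphabet `"fd"`, and the window length `k` are my own illustrative choices, not details of Aaronson’s actual program: the idea is just to record how often each symbol followed each recent k-symbol history, and predict the most common continuation.

```python
import random
from collections import defaultdict


class AaronsonOracle:
    """Sketch of an Aaronson-oracle-style keystroke predictor.

    Predicts a person's next binary choice (here 'f' or 'd') from
    the frequencies of what followed each length-k history so far.
    """

    def __init__(self, k=5):
        self.k = k
        self.history = ""
        # counts[context][symbol] = times `symbol` followed `context`
        self.counts = defaultdict(lambda: defaultdict(int))

    def predict(self):
        """Guess the next symbol from the current length-k context."""
        ctx = self.history[-self.k:]
        stats = self.counts.get(ctx)
        if not stats:
            return random.choice("fd")  # unseen context: guess at chance
        return max(stats, key=stats.get)

    def observe(self, symbol):
        """Record the symbol the user actually pressed."""
        ctx = self.history[-self.k:]
        self.counts[ctx][symbol] += 1
        self.history += symbol
```

Against a perfectly periodic “player” (e.g. someone alternating f, d, f, d, …) this converges to near-100% accuracy after the first few keystrokes; the interesting empirical fact is that humans trying to be random are also predicted well above 50%.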
This post has caused me to update my probability of this kind of scenario!
Another issue related to the information leakage: in the industrial revolution era, 30 years was plenty of time for people to understand and replicate leaked or stolen knowledge. But if the slower team managed to obtain the leading team’s source code, it seems plausible that 3 years, or especially 0.3 years, would not be enough time to learn how to use that information as skillfully as the leading team can.
Is there a reason not to take it if you’re younger than 40?