What about virulence, or rather the infectious dose? Does it take far fewer viral particles to cause a Covid infection than a flu infection?
I think keeping 6 ft apart, everyone wearing masks (properly) to avoid infecting others, and sticking to open spaces should basically reduce the risk to that of venturing out alone.
The idea is not to remove almost all risk, but to reduce R0 below 1. Six feet is likely to help a lot with that.
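A back-of-the-envelope sketch of that logic; every number below is an assumption chosen purely for illustration, not an epidemiological estimate:

```python
# Toy calculation, not epidemiology: all numbers are illustrative
# assumptions. The idea is that the effective reproduction number is
# R_eff = R0 * (fraction of transmission left after each precaution),
# and the outbreak shrinks once R_eff drops below 1.
R0 = 2.5               # assumed baseline reproduction number
mask_factor = 0.5      # assumed transmission remaining with universal masking
distance_factor = 0.4  # assumed transmission remaining at 6 ft, outdoors
R_eff = R0 * mask_factor * distance_factor
print(R_eff)  # 0.5 -- below 1, so the outbreak shrinks under these assumptions
```

The point is multiplicative: no single measure needs to eliminate risk, they only need to jointly push R_eff under 1.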
Then exploring these crystal spheres without crashing into them might be a thing to do. Applications.
The heliocentric model has some important implications for the ethics of space travel.
What are those implications? I tend to prefer dealing with applications, not implications, so not sure what you mean.
Sure, you can move the comment around as you see fit. 30% ballpark because China appears to be much more competent at minimizing the impact, while the US is still not even acknowledging the number of cases it likely has, and is taking half-measures, which is the worst possible approach. Yet, even with a likely depression coming, there are too many uncertainties as to how the situation might develop. If I were 90% sure, I’d probably buy stock options or bet on the Chinese currency firming up against the US dollar.
One very common pitfall here that you mention, and that is inherited from Eliezer’s writings, is related to potentially infinite universes and many worlds. “But many worlds implies...” No, it doesn’t. Whether some physical model of the world, believed to be the one truth by the site founder, some day gets experimental evidence for or against it need not affect your morality here and now. Or ever, for that matter, unless a way to interact with those hypothetical selves is one day proven. The effects of your actions are limited to a tiny part of the observable universe, and that is only if you believe that you have free will.

Which is another pitfall: “but if I don’t have free will, nothing matters.” Nothing objectively matters anyway; the meaning is inside the algorithm that is your mind. Hopefully that algorithm is robust enough to resist the security holes in it, called here infohazards and such.
A 30% probability version: the US bungles the crisis so badly and so thoroughly that the pandemic and the shutdowns last months and set off a domino effect, causing a downturn comparable to the Great Depression and lasting years. In the meantime, China recovers and rebounds quickly and offers a support package to bail out the ailing West, an equivalent of the Marshall plan. Just as the Marshall plan heralded several decades of US domination, the Chinese package will commence a new Pax Sinica, Chinese domination over the world, with unclear but not necessarily negative consequences.
Yeah, that was optimistic, apparently. Woeful underreporting everywhere except Korea, and to a lesser extent Germany and Canada.
Ah, that makes a lot more sense!
Hah! How could just one of many Covid posts here be blocked?!
I think this approach is worth pursuing, at least as a toy model, to identify the salient features of such a system. There is, of course, plenty of research in the area of evolutionary modeling already, but maybe not exactly in the way you are interested in. Consider spending some time on the literature search and review.
Interesting review, looking forward to the next one.
This matches my interpretation of the “community”. Personally I am more of a post-rationalist type, with an instrumentalist/anti-realist bent philosophically, and I think that the concept of “the truth” is the most harmful part of the rationality teachings. Replacing “true” with “high predictive accuracy” everywhere in the sequences would be a worthwhile exercise.
Sorry, didn’t mean to imply that structural dissociation has anything to do with tulpas. I agree that birthing a tulpa is likely quite different, and I tried to state as much.
Fiction authors who put extensive effort into modeling their characters often develop spontaneous “tulpas” based on those characters.
I see the examples in the linked paper of the characters having independent agency (not sure why the authors call it an illusion), including the characters arguing with the author, even offering opinions outside the fictional framework, like Moriarty in Star Trek TNG, one of the more famous fictional tulpas.
That said, they seem to mix the standard process of writing with the degree of dissociation that results in an independent mind. I dabble in writing as well, and I can never tell in advance what my characters will do. In mathematical language, the equations describing the character development are hyperbolic, not elliptic: you can set up an initial value problem, but not a boundary value problem. I don’t think there is much agency in that, just basic modeling of a character and their world. I know some other writers who write “elliptically,” i.e. they know the rough outline of the story, including the conclusion, and just flesh out the details. I think Eliezer is one of those.
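For what it’s worth, the contrast between the two modes of writing maps onto a standard numerical distinction. A minimal sketch, using the toy equation y'' = 0; all names and values are illustrative:

```python
# Illustrative sketch: an initial value problem marches forward from a
# starting state (write and see where the story goes), while a boundary
# value problem pins down both ends and solves for the middle (know the
# ending, flesh out the details). Toy equation: y'' = 0 on [0, 1].
import numpy as np

n = 5
h = 1.0 / (n - 1)

# IVP: the state at t = 0 (position and slope) determines everything after.
y0, v0 = 0.0, 2.0
ivp = [y0 + v0 * i * h for i in range(n)]

# BVP: both endpoints are fixed, and the interior points are solved for
# all at once (a tridiagonal system for the discrete y'' = 0).
A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0
b[0], b[-1] = 0.0, 2.0          # y(0) = 0, y(1) = 2
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
bvp = np.linalg.solve(A, b)
```

For this trivial equation both routes happen to land on the same straight line; the difference is in what you must know up front.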
I wonder how often it happens that the character survives past the end of their story and shares the living space in the creator’s mind as an independent entity, like a true tulpa would.
I have some experience dealing with people who exhibit severe dissociation of the tulpa type, though mostly those who have multiple personalities due to severe ongoing childhood trauma. The structural dissociation theory postulates that a single coherent personality is not inborn but coalesces from various mental and emotional states during childhood, unless something interferes with it, in which case you end up with a “system”, not a single persona. Creating a tulpa is basically the inverse of that: trying to break up an integrated personality. Depending on how “successful” one is, you may end up segregating some of your traits into a personality, and, in some rare cases, there is no appreciable difference between the main and the tulpa; they are on equal footing.
You can read the linked site, or just look up dissociative identity disorder (there are quite a few YouTube videos by people who live with this condition), to get an idea of what is theoretically possible. Personally, I’d advise extreme caution, mainly because it is entirely possible to have life-long amnesia about traumatic childhood experiences, and deliberately twisting your mind to create a tulpa may irreparably break those barriers, and the results are not pretty.
Isn’t separability of arbitrary Turing machines equivalent to the halting problem, and therefore undecidable?
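The intuition behind the question can be sketched with a toy reduction from halting to a semantic property of machines. Everything below is illustrative scaffolding (machines modeled as Python generators, a step budget standing in for the genuinely undecidable halting predicate), not a proof:

```python
# Toy reduction, not a proof: given a machine M and input w, build M'
# that accepts every input iff M halts on w. So any decider for the
# semantic property "M' accepts something" would decide halting.
# "Machines" here are generators yielding once per step; the step
# budget is a stand-in for the undecidable halting predicate.

def halts(machine, inp, budget=1000):
    """Step-bounded stand-in for the (undecidable) halting predicate."""
    gen = machine(inp)
    for _ in range(budget):
        try:
            next(gen)
        except StopIteration:
            return True       # simulation finished within the budget
    return False              # still running: treated as non-halting

def looper(_inp):
    while True:
        yield                 # never halts

def doubler(_inp):
    yield                     # one step, then halt

def reduce_to_acceptance(machine, inp):
    """Build M': accepts every input iff `machine` halts on `inp`."""
    def m_prime(_x):
        return halts(machine, inp)   # acceptance ignores M's own input
    return m_prime

print(reduce_to_acceptance(doubler, 3)(0))  # True: doubler halts on 3
print(reduce_to_acceptance(looper, 3)(0))   # False: looper never halts
```

The same shape of argument is behind Rice’s theorem: any nontrivial semantic property of what a machine computes is undecidable, and recursively inseparable r.e. sets show separability is no exception.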
I wish I didn’t have to see the deluge of coronaposts in my feed or under Latest Posts.
Now the infinity of a series consists in the fact that it can never be completed through successive synthesis.
I’m surprised how many philosophical arguments are based on the lack of imagination.