That’s partly because I’ve never seen a consistent top-to-bottom reasoning for it.
I think it’s difficult to find a consistent top-to-bottom story because the overall argument is disjunctive.
That is, a conjunction is the intersection of different events (“the sidewalk is wet and it’s raining” requires both “the sidewalk is wet” and “it’s raining” to be true), whereas a disjunction is the union of different (potentially overlapping) events (“the sidewalk is wet” can be reached by either “the sidewalk is wet and it’s raining” or “the sidewalk is wet and the fire hydrant is leaking”).
So if you have a conclusion, like “autonomous vehicles will be commercially available in 2030”, the more different ways there are for it to be true, the more likely it is. But also, the more different ways there are for it to be true, the less it makes sense to commit to any particular way. “Autonomous cars are commercially available in 2030 because Uber developed them” has more details, but those details are burdensome.
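To make that concrete, here’s a toy calculation; all the probabilities are invented for illustration, and the routes are treated as mutually exclusive for simplicity:

```python
# Toy illustration: a disjunction is at least as probable as any one
# of its disjuncts, while adding detail (a conjunction) can only
# lower probability. All numbers are made up for illustration.
p_uber = 0.10    # P(AVs commercially available in 2030 via Uber)
p_waymo = 0.15   # P(... via Waymo)
p_other = 0.20   # P(... via some other developer)

# Treating the routes as mutually exclusive for simplicity, the
# disjunction "AVs are commercially available in 2030" gets the sum:
p_any = p_uber + p_waymo + p_other   # 0.45, more than any one route

# But committing to a more detailed story costs probability:
p_detail = p_uber * 0.5   # e.g. "...and Uber also dominates ridesharing"
print(round(p_any, 2), round(p_detail, 2))
```

The point is just that each extra detail multiplies in a factor at most 1, so the specific story is never more probable than the bare conclusion.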
Also, it seems important to point out that the Bostromian position is about the future. That is, the state of autonomous vehicles today can tell you something about whether they’ll be commercially available in 2030, but there’s no hard evidence, and it takes careful reasoning just to reach provisional conclusions.
And thus, just like the state of neural networks in 2010 was only weakly informative about what would be possible in 2020, it seems reasonable to expect the state of things in 2020 to be only weakly informative about what will be possible in 2030. Which is a very different question from how you should try to solve practical problems now.
I think you run into a problem that most animal communication is closer to a library of different sounds, each of which maps to a whole message, than it is something whose content is determined by internal structure, so you don’t have the sort of corpus you need for unsupervised learning (while you do have the ability to do supervised learning).
I came in agreeing with several of the author’s conclusions (many ‘aesthetic’ breeds are animal cruelty, owning a dog as a single person with a full-time job is probably cruel, dogs are a poor substitute for children, etc.), and yet found something about the article highly offputting.
First is that I think “dominance” is the wrong frame, and having the wrong frame often generates lots of “well, I don’t disagree with what you’re saying here, but somehow I disagree with the whole thing.”
I think from the dog-owner’s perspective, the right frame is closer to ‘being needed.’ Think about the greentext about shrimp and this bit of The Bell Curve:
The broadest goal is a society in which people throughout the functional range of intelligence can find, and feel they have found, a valued place for themselves. For “valued place,” we offer a pragmatic definition: You occupy a valued place if other people would miss you if you were gone. The fact that you would be missed means that you were valued. Both the quality and quantity of valued places are important. Most people hope to find a soulmate for life, and that means someone who would “miss you” in the widest and most intense way. The definition captures the reason why children are so important in defining a valued place. But besides the quality of the valuing, quantity too is important. If a single person would miss you and no one else, you have a fragile hold on your place in society, no matter how much that one person cares for you. To have many different people who would miss you, in many different parts of your life and at many levels of intensity, is a hallmark of a person whose place is well and thoroughly valued. One way of thinking about policy options is to ask whether they aid or obstruct this goal of creating valued places.
That said, many of the same complaints apply—why not be needed for something productive, instead of manufacturing something where the need is the feature, instead of the bug? If no one in your life needs you, and you buy/rescue a dog and now one dog needs you, is that an improvement / is that healthy?
I think the answer is “yes,” and thinking about the word “healthy” clarifies why. Suppose someone is writing about food, and points out the ways in which food grown without pesticides is healthier than food grown with pesticides. If you’re worried about second-order effects, of what additional chemicals you’re ingesting, this is right; if you’re worried about first-order effects, of whether or not you’ll be malnourished, this is wrong. (As an important background fact, pesticides increase yields, such that organic farms produce fewer calories per unit of land and effort.)
In general, I try to be allergic to the “everyone should have &lt;luxury version&gt; of &lt;good&gt;” argument, because in fact people are better off in tenements than living on the street, and if we had more tenements we might have fewer people living on the street, and so banning tenements is probably harmful. Similarly with minimum wage laws, and so on and so on.
Consider this section:
What does it say about a human who enjoys this emotional transaction? It says that on some level they like the idea of having dominance over another being. And, they want that dominance to be a feature of their daily life.
I don’t think there is anything wrong with enjoying dominance, per se. Sexual dominance is clearly a popular tendency, and likewise, the desire to dominate others in competitions is a useful inborn characteristic which incentivizes ambitiousness and effort. I think identifying and pursuing both of those forms of dominance can bring pleasure and satisfaction in a healthy way.
That is, the author isn’t opposed to dominance, or A being better than B. They just think there are good ways to do it and sad ways to do it, and dog ownership is one of the sad ways. If we analogize to video games, they’re claiming that playing competitively is good, and only scrubs play against AI instead of other humans.
There’s a part of this that seems right—people who win at competitive video games are better at gaming than people who can’t win, and people who win competitions / status games are better at competing than people who can’t—but also a part that seems mistaken, in that not everyone can be above average, unless you include competitors that are ‘outside everyone’ (like AI opponents) while still engaging in the correct way.
And especially in the context of “minimum wage laws” or “looking down on the worse version of things”, it seems especially cruel to cut off opportunities for people who aren’t very needed / aren’t very respected to get an easy source of need and respect, not because it’s harmful but because it reflects poorly on them for being on the bottom of the pyramid. That is, in an ordered system, someone is going to be on the bottom, and we get to decide whether it’s people or dogs.
[There’s a different argument you can make, where you say the relationship is bad for the other side; I currently think it’s the case that humanity has made a pretty good deal with cows from the cow’s point of view, for example, but don’t think that humanity has made a pretty good deal with chickens from the chicken’s point of view. The author considers this argument but only accepts it in a limited way, in approximately the same way I do, but I think the ‘family dogs’ and ‘lapdogs’ have way more meaning than the author thinks.]
I also think that dogs probably have real feelings and don’t just act the part like this creepy robot child, although I wonder how one could actually test this.
I mean, this depends on what you mean by “real feelings,” but as far as I can tell the physiological cause of emotions is basically shared by all mammals. (If anything, emotions likely play a larger part in the mental processing of non-human animals, because there are fewer deliberative faculties to play against them.)
But ultimately, nurses/EMTs/medical students can be trained to do all this in a few days, if they’re competent and confident and have adequate back-up in case of issues.
I think we might want to be in the world where we train a substantial fraction of the recently unemployed, or the National Guard, or whoever to do this, which requires starting from a lower point than nurses/EMTs/medical students.
They assume a fixed ICU capacity (where I think the main limit is ventilators, not just beds). Does anyone have any models/estimates of how much UK/US can expand intensive care capacity (e.g., with “wartime” style all hands on deck manufacturing and innovation)?
My understanding is that the treatment requires significant monitoring and skill; the ventilation is often invasive (they have to get the tube into your lung, rather than just into your mouth).
But for a while people have been suggesting compartmentalizing the medical system further. If you just want someone to be a ‘ventilator nurse’, able to intubate a patient and then manage a ventilator for that patient, could you do that with a 30-day training program? Seems likely and worthwhile, but will require some sort of emergency legislation to authorize in most places, and some rapid development of curricula and testing.
Similarly, expanding production runs into legal issues. You may have heard about the volunteers who 3D printed ICU valves; they asked the company for blueprints, and the company threatened to sue for the IP violation. You might also have heard about the patent troll who sued the makers of a COVID19 test for infringement; they dropped the case once it was public that the use was a COVID19 test. It seems like a potentially sensible government action here is to nationalize (or otherwise force licensing) of technology that’s useful in a disaster, with the government paying for the IP after-the-fact based on actual usage out of the overall disaster response fund.
But in general, our ‘peacetime’ standards for medical devices are very high. If you want to take your toaster factory (or w/e) and start spitting out ventilators instead, there’s a lengthy approval process because this is complicated stuff with many ways things can go wrong. When the alternative is nothing, it’s probably good to have rush jobs available, but there’s nothing in place (that I’m aware of) to allow this sort of rapid ramping.
This abstract says “Sauna takers should avoid bathing during acute respiratory infections.” I haven’t read the paper to figure out why they think that.
Dalio even thought about COVID significantly before the crash.
I think he thought about the reference class of pandemics, more than he thought about COVID. I think the key details in this becoming as bad as it is are mostly missing from that post.
Also, “viable virus could be detected” seems potentially different from “you could get infected from handling it” (in either direction). Your immune system is more robust than the cell culture they use for detection, so many ‘detectable levels of virus’ are still safe; and the detection method requires diluting the contaminated swab, which means that coming into contact with the undiluted thing might carry nonzero risk even when the test can’t detect anything.
But based on my limited understanding of what’s going on, the threshold is within a doubling or two of what’s sensible; not diluting at all should correspond to the 10^0 line on the graph, which their model suggests should be hit within 20-45 (point estimate: 30) hours of when virus is deposited on the box. So unless you’re somehow managing to concentrate the virus, that’s the point at which it couldn’t infect even a cell culture, which has no immune system to fight it off.
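As a rough back-of-the-envelope for how a figure like 30 hours falls out, assuming simple exponential decay; the half-life and starting titer below are illustrative assumptions on my part, not numbers from the paper:

```python
import math

# Sketch of the time-to-threshold reasoning, assuming simple
# exponential decay. The half-life and starting titer here are
# illustrative assumptions, not numbers from the paper.
half_life_hours = 3.0   # assumed viral half-life on the surface
initial_titer = 1e3     # assumed starting titer, TCID50/mL
threshold = 1e0         # the 10^0 detection line on the graph

# Number of halvings needed to decay from the initial titer down
# to the threshold, and the corresponding elapsed time:
halvings = math.log2(initial_titer / threshold)
t_hours = half_life_hours * halvings
print(t_hours)  # ~29.9 hours, inside the 20-45 hour range above
```

With these made-up inputs you land near the 30-hour point estimate; a longer half-life or a higher initial titer pushes the crossing time out proportionally.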
I’m glad that they ran this experiment, since this is a core uncertainty, and think we should accept that the price of posting drafts is that you have to read them carefully to spot potential errors; having the graph of the raw data a few days earlier was worth having to eyeball the analysis myself to check their numbers.
I’m working off this paper, which did test cardboard.
According to some as summarized by wikipedia, there’s not all that much evidence that people who didn’t prepare were bitten by it, or that fixing ahead of time was cheaper / better than fix-on-failure.
From A Technical Explanation of Technical Explanation:
How would I explain the event of my left arm being replaced by a blue tentacle? The answer is that I wouldn’t. It isn’t going to happen.
If a miracle happens, then a miracle happens. I’m not holding my breath.
The ways in which I do expect Vaniver_2021 to look back at Vaniver_2020 and think “yeah, he was worried about that but it didn’t turn out to be relevant” are various unknowns about the virus that might be fine or might be bad. For example, we don’t know how bad surface transmission will be, but that’s a big factor in what sort of isolation protocols you need to have. We don’t know whether existing anti-virals will be effective. We don’t know how long immunity will last, but that’s a big factor in whether or not ‘herd immunity’ strategies will work, and how valuable it is to not catch it. We don’t know how big a deal antibody-dependent enhancement will be, or how that will interact with the duration of immunity. We don’t know what long-term effects of infection (think fatigue, disability, infertility, etc.) look like. We don’t know how long people are infectious before they show noticeable symptoms.
For all of those things, I put significant probability on the “it’s fine” side of the uncertainty. But it being not fine is quite bad compared to it being fine, such that the expected utility shakes out that I should take it seriously until we know more. For example, I now think that if you’re taking your temperature every day, the “infectious before noticeable symptoms” window is probably about a day, which seems pretty tolerable, but don’t think I made a mistake in my assessment before. If the long-term disability risk turns out to be closer to 1% than 10%, then I’ll adjust my prior on long-term disability for next time (in the obvious way that I’ll have two datapoints instead of one), but I won’t think “oh, I cried wolf.”
wait a day, and the virus will be dead to an extent that you can’t get infected.
The paper that I’ve seen that tried to estimate this only reported TCID50/mL; how do you convert from that to infection risk?
[I think their methodology also might have been the equivalent of ‘licking the box’ instead of, say, touching the box with your finger and then touching your lips with your finger and then licking your lips, but for simplicity’s sake let’s assume I’m licking the box.]
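If you did have a dose estimate, one standard way to turn it into a risk number is an exponential dose-response model, P(infection) = 1 - exp(-dose/k). This is a common modeling choice, not something from the paper, and both the parameter k and the dose below are purely illustrative:

```python
import math

# Sketch of an exponential dose-response model, a common way to
# turn an ingested dose into an infection probability:
#   P(infection) = 1 - exp(-dose / k)
# where k is a pathogen-specific parameter. Both k and the dose
# below are illustrative assumptions, not values from the paper.
def infection_prob(dose_tcid50: float, k: float) -> float:
    return 1.0 - math.exp(-dose_tcid50 / k)

k = 100.0     # assumed dose-response parameter
dose = 10.0   # assumed dose picked up by licking the box
print(infection_prob(dose, k))
```

The hard part remains the unknowns: the true k for this virus, and how much of the surface titer actually transfers to you, which is exactly the ‘licking the box’ question in the bracketed caveat above.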
Is somebody keeping track of the “what if we’re wrong and it turns out this is another Y2K” scenario?
That is, a combination of “prevention work successfully means no big disasters” and “absence of prevention work doesn’t cause any major disasters”? I think that cat is already out of the bag on the latter one; people might end up disagreeing on whether it was better to be in Iran or Wuhan, but they won’t be able to disagree that the lockdown in Wuhan had an effect.
I think there will be variation in what sorts of social distancing happen, which we should be able to back out data on, and similarly demonstrate that social distancing had an effect. (I expect it’ll be smaller than many people hope it’ll be, but it’ll still be noticeable.) Like, we could see the effect in 1918 influenza data, and we have a much better ability now to track how people come into contact with each other.
[I expect the main thing to happen is that people take insufficient protective measures, which makes them look like a waste, or we get stuff like “ah look, extensive social distancing meant the peak happened two weeks later!”, which is of unclear value compared to the costs.]
See also discussion of this (and related) papers here.
Online schools have been around for a while, but I think are generally less popular, and the main users are people who live too far from a regular school to think it’s worth the trip, or who got bullied too much by other students, or so on.
More broadly, tho, I think the thing schools are selling is a package, and the ‘book learning’ part of it is something like a third (or less) of the value of that package for the typical student (and parent), but is the main bit that can also be duplicated by online schooling.
Ah, good to know!
Looks like he tested positive and then negative? Hopefully that means he doesn’t have it, but I seem to recall these tests can be inaccurate in both ways.