Checked through the microCOVID model and found you marked everyone as silent. Technically, sure, no one would really be shouting while they’re dancing, but they’ll be breathing heavily enough that exhaled droplets/aerosols/whatever would be similar. Choosing “loud” as the option increases everything by a factor of about 20.
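To make the “factor of about 20” concrete, here’s a minimal sketch of how a vocalization multiplier scales a baseline risk estimate. The multipliers below are from my memory of the microCOVID white paper (silent ×0.2, normal ×1, loud ×5) and may not match the current calculator exactly, so treat this as an illustration rather than microCOVID’s actual code:

```python
# Hypothetical sketch of a vocalization multiplier, modeled on my
# recollection of the microCOVID white paper's values. Not their real code.
VOCALIZATION_MULTIPLIER = {"silent": 0.2, "normal": 1.0, "loud": 5.0}

def scaled_risk(base_microcovids: float, vocalization: str) -> float:
    """Scale a baseline activity risk by the vocalization multiplier."""
    return base_microcovids * VOCALIZATION_MULTIPLIER[vocalization]

# Loud vs. silent: 5 / 0.2 = 25x, i.e. "a factor of about 20"
ratio = scaled_risk(100, "loud") / scaled_risk(100, "silent")
print(ratio)  # 25.0
```

Under these assumed multipliers, flipping silent to loud is a 25× swing, which is where the rough “factor of about 20” comes from.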
Celarix
Deploying UVC to disinfect large spaces might be infeasible, but would it be easier to have smaller UVC lamps inside ventilation ducts and let the air pass under them at a higher rate? You get a much closer lamp, much more airflow, and don’t have to expose anyone to UVC directly.
Good afternoon, everyone. I’m happy to be here.
I’ve been following the rationality movement for a few years now, and I’ve been going back and forth on joining for about as long. My first introduction to LessWrong was through the posts on akrasia and techniques that might help with that. I followed that with reading some of SSC’s greatest hits. Meditations on Moloch haunts me and I now see Molochian influence in a lot of places these days.
I think I’m joining now because I want to handle uncertainty better. Uncertainty gives me a knot in my chest and a buzzing noise in my mind; it makes me uncomfortable and demands my attention. I want the universe to have clear, sharp, definitive answers on everything that could be found with just enough experimentation, logical thinking, and equipment sensitivity… but that’s not the way things work. I want to learn to sit with uncertainty, to not tie myself in knots trying to find the answer.
I’ll likely read more than write. I’m just glad that a place like this exists.
My review mostly concerns the SMTM A Chemical Hunger part of this review. RaDVaC was interesting if not particularly useful, but SMTM’s series has been noted by many commenters to be a strange theory, possibly damaging, and there was, as of my last check, no response from SMTM to the various rebuttals.
It does not behoove rationalism to have members who do not respond to critical looks at their theories. Such theories stand to do a lot of damage and cost a lot of lives if taken seriously.
I don’t know that we would have the political will to clearcut the Amazon and switch out our crop supply, even in the face of >70% death by starvation. Political polarization is very high right now. When either side proposes anything, the other will oppose it, even if it would save humanity.
You might rightly say “but starvation is a powerful motivator!” and it is, but the people doing the starving won’t be the people who could move the crops—the farmers and the world’s lumber industry, who would be foiled by politics at every turn. The starving people will be too hungry to really do anything.
So maybe not 5 billion dead, but I wouldn’t be surprised at about 3 billion.
This is in the heart of wine country where grapes grow in abundance and wheat waves like golden seas- but not now. Now the wheat burns and the grapes wither to raisins on the vine. This is the end of days. And on Monday morning I’ll return to work and pretend this isn’t happening. It’s complete madness.
This seems like a pretty common pattern in argument and debate, which I’ll tentatively call “piggybacked claims”—make a claim with some evidence (“this river’s dry, it’s really rare, here’s a picture”), then add on additional claims that may logically follow, but have no evidence of their own.
Is the wheat really burning? Are the grapes really raisins on the vine? Is this the end of days? Maybe, but the claimant doesn’t seem to want to demonstrate that. Surely there’d be pictures of the raisins, right?
I’d say kind of… you definitely have to keep your attention and wits about you on the road, but if you’re relying on anxiety and unease to help you drive, you’re probably actually doing a bit worse than optimal safety—too quick to assume that something bad will happen, likely to overcorrect and possibly cause a crash.
This is one of my favorite sequences on this site and I’m quite glad to see a new entry. I do have a question regarding the last section:
Rather, I would suggest opening up to feelings. Becoming familiar with them, understanding where they come from and what they are trying to do, and allowing them to become updated with new evidence and feedback.
How does one gain confidence that the read on their own emotions is an accurate description of the message they’re trying to communicate? That is, how can one be more sure that they’re actually listening to their emotions and not just assuming?
For example, many of us might be familiar with the type that listens to half of your description of an issue, assumes they immediately understand it perfectly, then gives you advice that doesn’t match your problem at all. (“I’ve been feeling sad late-” “oh yeah I know, man. Just get some more sleep, you’ll perk right up!”) How do I know I’m not doing that to my own emotions?
It seems like the Rationalist approach to psychology has reached some incredibly important yet very subtle places where the valuable signals we want to pay attention to (i.e. the true intent of an emotion) are incredibly weak. People wander the metaphorical wilderness for decades without truly seeing what’s going on in their heads, many of whom regularly go to therapy. I’m afraid of ascribing the completely wrong message to what my emotions are trying to tell me and getting stuck examining the wrong model for large parts of my life.
Anyway, an excellent post in an excellent sequence. Your work and Valentine’s work, more than many others here, have made things make sense to me. Thank you!
We must be vigilant, they remind us, about taxpayer dollars. The important thing about creating the next pandemic is the same as the important thing about preventing the next pandemic, which is making sure our tax dollars do not pay for it.
I think, as a corollary to never interrupt your enemy while they’re in the process of making a mistake, we may adopt never interrupt someone when they’re doing something you want, even for the wrong reason. Or, at least, maybe “taxpayer dollars” is the good-sounding excuse and they actually want to ban GoF for its world-ending powers and just can’t say that directly.
I’d like to push back a bit against the downsides of being overconfident, which I think you undersell. Investing in a bad stock could lose you all your investment money (shorting even more so). Pursuing an ultimately bad startup idea might not hurt too much, unless you’ve gotten far enough that you have offices and VC dollars and people who need their paychecks. For something like COVID, mere overstocking of supplies probably won’t hurt, but you’ll lose a lot of social clout if you decide to get to a bunker for something that may end up harmless.
Risk is risk, and the more invested you are in something, the more you have to lose—stocks, startups, respiratory diseases. I fear being overconfident would lead to a lot of failure and pain. Almost everything in idea space is wrong, and humanity has clustered around the stuff that’s mostly right already.
Maybe one angle is clean vs. dirty? Ancient imagery brings to mind dust, rust, yellowing of paper and bleaching by the sun. If one looks at the future as the opposite of the past, we’d imagine it clean and bright.
Other future-as-inversion-of-past ideas:
The past was brutal and violent; the future is peaceful and harmonized (well, ignoring extraterrestrial space war)
The past was concerned with frivolity, the future is concerned with important things like science, technology, fairness, etc.
This is a bit of a stretch, but maybe the past was information-poor: lossy, poorly preserved, easily lost documents in ambiguous old language vs. modern, lossless, high-fidelity recordings of knowledge en masse
If you steelman a position and can’t knock it down, that indicates that you may be wrong about your point, which, IMO, is valuable. Recognizing error in ourselves has a much higher return than recognizing error in others.
Yeah, I definitely agree − 20% is a big increase. 400 extra calories per day (assuming 3500 calories = 1 pound) is an extra 41.7 pounds per year.
I was so excited by A Chemical Hunger when it was coming out. Oh, well.
Good summary, I feel like it makes a lot more sense when not couched in obscure language that seemed to be begging to be misinterpreted.
Someone’s going to link Kaj Sotala’s Multiagent Models of Mind sequence in response, so it might as well be me. Seems to fit nicely with the idea of humans as merely a pile of mesa-optimizers.
One question I’ve wanted to ask about subagents: what should you do if you determine that a subagent wants something that’s actually bad for you—perhaps continuing to use some addictive substance for its qualities in particular (rather than as a way to avoid something else), being confrontational to someone who’s wronged you, or other such things?
In other words, what do you do if the subagent’s needs must be answered with no? I don’t know how that fits in with becoming trustworthy to your subagents.
I’m afraid I don’t have the time for a full writeup, but the Stack Exchange community went through a similar problem: should the site have a place to discuss the site? Jeff Atwood, cofounder, said [no](https://blog.codinghorror.com/meta-is-murder/) initially, but the community wanted a site-to-discuss-the-site so badly, they considered even a lowly phpBB instance. Atwood eventually [realized he was wrong](https://blog.codinghorror.com/listen-to-your-community-but-dont-let-them-tell-you-what-to-do/) and endorsed the concept of Meta StackExchange.
It’s like how on days when you’re sick or depressed, you think that life is always like this, and you can’t remember what it’s like to feel happy or healthy, and then a week later when you’re happy and healthy, it feels like you’ve always been that way.
Can confirm. I call it the “Valley of Fog” effect—either you’re in the valley (sickness, pandemic) among the sharp rocks and rough terrain and you can’t see the sun (happiness, wellness, bustling streets), or you’re above the valley and can’t see the sharp rocks through all the fog. You remember that things used to be bad but you forget the feelings attached to it.
Ancient creation myths filled the universe with human figures – jealous lovers and loving fathers and whatnot. Later it turned out these myths were more like mirrors than telescopes, and only by ditching them in favor of real telescopes and the cold abstractions of mathematics could we make progress.
Excellent saying.
The Pessimist
You are concerned about your health. But you don’t feel empowered to do anything about it.
Health looks to be complicated: exercise, diet, cooking, and so forth. Other people can do it, it’s easy for them, and they are doing it now. You hope to be at their level one day, but, right now, you feel like you’re missing something foundational. You feel like you’re trying to build a skyscraper out of mud and sticks, like you have so far to go before you can even begin to think about your health and its complexities. You’re surrounded by health grad students and you’re just entering health Kindergarten.
Beneath that is a gnawing feeling that maybe good health is intractable—that something about Western life is causing us to be less healthy and more obese, and we don’t currently know what. We may never know. Billions of dollars and millions of man-years will go into searching under the streetlamp for the causes of bad health, and 10,000 years may pass before surviving humans finally adapt through natural selection.
Calorie intake does lead to higher body mass, and 20% is a big increase. I do not disagree with you, there. However, something made people eat 20% more calories, be it palatable foods, lithium (though, now, probably not), PFAS, etc.
My hope (desperate hope, really) is if we can find the cause of the 20% increase, we can reverse the obesity trends.