How is confidence different from the belief you have in your own competence? It seems like your self-reported confidence and your belief in your own competence should always be the same.
Is there something I’m missing, some way that confidence is distinct from belief in competence?
I searched the CDC’s Vaccine Adverse Event Reporting System (VAERS) and there are 474 reported cases of abnormal blood pressure following COVID-19 vaccination. Searching further on Google, I found a study (n = 113) which indicated increased risk of high blood pressure after vaccination, especially after previous infection.
Plainly, not everyone in the healthcare system is on the same page about side effects. I’d err on the side of the Walgreens person you talked to being more accurate, given that high blood pressure is a known side effect. Not known by that Nebraska Medicine doctor, apparently.
Supererogatory morality has never made sense to me previously. Obviously, either doing the thing is optimally moral, in which case you ought to do it, or it isn’t, in which case you should instead do the optimally moral thing. Surely you are morally blameworthy for explicitly choosing not to do good regardless. You cannot simply buy a video game instead of mosquito nets because the latter is “optional”, right?
I read about slack recently. I nodded and made affirmative noises in my head, excited to have learned a new concept that surely had use in the pursuit of rationality. Obviously we cannot be at 100% at all times, for all these good reasons and in all these good cases! I then clicked off and found another cool concept on LessWrong.
I then randomly stumbled upon an article that offhandedly made a supererogatory moral claim. Something clicked in my brain and I thought “That’s just slack applied to morality, isn’t it?”. Enthralled by the insight, I decided this was as good an opportunity as ever to make my first Shortform. I had failed to think deeply enough about slack to actually integrate it into my beliefs. This was something to work on in the future to up my rationalist game, but I also get to pat myself on the back for realizing it.
Isn’t my acceptance of slack still in direct conflict with my current non-acceptance of supererogatory morality? And wasn’t I just about to conclude without actually reconciling the two positions?

Oh. Looks like I still have some actual work ahead of me, and some more learning to do.
The only difference between this and current methods of painless and quick suicide is how “easy” it is for such an intention and understanding to turn into an actual case of non-existence.
Building the rooms everywhere and recommending their use to anyone with such an intention (“providing” them) makes suicide maximally “easy” in this sense. On a surface level, this increases freedom, and allows people to better achieve their current goals.
But what causes such grounded intentions? Does providing such rooms make such conclusions easier to come to? If someone says they are analyzing the consequences and might intend to kill themselves soon, what do we do? Currently, as a society, we force people to stay alive, tell them how important their life is, how their family would suffer, that suicide is a sin, and so on, and we do this to everyone who is part of society.

None of these classic generic arguments will make sense anymore. As soon as you acknowledge that some people ought to push the button, and that anyone might need to consider such a thing at any time, you have to explain specifically why this particular person shouldn’t right now, if you want to reduce their suicidal intentions. The fact that someone considering suicide happens to think of their family as a counter-reason is due to the universal societal meme, not its status as a rational reason (which it may very well happen to be).
We can designate certain groups (i.e. the terminally ill) as special, and restrict the rooms to them, creating new memes for everyone else to use based in their health, but the old memes remain broken, and the new ones may not be as strong.
I suspect that the main impact of providing the rooms will be socially encouraging suicide, regardless of what else we try to do, even if we tell ourselves we are only providing a choice for those who want it.
Interesting, that was something I considered, but didn’t think was included in the idea of confidence. I have experienced that before. The stakes of a situation also seem like an objective fact, like competence. Perhaps the subjective evaluations of stakes and competence are entangled in the feeling of confidence. Maybe it has something to do with low variance of outcomes? If you have done something a lot, or if it doesn’t really matter, then there isn’t anything to worry about, because nothing that matters is up for grabs in the situation.
If you compare deaths to harms, you can end up scared of vaccines or Covid, depending on which you compare. If no one died of a vaccine in your group but one or two people were hurt by Covid, you will be scared of Covid. The question is, where does the framing come from? If no one died of Covid or a vaccine in your group (which seems to be the most likely case for a given group), which do you become scared of, and why?
Rationalists may conceive of an AGI with great power, knowledge, and benevolence, and even believe that such a thing could exist in the future, but they do not currently believe it exists, nor that it would be maximal in any of those traits. If it has those traits to some degree, such a fact would need to be determined empirically based on the apparent actions of this AGI, and only then believed.
Such a being might come to be worshipped by rationalists, as they convert to AGI-theism. However, AGI-atheism is the obviously correct answer for the time being, for the same reason monotheistic-atheism is.
A vaccination requirement could result in lower apparent effectiveness; so could risk compensation. In order to determine how much risk compensation occurred, we have to determine how much the vaccination requirement lowered the effectiveness. Without that analysis, concluding that risk compensation has a big enough effect to cause or contribute significantly to negative effectiveness is premature.
I am otherwise unsure of what you are trying to get at. The unvaccinated were prevented from doing a risky activity, and the vaccinated were allowed to do the activity (with a lower risk due to their status), yes.
Suppose 50% of vaccinated people would attend this event, and so would 50% of unvaccinated people, after considering the risks (ergo, there is no risk compensation). However, only vaccinated people are allowed to go to the event. Then the vaccinated people could have increased rates of Covid compared to unvaccinated people because of being more likely to attend superspreader events, even though they did not increase their level of risk compared to the unvaccinated population.
Whether this is the actual reason for the apparent negative effectiveness would depend on the actual percentages, and how common/dangerous superspreader events really are.
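The selection effect above can be made concrete with a quick calculation. All numbers here are made up for illustration (the attendance fraction is the 50% from the scenario; the baseline risk, event risk, and true effectiveness are my assumptions), but they show how a vaccine that genuinely halves infection risk can still look "negatively effective" when only the vaccinated are allowed into a high-risk event:

```python
# Made-up numbers illustrating the event-access selection effect.
base_risk = 0.01       # background infection risk over the period (assumed)
event_risk = 0.20      # extra risk from attending the superspreader event (assumed)
true_ve = 0.5          # vaccine truly halves infection risk (assumed)
attend_fraction = 0.5  # 50% of either group would attend, per the scenario

# Only vaccinated people are admitted, so only they take on event_risk.
vax_rate = (attend_fraction * (1 - true_ve) * (base_risk + event_risk)
            + (1 - attend_fraction) * (1 - true_ve) * base_risk)
unvax_rate = base_risk  # unvaccinated are barred from the event entirely

apparent_ve = 1 - vax_rate / unvax_rate
print(f"vaccinated infection rate:   {vax_rate:.4f}")    # 0.0550
print(f"unvaccinated infection rate: {unvax_rate:.4f}")  # 0.0100
print(f"apparent effectiveness:      {apparent_ve:.2f}") # -4.50
```

With these inputs the vaccinated group ends up over five times as likely to be infected, purely because of who was allowed through the door, despite the vaccine working exactly as advertised within each attendance category.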
I’m wondering what the details of your friends’ reporting attempts are. Who exactly did they talk to? VAERS is the official U.S. reporting system; what were their experiences with that? If there is an underreporting problem, we need as many specifics as we can get to combat it. Given that some vaccines do have well-known side effects among certain demographics, lots of people have been able to report their side effects successfully. We would need to figure out why your friend group has been far less successful to correct the issue.
Without an explicit probability calculation, how exactly are we supposed to determine what the levels of side effects in reality are, vs what the medical data that has been collected and reported suggests, vs what the average person thinks is true? Perhaps all are biased and/or untrustworthy. I’m not sure where we can go from there. Has personal testimony from our own social groups become the best we can do?
In what way does this post do those bad things you mentioned? There is no mention of breaking innocent secrets, or secrets that would cause unjust ostracization, only patterns of actually harmful behavior.
If this post was made in confidence to you, would you tell others of it anyway?
This is something I’ve thought about recently. Even if you cannot identify your goals, you still have to make choices. The difficult part is in determining the distribution of possible M. In the end, I think the best I’ve been able to do is to follow convergent instrumental goals that will maximize the probability of fulfilling any goal, regardless of the actual distribution of goals. It is necessary to let go of any ego as well, since you cannot care about yourself more than another person if you don’t care about anything, now can you?
What is the mechanism, exactly? How do things unfold differently in high school vs. college with the laptop if someone attempts to steal it?
If an altruist falls on hard times, they can ask other altruists for help, and those altruists can decide to divert their charitable donations if they consider it worth more to help the altruist. If the altruists are donating to the same charities, it is very likely that restoring the ability to donate for the in-need altruist will more than pay for the donations diverted.
If charitable donations cannot be faked, and an altruist’s report of hard times preventing their charity can be trusted, then this will work to provide a financial buffer based purely on mutual interest.
Only if most altruists in the network fall on hard times does this fail, as there aren’t enough remaining charitable donations to redistribute. A global network of diversely employed altruists would minimize this risk.
Cases where an altruist is permanently knocked out of income (and therefore donation) would lack mutual interest. There would need to be a formal agreement to divert some charity for life to help them out, and this would most likely be separate from the prior network of mutual aid, and count as insurance.
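The mutual-interest arithmetic behind this can be sketched with hypothetical numbers (network size, donation amounts, and recovery time are all my assumptions, not anything from the scheme itself):

```python
# Hypothetical numbers for the mutual-aid argument: diverting a little
# charity to restore one altruist's giving can "pay for itself".
n_altruists = 20        # altruists in the network (assumed)
monthly_donation = 200  # what each normally gives per month (assumed)
support_needed = 1500   # one-time help to get one member through hard times (assumed)
months_restored = 24    # months of resumed giving by the helped member (assumed)

diverted_each = support_needed / (n_altruists - 1)   # cost per helping member
donations_restored = months_restored * monthly_donation
net_charity_gain = donations_restored - support_needed

print(f"each helper diverts ${diverted_each:.2f}")
print(f"net gain to the charities: ${net_charity_gain}")
```

As long as the helped altruist resumes donating for long enough that restored giving exceeds the support cost, the charities come out ahead in expectation, which is the "more than pay for the donations diverted" claim in numeric form.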
I appreciate the benefits of the karma system as a whole (sorting, hiding, and recommending comments based on perceived quality, as voted on by users and weighted by their own karma), but what are the benefits of specifically having the exact karma of comments be visible to anyone who reads them?
Some people in this thread have mentioned that they like that karma chugs along in the background: would it be even better if it were completely in the background, and stopped being an “Internet points” sort of thing like on all other social media? We are not immune to the effects of such things on rational thinking.
Sometimes in a discussion in comments, one party will be getting low karma on their posts, and the other high karma, and once you notice that you’ll be subject to increased bias when reading the comments. Unless we’re explicitly trying to bias ourselves towards posts others have upvoted, this seems to be operating against rationality.

Comments seem far more useful in helping writers make good posts. The “score” aspect of karma adds distracting social signaling, beyond what is necessary to keep posts prioritized properly. If I got X karma instead of Y karma for a post, it would tell me nothing about what I got right or wrong, and therefore wouldn’t help me make better posts in the future. It would only make me compare myself to everyone else and let my biases construct reasoning for the different scores.
A sort of “Popular Comment” badge could still automatically be applied to high-karma comments, if indicating that is considered valuable, but I’m not sure that it would be.
TL;DR: Hiding the explicit karma totals of comments would keep all the benefits of karma for the health of the site, reduce cognitive load on readers and writers, and reduce the impact of groupthink, with no apparent downsides. Are there any benefits to seeing such totals that I’ve overlooked?
Who exactly told you that?
What does it mean to Left-box, exactly? As in, under what specific scenarios are you making a choice between boxes, and choosing the Left box?
Perhaps such probabilities are based on intuition, and happen to be roughly accurate because the intuition has formed as a causal result of factors influencing the event? In order to be explicitly justified, one would need an explicit justification of intuition, or at least intuition within the field of knowledge in question.
I would say that such intuitions in many fields are too error-prone to justify any kind of accurate probability assessment. My personal answer then would be to discard probability assessments that cannot be justified, unless you have sufficient trust in your intuition about the statement in question.
What is your thinking on this prong of the dilemma (retracting your assessment of reasonableness on these probability assessments for which you have no justification)?
My approach was not helpful at all, which I can clearly see now. I’ll take another stab at your question.
You think it is reasonable to assign probabilities, but you also cannot explain how you do so or justify it. You are looking for such an explanation or justification, so that your assessment of reasonableness is backed by actual reason.
Are you unable to justify any probability assessments at all? Or is there some specific subset that you’re having trouble with? Or have I failed to understand your question properly?
I have a hypothesis that seems to fit the data. These numbers are not given out for the purpose of collecting data on vaccine side effects (that’s what VAERS is for). They are intended to provide specialized medical care directed at those who have recently gotten vaccines.
Evidence:
One commenter reported calling a Walgreens number. If this is representative, these are local pharmacy/medical practice numbers that people are calling, not some national reporting service.
Reassurance is one of the jobs of anyone providing medical care. “Even though you aren’t feeling well after the treatment, you have nothing to worry about; the treatment is safe.” is exactly what I would want someone to say if there was nothing either of us could do to help matters, especially if I was worried enough to call. You are especially likely to say so if you personally believe the vaccine is safe (which is very likely for someone answering such a number). If I were simply recording side effects, I wouldn’t bother with that.
If you already believe the side effect is caused by the vaccine and think it’s a very big deal, and then during the call they try to give the reassurance, you will instead distrust them, and also want to report their untrustworthiness to friends.
If you never call the number because you are not worried, or you do trust them, you have nothing notable to report. This would explain why every report looks like a reassurance that fell flat. Your sample is biased strongly towards looking exactly that way, regardless of how common side effects or the “there are no side effects” line actually is.
And all that assumes that this game of telephone, chaining between the medical establishment, the people taking the calls, your friends reporting the call, and then your fuzzy recollection, didn’t distort any of the data.
Currently, this “explains” your data for me. As in, I am no longer confused about your reports about your friends. I understand what happened, I think. There is no data collection rejection involved, at least not related to these calls.
Do you doubt this hypothesis? If so, what evidence could you provide against it? What evidence would we need to collect to figure out whether the hypothesis is true?
I would expect that if one called such a number, one could confirm that the other person is doing no data collection about the likelihood of side effects, that the line in context is intended for reassurance if it comes up, and the entire call will otherwise be completely in line with providing post-vaccine medical care. Averaging across multiple calls, of course.
If I’m wrong, I would expect that getting a full description of an entire call would show that the line in question is used as a shutdown, side effects are not being recorded (but they are supposed to be recorded every time according to the rules of the job), there is no reasonable medical triage going on, and the numbers in question are intended purely to advocate for vaccine safety. Also averaging across multiple calls.