I just got a “New users interested in dialoguing with you (not a match yet)” notification and when I clicked on it the first thing I saw was that exactly one person in my Top Voted users list was marked as recently active in dialogue matching. I don’t vote much so my Top Voted users list is in fact an All Voted users list. This means that either the new user interested in dialoguing with me is the one guy who is conspicuously presented at the top of my page, or it’s some random that I’ve never interacted with and have no way of matching.
This is technically not a privacy violation because it could be some random, but I have to imagine this is leaking more bits of information than you intended it to (it’s way more than a 5:1 update), so I figured I’d report it as a ~~bug~~ unanticipated feature.
It further occurs to me that anyone who was dedicated to extracting information from the system could completely deanonymize their matches by setting a simple script to scrape https://www.lesswrong.com/dialogueMatching every minute or so and cross-referencing “new users interested” notifications with the moment someone shoots to the top of the “recently active in dialogue matching” list. It sounds like you don’t care about that kind of attack though so I guess I’m mentioning it for completeness.
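The cross-referencing step of the attack can be sketched as follows. This is a toy illustration, not working scraper code: the snapshot format and the `deanonymize` function are my own inventions, standing in for whatever a real script would parse off the dialogue-matching page.

```python
# Toy sketch of the cross-referencing attack described above.
# Assumes periodic snapshots of the "recently active in dialogue
# matching" list (most recent first), plus the timestamp of a
# "new users interested in dialoguing with you" notification.

def deanonymize(snapshots, notification_time, window=120):
    """Return users who newly shot to the top of the activity list
    within `window` seconds of the notification arriving."""
    suspects = set()
    prev_top = None
    for ts, active_list in snapshots:  # snapshots sorted by time
        top = active_list[0] if active_list else None
        if (top is not None and top != prev_top
                and abs(ts - notification_time) <= window):
            suspects.add(top)
        prev_top = top
    return suspects

snapshots = [
    (0,   ["alice", "bob"]),
    (60,  ["alice", "bob"]),
    (120, ["carol", "alice", "bob"]),  # carol shoots to the top
    (180, ["carol", "alice", "bob"]),
]
print(deanonymize(snapshots, notification_time=130))  # {'carol'}
```

With one-minute polling, the set of suspects for any given notification is usually a single name, which is the whole problem.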
Link is broken
> Sorry, you don’t have access to this page. This is usually because the post in question has been removed by the author.
All your examples of high-tier axioms seem to fall into the category of “necessary to proceed”, the sort of thing where you can’t really do any further epistemology if the proposition is false. How did the God axiom either have that quality or end up high on the list without it?
Surely some axioms can be more rationally chosen than others. For instance, “There is a teapot orbiting the sun somewhere between Earth and Mars” looks like a silly axiom, but “there is a round cube orbiting the sun somewhere between Earth and Mars” looks even sillier. Assuming the possibility of round cubes seems somehow more “epistemically expensive” than assuming the possibility of teapots.
If you are predicting that two people will never try to censor each other in the same domain, that also happens. If your theory is somehow compatible with that, then it sounds like there are a lot of epicycles in this “independent-mindedness” construct that ought to be explained rather than presented as self-evident.
> We only censor other people more-independent-minded than ourselves.
This predicts that two people will never try to censor each other, since it is impossible for A to be more independent-minded than B and also for B to be more independent-minded than A. However, people do engage in battles of mutual censorship, therefore the claim must be false.
The Law of Extremity seems to work against the Law of Maybe Calm The Fuck Down. If the median X isn’t worth worrying about, but most Xs you see are selected for being so extreme they can’t hide, then the fact you are seeing an X is evidence about its extremity and you should only calm down if an unusually extreme X is not worth worrying about.
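The selection effect here can be illustrated with a toy simulation (my own illustration; the severity distribution and the visibility threshold are assumptions): if only Xs above some extremity threshold are visible at all, the Xs you actually see are far more extreme than the median X.

```python
import random
import statistics

random.seed(0)

# Severity of all Xs in the population (exponential: most are mild).
population = [random.expovariate(1.0) for _ in range(100_000)]

# Only sufficiently extreme Xs "can't hide" and get observed.
THRESHOLD = 3.0
observed = [x for x in population if x > THRESHOLD]

print(f"median of all Xs:      {statistics.median(population):.2f}")
print(f"median of observed Xs: {statistics.median(observed):.2f}")
# The observed median is several times the population median, so
# "the median X is harmless" says little about the Xs you encounter.
```

In other words, the Law of Maybe Calm The Fuck Down is a claim about `population`, while your experience is drawn from `observed`.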
Surely they would use different language than “not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities” to describe a #metoo firing.
> It’s fine to include my responses in summaries from the dataset, but please remove it before making the data public (Example: “The average age of the respondents, including row 205, is 22.5”)
It’s not clear to me what this option is for. If someone doesn’t tick it, it seems like you are volunteering to remove their information even from summary averages, but that doesn’t make sense because at that point it seems to mean “I am filling out this survey but please throw it directly in the trash when I’m done.” Surely if someone wanted that kind of privacy they would simply not submit the survey?
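As I read it, the option is meant to draw a line between aggregate use and raw publication, something like the sketch below (field names and the two-tier handling are my guesses at the intent, reusing the 22.5 example from the option text):

```python
# Sketch of the two-tier handling the option seems to describe:
# every row counts toward summary statistics, but rows that opted
# out are dropped before the raw data is published.

respondents = [
    {"row": 204, "age": 25, "ok_to_publish": True},
    {"row": 205, "age": 20, "ok_to_publish": False},  # ticked the option
]

# Summaries include everyone, row 205 included.
average_age = sum(r["age"] for r in respondents) / len(respondents)
print(average_age)  # 22.5

# The public dataset omits the opted-out rows.
public_rows = [r for r in respondents if r["ok_to_publish"]]
print([r["row"] for r in public_rows])  # [204]
```

The puzzle is what *unticking* it would mean: presumably dropping the row from `respondents` entirely, which is the same as never submitting.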
That’s it! Thanks, I have no idea why shift+enter is special there.
That’s the one. I couldn’t get either solution to work:
>! I am told this text should be spoilered
:::spoiler And this text too:::
There is a narrative-driven videogame that does exactly this, but unfortunately I found the execution mediocre. I can’t get spoilers to work in comments or I’d name it. Edit: It’s
The other reason vegan advocates should care about the truth is that if you keep lying, people will notice and stop trusting you. Case in point: I am not a vegan, and I would describe my epistemic status as “not really open to persuasion” because I long ago noticed exactly the dynamics this post describes and concluded that I would be a fool to believe anything a vegan advocate told me. I could rigorously check every fact presented, but that takes forever; I’d rather just keep eating meat and spend my time in an epistemic environment that hasn’t declared war on me.
Separate from the moral issue, this is the kind of trick you can only pull once. I assume that almost everyone who received the “your selected response is currently in the minority” message believed it; that will not be the case next year.
Granting for the sake of argument that launching the missiles might not have triggered full-scale nuclear war, or that one might wish to define “destroy the world” in a way that is not met by most full-scale nuclear wars, I am still dissatisfied with virtue A. I think an important part of Petrov’s situation was that, whatever you think the button did, it’s really hard to find an upside to pushing it, whereas virtue A has been broadened to cover situations that are merely net bad, where one could imagine arguments for pushing the button. My initial post framing it in terms of certainty may have been poorly phrased.
Petrov was not the last link in the chain of launch authorization, which means his action wasn’t guaranteed to destroy the world, since someone further down the chain might have cast the same veto he did. So technically yes, Petrov was pushing a button labeled “destroy the world if my superior also thinks these missiles are real, otherwise do nothing”. For this reason I think Vasily Arkhipov day would be better, but it’s too late to change now. But I think that if the missiles had been launched, that destroys the world (which I use as shorthand covering outcomes that destroy less than literally all humans, as in “the game Fallout is set in the year 2161, after the world was destroyed by nuclear war”). And there is a very important difference between Petrov evaluating the uncertainty of “this is the button designed to destroy the world, which technically might get vetoed by my boss” and e.g. a nuclear scientist who has model uncertainty about the physics of igniting the planet’s atmosphere (which, yes, actual scientists ruled out years before the first test, but the hypothetical scientist works great for illustrative purposes). In Petrov’s case, nothing good can ever come of hitting the button, except perhaps selfishly, in that he might avoid personal punishment for failing in his button-hitting duties.
It seems quite easy to me. Imagine me stating “The sky is purple, if you come to the party I’ll introduce you to Alice.” If you come to the party then me performing the promised introduction honours a commitment I made, even though I also lied to you.
This is not responding to the interesting part of the post, but I did not vote in the poll because I felt like virtue A was a mangled form of the thing I care about for Petrov Day, and non-voting was the closest I could come to fouling my ballot in protest.

To me Petrov Day is about having a button labeled “destroy world” and choosing not to press it. Virtue A as described in the poll is about having a button labeled “maybe destroy world, I dunno, are you feeling lucky?” and choosing not to press it. This is a different definition which seems to have been engineered so that a holiday about avoiding certain doom can be made compatible with avoiding speculative doom due to, for instance, AI.

I would prefer that Petrov Day gets to be about Petrov, and “please Sam Altman, don’t risk turning the world into paperclips” gets a different day if there is demand for such a thing.
This explains why the honour system doesn’t do as much as one might hope, but it doesn’t address the initial question of why use explicitly optional vaccination instead of mandatory + honour system. If excluding the unvaccinated is desirable, then surely it remains desirable (if suboptimal) to exclude only those who are both unvaccinated and honest.
Scott Adams predicted Trump would win in a landslide. He wasn’t just overconfident, he was wrong! The fact that he’s not taking a status hit is because people keep reporting his prediction incompletely and no one bothers to confirm what he actually predicted (when I Google ‘Scott Adams Trump prediction’ in Incognito, the first two results say “landslide” in the first ten seconds and title, respectively).

Your first case is an example of something much worse than not updating fast enough.