https://michaelzuo.wordpress.com
michaelyzuo@gmail.com
Of course there could be people who fully read it and didn’t comment?
It clearly wasn’t meant to exclude every single possible reader on the internet that could have come across it. That would be a crazy interpretation.
At most, it can be read as calling out every single commentator underneath the post who did pretend to read all of it. And yes it’s clear not every commentator pretended that, so they wouldn’t fall into that category.
Trying to score points in such an obvious way is also pretty deceptive.
The more interesting question is why would anyone ever assume that to be the case, in the first place?
Unless they’ve literally never encountered a deceptive person in their life, it just seems implausible to not notice this.
How can your opinion even affect the probability of deception in the first place? It seems incapable of moving the needle in that way, so I don’t see the logical connection.
By definition, deception means that there might be some pretense/ulterior motives/deflection/tricks/etc… behind the face value reading of your comments.
This doesn’t make sense as a reply…
How is your opinion on perceived emotional expressiveness even relevant to the prior comment?
The motivation, after the double edit, is clearly to express surprise after connecting the dots and to enumerate it…
I wrote it in the most straightforward and direct manner possible?
After re-reading it twice, I see that it clearly implicates you too, so I get why you may be upset.
But even if it might have been better worded given more time… by definition all commentators under a post at least potentially voted. So I don’t see how the implication could have been avoided entirely while still getting the gist across.
Huh that is a really good point. There are way too many people with US/UK backgrounds to easily differentiate between the expert pretenders and the really substantial experts. It’s even getting harder to do so on LW for many topics as karma becomes less and less meaningful.
And I can’t imagine the secretary general’s office will have that much time to scrutinize each proposed candidate, so it might even be a positive thing overall.
The quoted text wasn’t an argument; it doesn’t make sense to pretend it was…?
It’s clearly an edit to add in my own personal opinion that I wasn’t seeking an argument about.
And frankly, probably no one fully read all of habryka’s post, including you. So it wouldn’t make sense at all.
Edit: I just realized that does imply the downvoters are also being mildly deceptive, since they would know they didn’t read the full text. So ironically it reinforces the original point in a counterintuitive way, and if you squint at it, it might imply an argument on the meta level of multiple deceivers roaming around… but then pretty much everyone who commented or voted would fall under suspicion too, so that would be a real stretch.
Double Edit: It’s somewhat of a startling implication, could literally everyone who voted under this post be behaving mildly deceptively? I didn’t even consider the possibility when I wrote the original comment but now am leaning towards that being the case, if typical forum norms of reading the full text are taken literally. Thanks for raising the unsettling point. I’ll take a bit of karma loss for that.
It might even be too reasonable…as there’s no real limit on what site administrators can do to their own site, they can replace all of LW with a giant poop emoji if they really wanted to, so such enormously long elaborations might be counterproductive even for the intended purpose.
At least to me, a few paragraphs of flawless, airtight logic are more genuinely convincing than dozens of paragraphs of less-than-airtight logic.
Speaking of which, I got the itch while writing this to add in an extra few sentences to elaborate in further detail… so there may be a subtle memetic effect too.
Edit: I seem to have attracted 4 random downvoters who appear too ashamed to even indicate a rationale. Which seems to indicate my comment touches upon something of substance.
This seems like an odd concern… if you take a walk around Stanford campus on a typical weekday you’ll almost certainly pass a few people much smarter than you likely were, or ever will be, at the same age, in every possible way I can think of. And that applies to nearly everyone on LW, yes, even Yudkowsky.
And clearly there’s no mass demotivation, as plenty of smart, but not literal-super-genius, people continue walking around Stanford? (They must, after all, make up the bulk of the student population.)
Honestly, this kind of response doesn’t make sense and is likely to produce the opposite effect, so I’ll try to put it as straightforwardly as possible.
Maybe it works in a perfect ideal world where everyone who touched the text is 100% trustworthy, 100% of the time.
But in the real world where clearly that’s not the case… and everyone shown in the author list has had ulterior motives at least once in their past, there’s simply no way for a passing reader to be sure there weren’t also ulterior motives in this instance.
Of course they can’t prove a negative either… but that’s the inherent nature of making claims without having completely solid proof.
Considering all groups to at least have an incipient potential of cult formation seems sensible?
It might not literally be a point of “cultishness” on a scale out of 10, but on a scale out of 100 that seems more sensible. It is true after all the risk can never be reduced to perfectly zero as long as the group exists.
I can’t think of any exceptions either…
This seems a bit tautological… since roughly half the population is below average in virtue, and will engage in all sorts of bad behavior if they think they can get away with it. Partly because we define good and bad relative to the population average.
And for most of the rest, strong enough incentives can induce them to behave the same, it happens even on this very forum, so when combined that’s most people already.
That doesn’t seem true in my experience. For example, I recently wanted to post a comment asking a question about the new book that’s been heavily promoted, and I found, only after writing it out, that So8res had inexplicably banned me from commenting.
And I can’t see any other place where I could post a specific question about that book “equally well”.
How do you know any of this to any degree of certainty?
Has anyone even demonstrated a semi-rigorous 50/50 argument for why “racing” would lead to “ultimately sacrificing the future”? If not, then clearly anything more contentious or claimed to be more certain would have an even higher bar to clear.
And that’s already pretty generous, probably below what it would take to publish into even third tier journals in many fields.
If most other commentators all accept seeing each other’s input… then why should a small minority’s opinion or preferences matter enough to change what the overwhelming majority can see or comment on, anywhere on this site?
I can’t think of any successful forum whatsoever where that is the case, other than those where the small minority is literally paying the majority somehow.
If it was a whitelist system where everyone is forbidden from commenting by default there might be a sensible argument here… but in the current norm it can only cause more issues down the road.
Over a long time frame there will definitely be some who exploit it to play tricks… and once that takes hold I’m pretty sure LW will go down the tubes, as even for the very virtuous and respectable… nobody is 100% confident that their decisions are free from any sort of politicking or status games whatsoever. And obviously for Duncan I doubt anyone is even 95% confident.
Good evals are better than nothing, but I don’t expect companies’ eval results to affect their safeguards or training/deployment decisions much in practice.
This seems to be a bit circular.
Who gets to decide what is the threshold for “good evals” in the first place… and how is it communicated?
I agree it’s strange to see a lot of writing effort put into something with such basic argumentation mistakes… it really seems disturbingly similar to lightly edited LLM output.
Though to be fair, many posts on LW nowadays seem to make really weird assumptions and/or skip intermediate steps.
Can you clarify what exactly is the argument you used? For why the extinction risk is much higher than most (all?) other things vying for their attention, such as asteroid impacts, WMDs, etc…
After an unknown amount of political influence was expended… so I don’t really see how this is useful information, unless there’s some way to know all the players involved and approximately gauge the influence expended by each?
Why would I reply on a public comment how exactly I detected this, assuming you do believe there is in fact some technique?
Asking me in a public comment to reveal techniques that would obviously help such pretenders evade better in the future is just nonsensical, at least put it in a DM.
And if you don’t believe there is any such technique, why pretend to ask in the first place?