From London, now living in the Santa Cruz mountains.
Paul Crowley
A plausible strategy would be to buy, say, 100 bitcoins for $1 each, then sell 10 at $10, 10 at $100, and so on up to 10 at $10,000. With this strategy you would have netted $111,000 and still hold 60 bitcoins.
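The arithmetic above can be checked with a quick sketch (prices and quantities as in the comment; the initial $100 purchase cost is netted out of the profit):

```python
# Ladder-selling sketch: buy 100 coins at $1, sell 10 at each tenfold price step.
cost = 100 * 1  # buy 100 bitcoins at $1 each

proceeds = 0
coins = 100
for price in [10, 100, 1_000, 10_000]:
    proceeds += 10 * price  # sell 10 coins at this price
    coins -= 10

profit = proceeds - cost
print(profit, coins)  # 111000 60
```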
“Even though gaining too much in pregnancy” is missing the word “weight”, I think.
I can’t work out where you’re going with the Qubes thing. Obviously a secure hypervisor wouldn’t imply a secure system, any more than a secure kernel implies a secure system in a non-hypervisor based system.
More deeply, you seem to imply that someone who has made a security error obviously lacks the security mindset. If only the mindset protected us from all errors; sadly it’s not so. But I’ve often been in the situation of trying to explain something security-related to a smart person, and sensing a gap that seemed wider than a mere lack of knowledge.
Please don’t bold your whole comment.
Looks like this hasn’t been marked as part of the “INADEQUATE EQUILIBRIA” sequence: unlike the others, it doesn’t carry the sequence banner, and it isn’t listed in the TOC.
I agree, if the USA had decided to take over the world at the end of WWII, it would have taken absolutely cataclysmic losses. I think it would still have ended up on top of what was left, and the world would have rebuilt, with the USA on top. But not being prepared to make such an awful sacrifice to grasp power probably comes under a different heading than “moral norms”.
There are many ways to then conclude that AGI is far away where far away means decades out. Not that decades out is all that far away. Eliezer conflating the two should freak you out. AGI reliably forty years away would be quite the fire alarm.
I don’t think I understand this point. Is the conflation between “having a model of the long term that builds on a short-term model” and “having any model of the long term”? If so, the conflation is akin to expecting climate scientists to predict the weather, and I agree that’s a slip-up, but my alarm level isn’t raised to “freaked out” yet; what am I missing?
I move in circles where asking “why is X bad” is as bad as X itself. So for the avoidance of doubt, I do not think that your comment here makes you a bad person.
I’m trying to imagine a conversation where one person expresses a preference about the other’s pubic hair that wouldn’t be inappropriate, and I’m struggling a little. Here’s what I’ve come up with:
A BDSM context in which that sort of thing is a negotiated part.
The two have been playing for a while and are intimate enough for that to be appropriate.
The other person asks, and gets an honest answer.
It sounds like none of these are what you have in mind; can you paint me a more detailed example?
Which parts do you think are not needed?
Dawkins’s “Middle World” idea seems relevant here. We live in Middle World, but we investigate phenomena across a wide range of scales in space and time. It would at least be a little surprising to discover that the pace at which we do it is special and hard to improve on.
Why no total winner?
Thank you! Hooray for this sort of thing :)
Also I have already read them all more than once and don’t plan to do so again just to get the badge :)
Facebook-like reactions.
I would like to be able to publicly say eg “hear hear” on a comment or post, without cluttering up the replies. Where the “like” button is absent eg on Livejournal, I sorely miss it. This is nothing to do with voting and should be wholly orthogonal; voting is anonymous and feeds into the ranking algorithm, where this is more like a comment that says very little and takes up minimal screen real estate, but allows people to get a quick feel for who thinks what about a comment.
Starting with “thumbs up” would be a big step forward, but I’d hope that other reactions would become available later, eg “disagree connotationally” or “haha” or “don’t like the tone” or “I want to help with this”. Each should be associated with a small graphic, with a hover-over to show the meaning as well as who applied the reaction. Like emoji in eg Discord and unlike Facebook, a single user can apply multiple reactions to the same comment, so I can say both “agree” and “don’t like the tone”.
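The model described above, with reactions public and votes anonymous and a single user free to apply several reactions, could be sketched roughly like this (class and field names are hypothetical, not any actual LW internals):

```python
from collections import defaultdict

# Hypothetical sketch of the proposed model: reactions are public
# (user, reaction) pairs kept separate from anonymous votes, and one
# user may apply several reactions to the same comment.
class Comment:
    def __init__(self):
        self.reactions = defaultdict(set)  # reaction name -> set of usernames
        self.votes = 0                     # anonymous, feeds the ranking only

    def react(self, user, reaction):
        self.reactions[reaction].add(user)

c = Comment()
c.react("alice", "agree")
c.react("alice", "don't like the tone")  # same user, second reaction
c.react("bob", "agree")
print(sorted(c.reactions["agree"]))  # ['alice', 'bob']
```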
I apologise for having buried this feature request in the depths of not one but two comment threads before putting it here :)
I think these are two wholly orthogonal functions: anonymous voting, and public comment badges. For badges, I’d like to see something much more like eg Discord where you can apply as many as you think apply, rather than Facebook where you can only apply at most one of the six options (eg both “agree” and “don’t like tone”).
EDIT: now a feature request.
I think publicly applying badges to a comment should be completely orthogonal to anonymously voting on it. EDIT: now a feature request.
Thank you all so much for doing this!
Eigenkarma should be rooted in the trust of a few accounts that are named in the LW configuration. If this seems unfair, then I strongly encourage you not to pursue fairness as a goal at all—I’m all in favour of a useful diversity of opinion, but I think Sybil attacks make fairness inherently synonymous with trivial vulnerability.
I am not sure whether votes on comments should be treated as votes on people. I think some people who make good comments would be bad moderators, while I’d vote up the weight of Carl Shulman’s votes even if he never commented.
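The “rooted in a few trusted accounts” idea above can be sketched as a personalized-PageRank-style iteration in which trust repeatedly teleports back to the named seed accounts, so a Sybil cluster that no trusted account endorses ends up with negligible weight. This is a minimal illustration under assumed parameters, not a proposal for the actual scoring rule:

```python
# Minimal eigenkarma sketch: trust flows along endorsements, but a fixed
# share always returns to the configured seed accounts, so weight cannot
# be manufactured by a self-endorsing Sybil ring.
def eigenkarma(endorsements, seeds, damping=0.85, iters=50):
    users = set(endorsements) | {u for es in endorsements.values() for u in es} | set(seeds)
    trust = {u: (1 / len(seeds) if u in seeds else 0.0) for u in users}
    seed_share = (1 - damping) / len(seeds)
    for _ in range(iters):
        new = {u: (seed_share if u in seeds else 0.0) for u in users}
        for u, targets in endorsements.items():
            if targets:
                share = damping * trust[u] / len(targets)
                for t in targets:
                    new[t] += share
        trust = new
    return trust

# Illustrative graph: the seed endorses "carol"; a Sybil ring endorses itself.
graph = {
    "admin": ["carol"],
    "carol": ["dave"],
    "sybil1": ["sybil2"],
    "sybil2": ["sybil1"],
}
scores = eigenkarma(graph, seeds=["admin"])
print(scores["carol"] > scores["sybil1"])  # True
```

The Sybil ring gets a score of zero however large it grows, because none of its trust originates from a seed; that is the sense in which rooting the system in named accounts trades “fairness” for Sybil resistance.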
The feature map link seems to be absent.
Thinking about it, I’d rather not make the self-rating visible. I’d rather encourage everyone to assume that the self-rating was always 2, and encourage that by non-technical means.
That makes sense. I’d like people to know when what they’re seeing is out of probation, so I’d rather say that even if you have set the slider to 4, you might still see some 3-rated comments that are expected to go to 4 later, and they’ll be marked as such, but that’s just a different way of saying the same thing.
Ainslie, not Ainslee. I found this super distracting for some reason, partly because his name is repeated so often.