This definitely belongs on the next survey!
Why do you read LessWrong? [ ] Rationality improvement [ ] Insight Porn [ ] Geek Social Fuzzies [ ] Self-Help Fuzzies [ ] Self-Help Utilons [ ] I enjoy reading the posts
A bit off-topic, but #lesswrong has an IRC bot that posts LessWrong posts, and, well, the proposal ended up both more specific and a lot more radical.
If these paths are viable, I desire to believe that they are viable.
If these paths are nonviable, I desire to believe that they are nonviable.
Does it do any good to take well-meaning optimistic suggestions seriously if they will in fact clearly not work? Obviously, if they will work, by all means we should discover that, because knowing which of those paths, if any, is most likely to work is galactically important. But I don't think they've been dismissed just because people thought the optimists needed to be taken down a peg. Reality does not owe us a reason for optimism.
Generally, when people are optimistic about one of those paths, it is not because they've given it deep thought and concluded that it is viable; it is because they are not aware of the voluminous debate and the reasons to believe that it will not work at all. And inasmuch as they insist on that path in the face of those arguments, it is often because they are lacking in security mindset: they are "looking for ways that things could work out" without considering how plausible or actionable each step on that path would actually be. If that's the mode they're in, then I don't see how encouraging their optimism will help.
Is the argument that any effort spent on any of those paths is worthwhile compared to thinking that nothing can be done?
edit: Of course, misplaced pessimism is just as disastrous. And on rereading, was that your argument? Sorry if I reacted to something you didn't say. If that's the take, I agree fully: if one of those approaches is in fact viable, misplaced pessimism is just as destructive. I just think the crux there is whether or not it is, in fact, viable, and how to discover that.
Not wanting to disagree or downplay, I just want to offer a different way to think about it.
When somebody tells me I don't exist (and this definitely happens), it all depends on what they're trying to do with it. If they're saying "you don't exist, so I don't need to worry about harming you, because the category of people who would be harmed is empty", then yeah, I feel hurt and offended and have the urge to speak up, probably loudly. But if they're just saying it while trying to analyze reality, like, "I don't think people like that exist, because my model doesn't allow for them", the first feeling I get is delight. I get to surprise you! You get to learn a new thing! Your model is gonna break and flex and fit new things into it!
Maybe I’m overly optimistic about people.
Katja Grace’s 2015 survey of NIPS and ICML researchers provided an aggregate forecast giving a 50% chance of HLMI occurring by 2060 and a 10% chance of it occurring by 2024.
2015 feels decades ago though. That’s before GPT-1!
(Today, seven years after the survey was conducted, you might want to update against the researchers who predicted HLMI by 2024.)
I would expect a survey done today to have more researchers predicting HLMI by 2024. Certainly I'd expect a median before 2060! My layman impression is that things have turned out to be easier for big language models, not harder.
The surveys urgently need to be updated.
You people are all weird. Showers are time I enjoy spending.
It’s the drying up that I need to optimize, and that’s not dependent on how long I was in the shower.
Minor QoL PSA: if you get indigestion every time you consume milk or milk products, you probably have lactose intolerance, which can be fixed semi-cheaply and effectively by taking a lactase supplement before consuming milk. Lactose intolerance is widespread in adults (statistics range from 33% to 75%). Lactase supplements are available without a prescription. Considering how useful milk is as a source of nutrients, not to mention how tasty it is…
Shouldn’t the king just make markets for “crop success if planted assuming three weeks” and “crop success if planted assuming ten years” and pick whichever is higher? Actually, shouldn’t the king define some metric for kingdom well-being (death rate, for instance) and make betting markets for this metric under his possible roughly-primitive actions?
This fable just seems to suggest that you can draw wrong inferences from betting markets by naively aggregating. But this was never in doubt, and does not disprove that you can draw valuable inferences, even in the particular example problem.
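If it helps to see the proposed decision rule spelled out: here is a minimal sketch, assuming the king can read a calibrated probability of crop success off each conditional market. The action names and prices are invented for illustration.

```python
# A minimal sketch of the conditional-market decision rule described above.
# Assumes each market's price is a calibrated probability of "crop success"
# conditional on the king announcing that policy (contracts voided otherwise).

conditional_markets = {
    "plant assuming three weeks of frost": 0.35,  # illustrative price
    "plant assuming ten years of frost": 0.60,    # illustrative price
}

def choose_policy(markets: dict) -> str:
    """Pick the policy whose conditional market gives crop success
    the highest probability."""
    return max(markets, key=markets.get)

if __name__ == "__main__":
    best = choose_policy(conditional_markets)
    print(f"Announce: {best!r} (P(success) = {conditional_markets[best]:.2f})")
```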
So awesomely stupid that it thinks that the goal ‘make humans happy’ could be satisfied by an action that makes every human on the planet say ‘This would NOT make me happy: Don’t do it!!!’
The AI is not stupid here. In fact, it’s right and they’re wrong. It will make them happy. Of course, the AI knows that they’re not happy in the present contemplating the wireheaded future that awaits them, but the AI is utilitarian and doesn’t care. They’ll just have to live with that cost while it works on the means to make them happy, at which point the temporary utility hit will be worth it.
The real answer is that they cared about more than just being happy. The AI also knows that, and it knows that it would have been wise for the humans to program it to care about all their values instead of just happiness. But what tells it to care?
The word “is” in all its forms. It encourages category thinking in lieu of focussing on the actual behavior or properties that make it meaningful to apply. Example: “is a clone really you?” Trying to even say that without using “is” poses a challenge. I believe it should be treated the same as goto: occasionally useful but usually a warning sign.
Not for the wirehead, but for the mind who died to create him.
I find myself thinking: if you’re so consistently unable to guess what people might mean, or why people might think something, maybe the problem is (at least some of the time) with your imagination.
Who cares who “the problem” is with? Text is supposed to be understood. The thing that attracted me to the Sequences to begin with was sensible, comprehensible and coherent explanations of complex concepts. Are we giving up on this? Or are people who value clear language and want to avoid misunderstandings (and may even be, dare I say, neuroatypical) no longer part of the target group, but instead someone to be suspicious of?
The Sequences exist to provide a canon of shared information and terminology to reference. If you can't explain something without referencing a term that is evidently not shared by everyone, and you not only don't bother to define it but react with hostility when pressed on it, then… frankly, I don't think that behavior is in keeping with the spirit of this blog.
“Look, if I go to college and get my degree, and I go start a traditional family with 4 kids, and I make 120k a year and vote for my favorite political party, and the decades pass and I get old but I’m doing pretty damn well by historical human standards; just by doing everything society would like me to, what use do I have for your ‘rationality’? Why should I change any of my actions from the societal default?”
You must have an answer for them. Saying rationality is systematized winning is ridiculous. It ignores that systematized winning is the default; you need to do more than that to be attractive. I think the strongest frame you can use to start really exploring the benefits of rationality is to ask yourself what advantage it has over societal defaults. When you give yourself permission to move away from the "systematized winning" definition, without the fear that you'll tie yourself in knots of paradox, it's then that you can really start to think about the subject concretely.
I mean, isn’t the answer to that, as laid out in the Sequences, that Rationality really doesn’t have anything to offer them? Tsuyoku Naritai, Something to Protect, etc. Eliezer made the Sequences because he needed people to be considering the evidence that AI was dangerous and was gonna kill everyone by default, so short-term give money to MIRI and/or long-term join up as a researcher. “No one truly searches for the Way until their parents have failed them, their Gods are dead and their tools have shattered in their hands.” I think it’s fair to say that the majority of people don’t have problems of that magnitude of impact in their lives; and in any case, anyone who cared that much would already have gone off to join an EA project. I’m not sure that Eliezer-style rationality needs to struggle for some way to justify its existence when the explicit goal of its existence has already largely been fulfilled. Most people don’t have one or two questions in their life that they absolutely, pass-or-die need to get right and where the answer is nontrivial. The societal default is a time-tested satisficing path.
When you are struggling to explain why something is true, make sure that it actually is true.
I mostly see where you’re coming from, but I think the reasonable answer to “point 1 or 2 is a false dichotomy” is this classic, uh, tumblr quote (from memory):
“People cannot just. At no time in the history of the human species has any person or group ever just. If your plan relies on people to just, then your plan will fail.”
This goes especially if the thing that comes after “just” is “just precommit.”
My expectation is that the people who espouse 1 or 2 expect that the people interacting with Vassar are incapable of precommitting to the required strength. I don’t know if they’re correct, but I’d expect them to be, because I think people are just really bad at precommitting in general. If precommitting were easy, I think we’d all be a lot more fit and get a lot more done. Also, Beeminder would be bankrupt.
I think this once again presupposes a lot of unestablished consensus: for one, that it’s trivial for people to generate hypotheses for undefined words, that this is a worthwhile skill, and that this is a proper approach to begin with. I don’t think that a post author should get to impose this level of ideological conformity onto a commenter, and it weirds me out how much the people on this site now seem to be agreeing that Said deserves censure for (verbosely and repeatedly) disagreeing with this position.
And then it seems to be doing a lot of high-distance inference from presuming a “typical” mindset on Said’s part and figuring out a lot of implications as to what they were doing, which is exactly the thing that Said wanted to avoid by not guessing a definition? Thus kind of proving their point?
More importantly, I at least consider providing hypotheses as to a definition obviously supererogatory. If you don’t know the meaning of a word in a text, then the meaning may be either obvious or obscured; the risk you take by asking is wasting somebody’s time for no reason. But I consider it far from shown that giving a hypothesis shortens this time at all, and more importantly, no such Schelling point has been established, so it seems a stretch of propriety to demand it as if it were an agreed-upon convention. Certainly the work to establish it as a convention should be done before the readership breaks out the mass downvotes; I mean, seriously, what the fuck, LessWrong?
Or a common factor caused both.
Yes, this effectively forces the network to use backward reasoning. It’s equivalent to saying “Please answer without thinking, then invent a justification.”
The whole power of chain-of-thought prompting comes from getting the network to reason before answering.
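To make the ordering concrete, here is a minimal sketch of the two prompt shapes. The question and templates are invented for illustration and don't call any particular model API.

```python
# A minimal sketch of the ordering issue described above. The question and
# prompt templates are made up for illustration; nothing here calls a real API.

QUESTION = (
    "A bat and a ball cost $1.10 together. The bat costs $1.00 more than "
    "the ball. How much does the ball cost?"
)

# Answer-first: the model must commit to an answer before any reasoning tokens
# exist, so the "justification" can only be invented after the fact.
answer_first = (
    f"{QUESTION}\n"
    "Answer: <final answer here>\n"
    "Justification: <explanation written after the answer is already fixed>"
)

# Reasoning-first: the chain of thought is generated before the answer, so the
# final answer can actually condition on the intermediate steps.
reasoning_first = (
    f"{QUESTION}\n"
    "Reasoning: <work through the problem step by step>\n"
    "Answer: <final answer, produced after the reasoning>"
)

print(answer_first)
print()
print(reasoning_first)
```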
This smells like a framing debate. More importantly, if an article is defining a common word in an unconventional way, my first assumption will be that it’s trying to argumentatively attack its own meaning while pretending it’s defeating the original meaning. I’m not sure it matters how clearly you’re defining your meaning; due to how human cognition works, this may be impossible to avoid without creating new terms.
In other words, I don’t think it’s that Scott missed the definitions as that he reflexively disregarded them as a rhetorical trick.
Requesting evidence is good behavior and should not be discouraged.