Some readers will mistakenly think that common Less Wrong views are more parochial than they really are.
This one is probably important. With non-LW newcomers to my meetup, I find it awkward to explain that we draw most of our ideas from LW: about 50% of my simulations of them conclude "these guys are some wacko internet cult." Only some of them come back.
I’d like to see at least some work on how to talk about LW without implying insularity. We can’t just drop the LW affiliation, because nowhere else really compares (even if everything on LW exists somewhere else, it doesn’t exist anywhere else all in the same place).
HI WE'RE AN INTERNET MEETUP GROUP!
The internet is low status due to the low barriers to entry. Mention higher-status things than the internet:
Future of Humanity Institute at Oxford (Hey, I've heard of Oxford)
Vinge (a published author many have heard of)
Center for Applied Rationality, which does real things in real life with real people
Talk about cognitive science. Talk about economics. Talk about anything but the internet. LessWrong? Oh it’s just for coordinating all the interesting people who are interested in these interesting things.
I’d like to see at least some work on how to talk about LW without implying insularity.
Name-drop like a motha...
“Did you read Predictably Irrational by Dan Ariely or Thinking Fast and Slow by Nobel laureate Daniel Kahneman? We study their field of predictable human thinking errors and try to figure out how best to apply those lessons to everyday human life so that we can learn how to make decisions that are more likely to achieve our goals.
“We talk some about Alan Turing’s idea that machines could one day become smarter than humans, and how shortly thereafter we might expect them to become more powerful than humans. One of the mathematicians who worked with him to crack the German Enigma code, I.J. Good, explained that a smarter-than-human machine could use its intelligence to improve its own intelligence. And since neuroscientists like Paul Glimcher at NYU and Kent Berridge at the University of Michigan are learning that what humans care about is incredibly complex, it’s unlikely that we’ll be able to figure out how to program smarter-than-human machines to respect every little detail of what we care about.”
Or, more meta-ly, you’re not going to be very persuasive if you ignore pathos and ethos. I think this might be a common failure mode for aspiring rationalists: we feel we shouldn’t have to worry about such things, but then we’re living in the should-world rather than the real world.
Name-dropping is a good solution for this, but in my experience people very seldom read what you name-drop, and in certain circles it comes off as a bit pretentious.
You can also exchange random Solomonoffs with the string “Solomonoff”. That way you can Solomonoff yourself out of any deep Solomonoff you find yourself in.