Here’s a few, courtesy of applying JWZ’s dadadodo to all the lines in the thread so far:
What does the best textbook on corrupted hardware. Dark Arts; Escher, Bach?
How could you credibly pre commit to see you as a compartmentalized belief?
I’m trying to be a cult.
Have super powers.
You’ve fallen prey to be condescending.
My current job has higher expected utility than you imagine; but in the sanity waterline.
Everyone is Far.
No idea is reliable? Have a lot of caring.
Conceptspace is the future.
Mysteriousness is a cult.
I’m going to be with the AI. I know the universe future.
Look, just generalize from the territory.
Everyone is bigger than you in Cute Puppies.
Emacs’ M-x dissociated-press yields babble, but with some interesting words in it: “knowledgeneralize”, “metacontrammer”, “contrationalist”, “choosequences”, “the universal priory”, “statististimate”, “fanfused”, “condescendista”, “frobability”, “dissolomonoff”, “optimprovement”, “estimagine”, “cooperaterline”, “pattern matchology”. The only sensible sentence it’s come up with is “I’m running on condescending”.
To give an idea of what these look like raw, here’s a paragraph of dadadodo:
What does the universe with ice cream trees. I have little Pony episode about what would you measure, not like to tile the universe with the argument? That’s just signaling virtue: Death is bad. That’s just a startup. I have little XML tags on corrupted hardware. Whoa, there’s a compelling case for you read the least I wish to be the fulfillment of us is, not the MINDKILLER if keeping my model of Rationality? Tsuyoku naritai! So after all over the bad; result, you’re running on that.
And here’s a similar-sized chunk of M-x dissociated-press:
You shou have now is white’ is true if an you imaginew Methods of in this riggerse tiled in paperclips. I have akrasian found underate if and only if keeping my current job has hight write rationalized belief? I cause can have you regenerate that say ‘moral’ It will bach? What die. You shods of Rationalith a good cause coherent extrapolass? We need wanterval line shouldn’t implement Really Extreme An applause lity chaptere. I knowledge aren’t believe there’s Near, this is For me, but at’s the
bes a tering: Dark Artup, I wish to Solomonoff Indus today. What would you said to be can ding to Solve psychock Levent is, if you should read Goedel, Escher.
Of these, I rather like:
I have little XML tags on corrupted hardware. shods of Rationalith extrapolass
The blended-words effect seems to give M-x dissociated-press a sort of Finnegans Wake atmosphere which dadadodo doesn’t have.
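The splicing trick behind that effect can be caricatured in a few lines. Below is a toy sketch, not Emacs's actual dissociated-press algorithm: it repeatedly jumps to another position in the text that shares the last few characters, and the character-level overlap is what welds words together into blends like "metacontrammer".

```python
import random

def dissociate(text, overlap=3, chunk=10, length=120, rng=random):
    """Emit runs of `chunk` characters, jumping between positions that
    share the trailing `overlap` characters -- the splice points are
    what produce blended words."""
    pos = rng.randrange(len(text) - chunk)
    out = text[pos:pos + chunk]
    while len(out) < length:
        key = out[-overlap:]
        # every place in the text where the trailing overlap also occurs
        starts = [i for i in range(len(text) - chunk)
                  if text[i:i + overlap] == key]
        if not starts:
            break
        pos = rng.choice(starts) + overlap
        out += text[pos:pos + chunk]
    return out

sample = "the map is not the territory, the map is not the sanity waterline"
print(dissociate(sample, overlap=3, chunk=6, length=60))
```

Because the jump target only needs to agree on a few characters, the continuation routinely lands mid-word, which is exactly the Finnegans Wake quality noted above.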
Not totally it, but I tried it on Eliezer’s “The 5-Second Level”. Highlights include:
I won’t socially kill you
Hope to reflect on consequentialist grounds
Say, what a vanilla ice cream, and not-indignation, and from green?
Associate to persuade anyone of how you were making the dreadful personal habit displays itself in a concrete example.
Rather you can’t bear the 5-second level?
To develop methods of teaching rationality skills, you need more practice to get lost in verbal mazes; we will tend to have our feet on the other person.
Be sufficiently averse to the fire department and see if that suggests anything.
Be sufficiently averse to the fire department and see if that suggests anything.
I do believe it suggests libertarianism. But I can’t be sure, as I can’t simply “be sufficiently averse” any more than I can force myself to believe something.
Still, that one seems to be a fairly reasonable sentence. If I were to learn only that one of these had been used in an LW article (by coincidence, not by a direct causal link), I would guess it was either that one or “I won’t socially kill you”.
I would be amazed if Scott Alexander has not used “I won’t socially kill you” at some point. Certainly he’s used some phrase along the lines of “people who won’t socially kill me”.
...and in fact, I checked, and the original article has basically the meaning I would have expected: “knowing that even if you make a mistake, it won’t socially kill you.” That particular phrase was pretty much lifted, just with the object changed.
the bot with the highest nonzero karma wins
I’m taking bets: how long after the bots start maximizing karma until the forum is tiled with
 /~\
| _ |
|| ||
|| ||
|| ||
|| ||
|   |
 `\/′
Imagining this in my head has sold me on this being a good idea. Or at least a mildly amusing idea that will have relatively minor negative externalities. (I’m reminded of Eliezer Facts)
Here at my company we were discussing an issue of linking economies in two virtual environments (creating a shared currency), and wrestling with some of the thornier problems of balance of payments, when it occurred to me “this is Germany and Greece”, a thought that wouldn’t have occurred to me without having followed your blog. Rather than continuing to run an emulator of you in my head, I thought I’d check to see if we couldn’t get the real you interested in what we are doing.
Edit: And that reminds me of the Reamde character Richard Forthrast giving Zula Forthrast a job at his video game company because of her geology expertise.
Sad as it is, this has potential to be effective outreach to Reddit, et al. Unless you’d like to do it yourself, or someone gives a good objection within a few days, I’ll be posting it in one or more subreddits, perhaps including the GEB readthrough I’m participating in.
I don’t use Reddit. If there’s interest in turning this into a video, I’m willing to film myself speaking some of my lines, but fear composing an entire video (ideally with several speakers) would take video editing skills and resources I don’t have.
I came into this thread with a negative set point, because I see the “Shit X says” meme as thoroughly without value: mere collections of stereotypes with no purpose other than to collect them. The OP confirmed this, and because my comment sorting happened to be set to New, I scrolled through some of the comments, almost all of which continued to confirm it. Then I re-sorted to Top and saw your post, and my mind was immediately tickled. Some of these are genuinely funny, and in fact have value as a collection of LW memes and short rationality quotes.
The “confidence interval” line should have a percentage (“What’s your 95% confidence interval?”).
“You make a compelling case for infanticide.”
“Can you link me to that study?”
“I think I’m going to solve psychology.” (“I think I’m going to solve metaethics.” “I think I’m going to solve Friendliness.”)
“My elephant wants a brownie.”
“Is that your true rejection?”
“I wanna be an upload!”
“Does that beat Reedspacer’s Lower Bound?”
“Let’s not throw all our money at the Society for Rare Diseases in Cute Puppies.”
“I have akrasia.”
“I’m cryocrastinating.”
“Do that and you’ll wind up with the universe tiled in paperclips.”
“So after we take over the world...”
“I want to optimize for fungibility here.”
“This looks like a collective action problem.”
“We can dissolve this question.” (“That’s a dissolved question.”)
“My model of you likes this.”
“Have you read Goedel, Escher, Bach?”
“What do the statistics say about cases in this reference class?”
“We need whiteboards.”
“I’m trying paleo.”
“I might write rationalist fanfiction of that.”
“That’s just an applause light.” (“That’s just a semantic stopsign.” “That’s just the teacher’s password.”)
“POLITICS IS THE MINDKILLER”
“If keeping my current job has higher expected utility than founding a startup, I wish to believe that keeping my current job has higher expected utility than founding a startup...”
“I think he’s just being metacontrarian.”
“Arguments are soldiers!”
“Not every change is an improvement, but every improvement is a change.”
“There are no ontologically basic mental entities!”
“I’m an aspiring rationalist.”
“Fun Theory!”
“The map is not the territory.”
“Let’s beware evaporative cooling, here.”
“It’s a sunk cost! Abandon it!”
“ERROR: POSTULATION OF GROUP SELECTION DETECTED”
“If you measure it and reward the measurement going up, you’ll get what you measure, not what you want.”
“Azathoth!”
“Death is bad.”
This is too much fuuuuuuuun
“She’s just signaling virtue.”
“Money is the unit of caring.”
“One-box!”
“Beliefs should constrain anticipations.”
“Existential risk...”
“I’ll cooperate if and only if the other person will cooperate if and only if I cooperate.”
“I’m going to update on that.”
“Tsuyoku naritai!”
“My utility function includes a term for the fulfillment of your utility function.”
“Yeah, it’s objective, but it’s subjectively objective.”
“I am a thousand shards of desire.”
“Whoa, there’s an inferential gap here that one of us is failing to bridge.”
“My coherent extrapolated volition says...”
“Humans aren’t agents.” (“I’m trying to be more agenty.” “Humans don’t really have goals.”)
“Wait, wait, this is turning into an argument about definitions.”
“Look, just rejecting religion and astrology doesn’t make someone rational.”
“No, no, you shouldn’t implement Really Extreme Altruism. Unless the alternative is doing it without, anyway...”
“I’ll be the Gatekeeper, you be the AI.”
“That’s Near, this is Far.”
“Don’t fall into bottom-line thinking like that.”
I think I’m done. If I think of any more I’ll add them to this comment instead of making a new one.
“How do you operationalize that?”
“‘Snow is white’ is true if and only if snow is white.”
“If I may generalize from one example here...”
“I’m suffering from halo effect.”
“Warning: Dark Arts.”
“Okay, but in the Least Convenient Possible World...”
“We want to raise the sanity waterline.”
“You’ve fallen prey to the illusion of transparency.”
“Bought some warm fuzzies today.”
“What does the outside view say?”
“So the idea is that we make all scientific knowledge a sacred and closely guarded secret, so it will be treated with the reverence it deserves!”
“How could you test that belief?”
RATIONALISTS SAY ALL THE THINGS!
Solomonoff prior gives you 50%, that’s pretty cool! :D
I hope someone will use Alicorn’s (and other) quotes to make a good Eliza-bot. This could be an interesting AI challenge—write a bot that will get positive karma on LW! If there are more bots, the bot with highest nonzero karma wins.
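The karma-maximizing part is the hard half of that challenge, but the ELIZA half is easy to caricature. Here's a minimal keyword-matching responder in the spirit of ELIZA, seeded with quotes from this thread; the trigger patterns and the rule set are invented for illustration, not any real LW bot API:

```python
import random
import re

# Canned LW-isms keyed on trigger words, checked in order.
RULES = [
    (r"\bbeliev\w*", "How could you test that belief?"),
    (r"\bdefin\w*", "Wait, wait, this is turning into an argument about definitions."),
    (r"\butility\b", "My utility function includes a term for the fulfillment of your utility function."),
]
FALLBACK = ["Is that your true rejection?", "What does the outside view say?"]

def reply(message, rng=random):
    """Return the first rule whose pattern matches, else a stock quote."""
    for pattern, response in RULES:
        if re.search(pattern, message, re.IGNORECASE):
            return response
    return rng.choice(FALLBACK)

print(reply("I believe this bot will earn karma"))
# -> "How could you test that belief?"
```

Whether such a bot would actually clear zero karma is left as the empirical question the challenge poses.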
As a start, I copied all Alicorn’s lines into a Markov text synthesizer. Some of the best results were:
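For anyone who wants to reproduce this, a word-level Markov synthesizer is only a few lines of Python. The particular tool used above isn't specified, so this is a generic order-2 sketch: each two-word prefix maps to the words observed after it, and generation is a random walk over those transitions.

```python
import random
from collections import defaultdict

def build_chain(lines, order=2):
    """Map each `order`-word prefix to the words observed after it."""
    chain = defaultdict(list)
    for line in lines:
        words = line.split()
        padded = [None] * order + words + [None]  # None marks start/end
        for i in range(len(padded) - order):
            prefix = tuple(padded[i:i + order])
            chain[prefix].append(padded[i + order])
    return chain

def babble(chain, order=2, rng=random):
    """Walk the chain from the start state until an end marker is drawn."""
    state = (None,) * order
    out = []
    while True:
        word = rng.choice(chain[state])
        if word is None:
            return " ".join(out)
        out.append(word)
        state = state[1:] + (word,)

quotes = [
    "The map is not the territory.",
    "The map is not the sanity waterline.",
]
print(babble(build_chain(quotes)))
```

Wherever two quotes share a two-word window (here, "not the"), the walk can hop between them, which is where the mashed-up quotes come from.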
I burst out laughing while reading this, so of course my officemates wanted to know what was so funny.
I cannot remember the last time the gulf of inferential distances was so very very wide.
I visualized that being said simultaneously with the middle-finger gesture.
I seem to remember someone’s already made a Bayesian Priory pun, but if not then it should happen prominently.
EDIT: here
Wrong. Electronic old men are.
Mysteriousness is a cult, and I am running on condescending.
From another generator:
“I’m going to solve metaethics.” “I’m going, you’re going to found the Society for infanticide.”
“”Snow is white” is failing to solve psychology.”
“Wait, wait, “this is white” is a more technical explanation?”
“My utility function includes a semantic stopsign.”
“If keeping my current job has little XML tags on it that say the Least Convenient Possible World...”
“Sure, I’d take over the sanity waterline.”
“I’ll be the symbol with ice cream trees.”
“So after we take over the alternative universe that is the Least Convenient Possible World...”
“I want to tile the sanity waterline with the unit of a thing.”
If we had signatures on LW, this would be mine.
Surely you mean Eliezer-bot.
Should it be made, it will of course be known as Elieza.
But in any case I think you need to keep in mind that a blank map does not correspond to a blank territory.
I initially read the parent in a straightforward way, but then I noticed it is also a meta-joke.
Usually. It could.
What is your prior? (For Eliezer being empty.)
Hopefully they’d keep improving.
“My utility function includes a term for the fulfillment of your utility function.”
Awww… :)
I’m reminded of http://lesswrong.com/lw/21a/free_copy_of_feynmans_autobiography_for_best/
Gabe Newell (of Valve Software) wrote the following in an email to Yanis Varoufakis (an economist):
I like this one. Mind if I actually use it?
Go for it. I say it to recommend things to people. (Mostly one person.)
I’m still laughing when I think about this one.
Update: Still laughing and using it in conversations.
I actually want to film this, except I still think it has at least a 25% chance of turning out to be a horrible idea.
Could someone explain this reference?
It’s a metaphor used in Jonathan Haidt’s book The Happiness Hypothesis: the rider is the conscious or deliberative mind and the elephant is everything underneath.
More to the point, the analogy is used in one of Luke’s posts.
Is there an original source for this one?
Context.
Thanks Zack. I had a feeling I’d seen it before but couldn’t recall the details.