To what nugget of rationality does this point?
Meetup : Book Mini-Review: Doug Hubbard’s How to Measure Anything
Meetup : First Meetup in Jacksonville, FL
The idea that a self-imposed external constraint on action can actually enhance our freedom by releasing us from predictable and undesirable internal constraints is not an obvious one. It is hard to be Ulysses.
-- Reid Hastie & Robyn Dawes (Rational Choice in an Uncertain World)
The “Ulysses” reference is to the famous Ulysses pact in the Odyssey: Ulysses had his crew bind him to the mast so that he could hear the Sirens’ song without being able to act on it.
While I don’t read the scientific literature that much, I do make formal predictions pretty often, typically any time I notice something I’m interested in that will be easy to check in the future.
Will I get to bed on time today? Will I be early for the meeting tomorrow? Etc.
I second the anecdotal evidence that this is a “live” exercise. Sidenote: it took me way too long to realize I needed to write all my predictions down. I spent a few weeks thinking I was completely excellent at predicting things.
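Hypothetically, “writing all my predictions down” could be as simple as a notebook, but here is a minimal Python sketch of the same record (all names and structure are my own invention, not anything anyone here actually uses):

```python
from dataclasses import dataclass, field

@dataclass
class PredictionLog:
    """A written record of predictions, so memory can't grade itself."""

    # Each entry is [statement, stated confidence in [0, 1], outcome or None].
    entries: list = field(default_factory=list)

    def record(self, statement: str, confidence: float) -> int:
        """Log a prediction and return its index for later resolution."""
        self.entries.append([statement, confidence, None])
        return len(self.entries) - 1

    def resolve(self, index: int, came_true: bool) -> None:
        """Mark a logged prediction as having come true or not."""
        self.entries[index][2] = came_true

    def calibration(self) -> dict:
        """Bucket resolved predictions by stated confidence (to one decimal)
        and report how often each bucket actually came true."""
        buckets: dict = {}
        for _statement, confidence, outcome in self.entries:
            if outcome is None:
                continue
            key = round(confidence, 1)
            hits, total = buckets.get(key, (0, 0))
            buckets[key] = (hits + int(outcome), total + 1)
        return {k: hits / total for k, (hits, total) in sorted(buckets.items())}


log = PredictionLog()
i = log.record("I get to bed on time today", 0.8)
log.resolve(i, False)
print(log.calibration())  # {0.8: 0.0} -- one 80% prediction, zero came true
```

The point of calibration() is just that “I’m completely excellent at predicting things” becomes checkable against the log instead of against memory.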
I endorse (with the possibly-expected caveat about Wilson score ranking).
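For anyone who hasn’t run into the caveat: sorting by the raw fraction of upvotes rewards tiny sample sizes, and the usual fix is to sort by the lower bound of the Wilson score interval instead. A sketch of that bound (my own, assuming the standard formula, not anything taken from the LW codebase):

```python
import math

def wilson_lower_bound(successes: int, trials: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for a Bernoulli proportion.

    Sorting by this bound, rather than by successes / trials, keeps items
    with only a handful of observations from dominating the ranking.
    z = 1.96 corresponds to a 95% confidence level.
    """
    if trials == 0:
        return 0.0
    p_hat = successes / trials
    z2 = z * z
    centre = p_hat + z2 / (2 * trials)
    spread = z * math.sqrt((p_hat * (1 - p_hat) + z2 / (4 * trials)) / trials)
    return (centre - spread) / (1 + z2 / trials)
```

For example, 9 out of 10 has a raw ratio of 0.90 but a lower bound of about 0.60, while 90 out of 100 keeps a lower bound of about 0.83, so the larger sample ranks higher.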
Unfortunately, I can’t (don’t know how to?) hack the LW backend. Is that something I can look into?
I beseech you, in the bowels of Christ, think it possible that you may be mistaken.
-- Oliver Cromwell
Previously posted two years ago. I’m curious whether some of it bears repeating. Is there an accepted timeframe for duplicates?
That’s an interesting prediction. Have you tried it? Can you predict what you’d do after filling the notebook?
In my imagination, I’d probably wind up in one of two states:
Feeling tricked and asking myself “What was the point of that?”
Feeling accomplished and waiting for the next instruction.
I’m more an outsider than a regular participant here on LW, but I have been boning up on rhetoric for work. I’m thrown by this in a lot of ways.
I notice that I’m confused.
Good for private rationality, bad for public rhetoric? What does your diagram of the argument’s structure look like?
As for me, I want this to be the most important conclusion of the summary.
I don’t get it that way, because the evidence for the statement comes after it, and later on it is restated in a diluted form.
Do you want a different statement as the most important conclusion? If so, which one? If not, why do you believe the argument works best structured this way, as opposed to, e.g., an alternative that puts the concrete evidence farther up and the abstract statement “Most goals are dangerous when an AI becomes powerful” somewhere toward the end?
Related point: I get frequent feelings of inconsistency when reading this summary.
I’m encouraged to imagine the AI as a super committee of “...”,
then I’m told not to anthropomorphize the AI.
Or
I’m told the AI’s motivations are what “we actually programmed into it”,
then I’m asked to worry about the AI’s motivation to lie.
Note that I’m talking about a rhetorical, i.e. surface-level, feeling of inconsistency here.
You seem like a nice guy.
Let’s put on a halo. Isn’t the easiest way to appear trustworthy to first appear attractive?
I was surprised this summary didn’t produce emotions around this cluster of questions:
Who are you?
Do I like you?
Do I respect your opinion?
Did you intend to skip over all that? If so, is it because you expect your target audience already has their answers?
Shut up and take my money!
There are so many futuristic scenarios out there. For various reasons, these didn’t hit me in the gut.
The scenarios painted in the paragraph that starts with “...” are very easy for me to imagine.
Unfortunately, that works against your summary for me. My imagination consistently conjures human beings.
Wall Street banker.
Political lobbyist for an industry that I dislike.
(Nobody comes to mind for the “replace almost every worker in the service sector” scenario.)
Chairman of the Federal Reserve.
Anonymous Eastern European hacker.
The feeling that “these are problems I am familiar with, and my society is dealing with them through normal mechanisms” makes it hard for me to feel your message about novel risks demanding novel solutions. Am I unique here?
Conversely, the scenarios in the next paragraph, the one that starts with “...”, are difficult for me to seriously imagine. You acknowledge this problem later on, with “...”.
Am I unique in finding that dismissive and condescending? Is there an alternative phrasing that takes my humanity into account yet still gets me afraid of this UFAI thing? I expect you have all gotten together, brainstormed scenarios of terrifying futures, trotted them out among your target audience, kept the ones that caused fear, and iterated a few times. I just want to check that my feelings are in the minority here.
Break any of these rules
I really enjoy Luke’s post here: http://lesswrong.com/lw/86a/rhetoric_for_the_good/
It’s a list of rules. Do you like using lists of rules as springboards for checking your rhetoric? I do. I find my writing improves when I try both sides of whichever rule I’m currently following or breaking.