If the front-page link to Eliezer’s last post “Posting on Politics” is broken for you (it might be fine in your time zone), it’s here.
Recovering_irrationalist
Surely there’s more than social conformity/conflict aversion at work here? In the experiment in the video, an expectation of pattern continuation is set up. For most questions, the four spoken words the subject hears before responding do correspond to the apparently correct spoken response. I’d expect subconscious processes to start interpreting this as an indicator of the correct answer regardless of social effects, and to be influenced accordingly, at least enough to cause confusion, which would then increase susceptibility to the social effects.
I’d also expect this effect to be reduced where the subject writes down his answers, since that takes the close connection between hearing spoken numbers and speaking spoken numbers out of the equation.
Eliezer: Hindsight bias? No crazier to believe at the time than many truths.
Hey Betty, your disease was given to you by countless little flying monsters, as many as the sands in the desert, but no one can see them. And they make babies by tearing themselves in half. Most of your ancestors were like that.
I hope this post is the start of a series. My main problem is not managing to actually do what I know perfectly well in my head would be the rational thing.
James: “springboard procrastination”
I’m suspicious, but much of this rings true. Please tell or link to more, I found nothing on your blog.
(Just keep an eye out, and you’ll observe much that seems to confirm this theory...)
I hope everyone was paying attention to that bit :-)
Coming tomorrow: How to resist an affective death spiral.
Please include judging how much to resist what may partly be due to the spiral, so as not to overcompensate. Sometimes a “Great Thingy” is genuinely great.
TGGP: For anyone who hasn’t read it yet, The Gandhi Nobody Knows.
A rebuttal is here. Both are flawed, but I don’t like believing revelations before hearing counterarguments.
Such a genie might already exist.
You mean GOD? From the good book? It’s more plausible than some stories I could mention. GOD, I meta-wish for an ((...Emergence-y Re-get) Emergence-y Re-get) Emergency Regret Button.
As long as you’re wishing, wouldn’t you rather have a genie whose prior probabilities correspond to reality as accurately as possible? I wouldn’t pick an omnipotent but equally ignorant me to be my best possible genie.
TGGP: What did you think of it? I agree up to the Socrates Universe, but think the logic goes downhill from there.
@Eliezer: Each Nazis/genocide mention adds to the risk of thread derailment. It shouldn’t, but it does. I’d put them next to Quantum Mechanics on the list of things to avoid explaining things with.
Me: AGI is a William Tell target. A near miss could be very unfortunate. We can’t responsibly take a proper shot till we have an appropriate level of understanding and confidence in our accuracy.
Caledonian: That’s not how William Tell managed it. He had to practice aiming at less-dangerous targets until he became an expert, and only then did he attempt to shoot the apple.
Yes, by “take a proper shot” I meant shooting at the proper target with proper shots. And yes, practice on less-dangerous targets is necessary, but it’s not sufficient.
It is not clear to me that it is desirable to prejudge what an artificial intelligence should desire or conclude, or even possible to purposefully put real constraints on it in the first place. We should simply create the god, then acknowledge the truth: that we aren’t capable of evaluating the thinking of gods.
I agree we can’t accurately evaluate superintelligent thoughts, but that doesn’t mean we can’t or shouldn’t try to affect what it thinks or what its goals are.
I couldn’t do this argument justice. I encourage interested readers to read Eliezer’s paper on coherent extrapolated volition.
@Doug & Gray: AGI is a William Tell target. A near miss could be very unfortunate. We can’t responsibly take a proper shot till we have an appropriate level of understanding and confidence in our accuracy.
PS. Where they can communicate, I’d worry more about rogue evolution in nanobot software than in hardware. Huge replication potential and speed, hi-fi heritability through many iterations, etc., and then if a half-intelligent virus hits the fabricator software...
Please could you post a link to Perry’s article? I couldn’t find it.
This is, to me, the most frightening thing about grey goo or nanotechnological weapons—that they could eat the whole Earth and then that would be it, nothing interesting would happen afterward.
@Eliezer: So unless that goo can already get off-planet, it won’t ever? Good! Personally, I’m more scared by things that can eat the universe, like UFAI. If it’s only us that gets eaten, someone else can step up before the last star burns out.
@Others: All the more reason to support FAI research. The longer it takes to get it right, the more time for someone less careful to crack recursive self-improvement.
@Stefan: I enjoyed your book and was fascinated by your FAI perspective, but your comments here could be read as overly self-promoting, which would be counterproductive. An evil, paranoid maniac might even imagine you write comments to maximize how many links to your blog you can cram onto a page! Limiting the links to your own work might curb such insanity in your audience.
In whatever facets I sounded about as “knowledgeable”, “smart”, “honest”, or “self-assured” then as now, you might take these facets into account in deciding whether someone’s arguments are worth your time to read, but you shouldn’t take them into account in deciding whether the person is right.
Agreed. Having said that, I do find those facets to correlate with truth, but the correlation flattens out at high values. Besides, the first two would be hard for me to judge well between your 1997 and 2007 selves, for obvious reasons. Maybe with the right effort my 2012 self could get close enough to tell.
Whatever it is that caused you to reject most of my old self’s beliefs regardless, is what’s doing the actual work of discriminating truth from falsehood, not those other perceptions.
That only works if your new self’s views are true, rather than just closer to the truth, or better argued, or less alarm-bell-raising, or fitting better with how my mind works, or what I already believe, or what I want to believe, etc. That was my point.
Don’t worry, it’s my neurosis not yours. :-)
While I wish I had something reassuring to say on this subject, you should probably be quite disturbed if you find my work from 1997 sounding as persuasive as my work from 2007.
But I said...
Although I was convinced by very few of those older mistakes (before I searched and found retractions), that could just as easily mean the new arguments got super persuasive rather than super accurate.
In practice, all my comments mean is that even though your (new) arguments nearly always seem right to me once I study and investigate them, I won’t let myself lazily suspend critical judgment and investigation of your beliefs before adopting them. I hope you agree that’s a good thing.
If someone changes their positions frequently, AND they’re very confident in their positions, that’s a bad sign.
I don’t think he changes his mind too frequently, and being overly confident at the time of now-abandoned positions isn’t unusual. My point was that in Eliezer’s case a knowledgeable, smart, honest, and self-assured argument doesn’t imply strong evidence of truth, because those qualities appeared in arguments that turned out to be false.
To be honest, I think I was hoping someone would leap to his defense and crush my argument, giving me permission to be as sure about the beliefs of his that I’ve adopted as he is, whereas what I should do is keep a healthy amount of skepticism and resist any urge to read “Posted by Eliezer” as “trust this”.
PS. I know arguments should be judged on their own worth and not on who made them, but there are other factors.
Excellent post, especially the important point that neither conscious dishonesty nor a specific conspiracy is required for this to happen.
The best doctor isn’t the one who just gives patients whatever treatment they ask for. It may be the one who gives them what they would ask for if they were equally informed.
In Tony Blair’s first term, he made policy largely based on popular opinion. His government asked focus groups their priorities, railways scored low, money was pulled out of the railways, the railway infrastructure fell apart, and people were understandably angry. Joe Public can’t be expected to have the knowledge and experience to judge the possible implications of every political decision.
Hear, hear. But it would weaken both sides of those currently in power, so...