Wow! This post is particularly relevant to my life right now. On January 5th I start boot camp, my first day in the military.
MMO of the future lol (some swearing)
And just so I’m not completely off topic, I agree with the original post. There should be games; they should be fun and challenging and require effort and so on. AIs definitely should not do everything for us. A friendly future is a nice place to live in, not a place where an AI does the living for us so we might as well just curl up in the fetal position and die.
@ ac: I agree with everything you said except the part about farming a scripted boss for phat lewt in the future. One would think that by then they could code something more engaging. Have you seen LOTR...
Does that mean I could play a better version of World of Warcraft all day after the singularity? Even though it’s a “waste of time”?
What about a kind of market system of states? The purpose of the states would be to provide a habitat matching each citizen’s values and lifestyle.
-Each state will have its own constitution and rules.
-Each person can pick the state they wish to live in, assuming they are accepted under that state’s rules.
-The amount of resources and territory allocated to each state is proportional to the number of citizens that choose to live there.
-There are certain universal meta-rules that supersede the states’ rules, such as…
-A citizen may leave a state at any time and may not be held in a state against his or her will.
-No killing or significant non-consensual physical harm permitted; at most a state could permanently exile a citizen.
-There are some exceptions, such as limits on the decision-making power of children and the mentally ill.
Anyway, this is a rough idea of what I would do with unlimited power. I would build this, unless I came across a better idea. In my vision, citizens will tend to move into states they prefer and avoid states they dislike. Over time, good states will grow and bad states will shrink or collapse. However, states could also specialize: for example, you could have a small state with rules and a lifestyle just right for a small, dedicated population. I think this is an elegant way of not imposing a monolithic “this is how you should live” vision on every person in the world, yet the system will still kill bad states and favor good ones, whatever those attractors turn out to be. (A toy sketch of this sorting dynamic follows below.)
P.S. In this vision I assume the Earth is “controlled” (meta-rules only) by a singleton super-AI with nanotech, so we don’t have to worry about things like crime (forcefields), violence (more forcefields), or basic necessities such as food.
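To make the sorting dynamic concrete, here’s a minimal toy simulation in Python. Everything in it is invented purely for illustration: the states, the one-dimensional “lifestyle” preferences, and the matching rule. The point is just that free exit sorts citizens into the best-fitting state, and resources then follow population per the proportionality meta-rule.

```python
import random

random.seed(0)

# Hypothetical states, each offering one "lifestyle" point in [0, 1].
state_lifestyles = {"A": 0.1, "B": 0.5, "C": 0.9}

# Each citizen has a preferred lifestyle, also a point in [0, 1].
citizens = [random.random() for _ in range(1000)]

def best_state(pref):
    # Free exit (a meta-rule) means each citizen ends up in the
    # state whose lifestyle is closest to their own preference.
    return min(state_lifestyles, key=lambda s: abs(state_lifestyles[s] - pref))

population = {s: 0 for s in state_lifestyles}
for pref in citizens:
    population[best_state(pref)] += 1

# Resources and territory are allocated in proportion to population.
resources = {s: population[s] / len(citizens) for s in state_lifestyles}
print(population)
print(resources)
```

A state whose lifestyle nobody prefers ends up with zero population and zero resources, which is the “bad states shrink or collapse” part of the idea playing out mechanically.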
Um… since we’re on the subject of disagreement mechanics, is there any way for Robin or Eliezer to concede points/arguments/details without losing status? If that could be solved somehow, I suspect the discussion would be much more productive.
“...what are some other tricks to use?”—Eliezer Yudkowsky
“The best way to predict the future is to invent it.”—Alan Kay
It’s unlikely that a reliable model of the future could be made, since getting a single detail wrong could throw everything off. It’s far more productive to predict a possible future and implement it.
Eliezer, what are you going to do next?
“I think your [Eliezer’s] time would be better spent actually working, or writing about, the actual details of the problems that need to be solved.”
I used to think that, but now I realize that Eliezer is a writer and a theorist, not necessarily a hacker, so I don’t expect him to be good at writing code. (I’m not trying to diss Eliezer here, just reasoning from the available evidence and the fact that becoming a good hacker requires a lot of practice.) Perhaps Eliezer’s greatest contribution will be inspiring others to write AI. We don’t have to wait for Eliezer to do everything. Surely some of you talented hackers out there could give it a shot.
Slight correction. I said: “Saying that an argument is wrong because a stupid/bad person said it is of course fallacious, it’s an attempt to reverse stupidity to get intelligence.” I worded that sentence badly. I meant that stupid people saying something cannot make it false, and that usually, when people commit this fallacy, it’s because they are trying to claim that the opposite of the “bad” point is true. This is why I said it’s an attempt to reverse stupidity to get intelligence.
Basically, when we see “a stupid person said this” advanced as proof that something is false, we can expect a reverse-stupidity-to-get-intelligence fallacy right after.
I disagree with much of what is in the linked essay. One doesn’t have to explicitly state an ad hominem premise to be arguing ad hominem. Any non sequitur that happens to be designed to lower an arguer’s status is ad hominem in my book. Such statements have no purpose other than to plant a silent premise: “My opponent is tainted; therefore his arguments are bad.” One can make ad hominem attacks without actually stating them, by using innuendo.
On the other hand, ad hominem isn’t even necessarily a fallacy. Of course an argument cannot become wrong just because a stupid person says it, but we can expect that, on average, people with a bad track record in arguing will continue to argue poorly and people with good track records will argue well. In that sense we can set priors for someone’s arguments being right before hearing them. Just remember to update afterwards. We actually do this all the time, whether we admit it or not: we trust what someone with a PhD in physics has to say about physics more than what a creationist says. Saying that an argument is wrong because a stupid/bad person said it is of course fallacious, an attempt to reverse stupidity to get intelligence. However, expecting people who normally say stupid things to continue doing so is Bayes-compliant.
I see the ad hominem “fallacy” concept as more of an injunction, a hack if you will, for human reasoners. It reminds us to examine the substance of the arguments of people we disagree with instead of dismissing them for political reasons. A perfect Bayesian mind could set up priors for people being right, impartially examine their arguments, and update correctly without being swept up by political instincts. For humans, on the other hand, it might be more practical to focus exclusively on the substance and not the messengers, unless the gap in expertise is huge (e.g., a PhD physicist vs. a creationist on physics).
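Here’s the kind of update I have in mind, as a toy Python sketch. Every number in it is made up for illustration: a prior that the speaker is right, set from their track record, and a likelihood for how convincing their argument sounds.

```python
# Toy Bayes update: P(speaker is right | their argument sounds good).
# All probabilities below are invented for illustration.

def posterior_right(prior_right, p_good_if_right, p_good_if_wrong):
    """Bayes' rule: P(right | the argument sounds good)."""
    p_sounds_good = (prior_right * p_good_if_right
                     + (1 - prior_right) * p_good_if_wrong)
    return prior_right * p_good_if_right / p_sounds_good

# Same argument quality, very different track records:
print(posterior_right(0.90, 0.8, 0.3))  # PhD physicist on physics: ~0.96
print(posterior_right(0.05, 0.8, 0.3))  # creationist on physics:  ~0.12
```

The prior does real work, but it never pins the posterior to zero or one, which is exactly the difference between setting priors from track records and the outright dismissal the fallacy warns against.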
I don’t understand. Am I too dumb or is this gibberish?
“You can’t build build Deep Blue by programming a good chess move for every possible position.”
Syntax error: Subtract one ‘build’.
I wonder whether liars or honest folk are happier and/or more successful in life.
We are missing something. Humans are ultimately driven by emotions. We should look at which emotions a belief taps into in order to understand why people seek out or avoid certain beliefs.
I thought of some more.
-There is a destiny/God’s plan/reason for everything: i.e., some powerful force is making things the way they are, and it all makes sense (in human terms, not cold, heartless math). That means you are safe, but don’t fight the status quo.
-Everything is connected with “energy” (mystically): you, or special/chosen people, might be able to tap into this “energy”. You might glean information you normally shouldn’t have or gain some kind of special powers.
-Scientists/professionals/experts are “elitists”.
-Mystery is good: it makes life worthwhile, and appreciating it makes us human. As opposed to the view that destroying mystery is good.
That’s it for now.
-Faith: i.e., unconditional belief is good. It’s like loyalty. Questioning beliefs is like betrayal.
-The saying “Stick to your guns.”: Changing your mind is like deserting your post in a war. Sticking to a belief is like being a heroic soldier.
-The faithful: i.e., us; we are the best, and God is on our side.
-The infidels: i.e., them; sinners, barely human, or not even that.
-God: an infinitely powerful alpha male. Treat him as such, with all the implications…
-The devil and his agents: they are always trying to seduce you into sin. Any doubt is evidence the devil is seducing you to sin and succeeding. Anyone opposed to your beliefs is cooperating with, or being influenced by, the devil.
-Assassination fatwas: whacking people who are anti-Islam is the will of Allah.
-A sexually satisfying lifestyle is bad: this makes people more angsty (especially young men). This angst is your fault, and it’s a sin. To be less angsty you should be less sinful, ergo fight your sexual urges. And so the cycle of desire, guilt, angst, and confusion continues.
-No masturbation: see above.
-You are born in debt to Jesus because he died for your sins 2000 years ago.
That’s all I could think of right now.
OK, maybe my last post was a bit harsh (it’s tricky to express oneself over the Internet). I will elaborate further. Eliezer said:
“So here are the traditional values of capitalism as seen by those who regard it as noble—the sort of Way spoken of by Paul Graham, or P. T. Barnum (who did not say “There’s a sucker born every minute”), or Warren Buffett:”
I don’t know much about the latter two but I have read Paul Graham extensively. It sounds like a strawman to me when Eliezer says:
“I regard finance as more of a useful tool than an ultimate end of intelligence—I’m not sure it’s the maximum possible fun we could all be having under optimal conditions. I’m more sympathetic than this to people who lose their jobs, because I know that retraining, or changing careers, isn’t always easy and fun. I don’t think the universe is set up to reward hard work; and I think that it is entirely possible for money to corrupt a person.”
So, coming back to Paul Graham: while reading his essays I never got the impression that he…
-regards finance as the ultimate end of intelligence,
-thinks capitalism is the maximum possible fun we could all be having under optimal conditions,
-is not sympathetic to people who lose their jobs,
-thinks the universe is set up to reward hard work (proportionately, as if by physical law),
-or that money doesn’t corrupt people.
That’s why I think the post gives off the vibe of a strawman. Look, capitalism isn’t perfect, but you need better arguments to dismiss it. Am I being too harsh again? Alright, maybe Eliezer isn’t trying to dismiss capitalism in his post, but then what is he actually trying to say? All I got from the post was a weak attempt at refuting things nobody actually believes. If I’ve misunderstood, please explain.
The post wasn’t narrow enough to make a point. Eliezer stated:
“I regard finance as more of a useful tool than an ultimate end of intelligence—I’m not sure it’s the maximum possible fun we could all be having under optimal conditions.” Are we talking pre- or post- a nanotech OS running the solar system? In the latter case most of these “values” would become irrelevant. However, given the world we have today, I can confidently say that capitalism is pretty awesome, and there is massive evidence to back up that claim.
It smells like Eliezer is trying to refute a strawman. Specifically, I mean that there are probably few intelligent people who think of capitalism as a win-win all around. Capitalism is a compromise; it’s the best we could come up with so far.
Good post, but this whole crisis of faith business sounds unpleasant. One would need Something to Protect to be motivated to deliberately venture into such a masochistic experience.