“three percent of people completely unable to form mental images”

I don’t have a photographic memory or anything, but I find it hard to believe some people don’t actually have imaginations. How could they even get through everyday life? Something’s got to be wrong here. It kind of reminds me of those people who can’t dream in color. Weird.
Houshalter
No matter how hard I try, I can’t see the dancer on her right foot. It’s not possible!
“Though the preference is to build safe AI first.”
Well, that has always been a concern of mine. People think they can draw a line between safe and unsafe AIs, but I think the “safe” one would actually be more dangerous. Think about it: the safe one has all the properties of a regular AI, except the only way of making it safe is to preprogram it with things it can’t do. There is always going to be a situation where those rules do more harm than good.
Err, what does that mean? Do you mean that the dancer is a bad example?
That kind of helps, but it seems… I dunno, a little “artificial”. I don’t see it as though it was actually that way, and if I stop concentrating it’s still on the left foot. Come to think of it, I never even saw it on the right foot. ERRR!
Well, then it’s pretty easy, isn’t it? You set the fitness function as predicting what you would want it to do. It then does its best to predict all of your values, desires, and decision making. I suppose that would only work for one person, but it can be applied on a larger scale: suppose you have a code of ethics that a group like SIAI comes up with and approves. You then feed it to the intelligence and test it under various simulations to make sure it’s interpreting them correctly and learning how to apply them. The thing is, all you have to do to make it unsafe is remove those goals, go back to the basic program, and give it orders that require it to do bad things, like a military robot. Boom goes the world.
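To make that concrete, here’s a toy sketch of the idea in Python. Everything in it is made up for illustration (the scenarios, the score_policy function, the two candidate policies); it’s nothing like a real implementation, just the shape of “fitness = how well the system predicts what the user would want”:

```python
# Toy sketch: fitness of a candidate policy = how often its predictions
# match a record of what the user actually chose. All names and scenarios
# here are hypothetical, purely to illustrate the comment above.
from typing import Callable, Dict

# Hypothetical record of what the user chose in a handful of situations.
user_decisions: Dict[str, str] = {
    "low battery": "recharge",
    "obstacle ahead": "go around",
    "asked to stop": "stop",
}

Policy = Callable[[str], str]

def score_policy(policy: Policy) -> float:
    """Fraction of recorded situations where the policy predicts
    what the user actually wanted."""
    hits = sum(1 for s, wanted in user_decisions.items() if policy(s) == wanted)
    return hits / len(user_decisions)

def deferential(situation: str) -> str:
    # Predicts from the record; asks when it doesn't know.
    return user_decisions.get(situation, "ask the user")

def oblivious(situation: str) -> str:
    # Ignores the user entirely.
    return "keep going"

print(score_policy(deferential))  # 1.0 -> high fitness
print(score_policy(oblivious))    # 0.0 -> low fitness
```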
I’m right handed, and see her on her left foot, always. Also, when I look at it, it looks like the camera is locked and the dancer is moving. Do you have to switch how you view the camera to see it the other way?
No, but I thought it might be a factor in seeing her on her right foot. Maybe it’s just me, but I don’t even see how it could be physically possible for her to be on her right foot if the camera’s locked. This is frustrating.
But that is a good point; we should do a survey and see what kinds of people see it which way. Maybe it’s just random, but I’m guessing there’s a reason people see it differently. It reminds me of people with motion sickness. As I understand it, there’s nothing different about their brain or visual system; it’s just the way they grew up and their neurons were trained. When the image is shaky, their mind can’t compensate and it confuses them. And there are different degrees of it too.
I’m having a hard time figuring out how to explain the phenomenon I just witnessed. I went back to that link feeling I might have missed something and gave it another try. In the first image she is clearly spinning on her left foot, the same way I always see it. Then I looked at the second image. Same thing. On the third image, I thought I was seeing it the way I always do, but when I looked at which foot was on the ground, it was her right! Since the foot in the middle never moves except up and down, I wondered how it could possibly be the same image.

So I compared it to the one in the middle to see if it really was the same, and both rotate exactly the same! Every pixel is the same (except the ones with color). How could this be possible? So I looked back at the first and compared it to the one in the middle. Same thing: every pixel where it should be, and her left foot was clearly the one moving up and down. How could this be?

I thought about it and came up with the idea that if the backside was similar enough to the front, you could switch which side you saw as front/back, and left/right would change accordingly. But this is not the case; they are far too different. I am completely dumbfounded by the physics behind this. Looking at it again, I can focus on the spot between the first and the second and watch them move the same, then switch to focusing on the spot between the second and third, making her suddenly stop and turn the other direction. By switching back and forth really fast, she quickly wobbles back and forth. I’m beginning to wonder if magic is possible.
Is there a pausable version of this? If I could freeze the individual frames, I might get it. I refuse to believe her backside is that similar; just look at all the detail. Also, the part that really bugs me is not her direction of spin, but the fact that I can’t even tell which way is right or left.
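In case anyone else wants to freeze the frames: a quick Python sketch using Pillow, assuming you’ve saved the animation locally (the filename dancer.gif is just a placeholder, not the actual file from the site):

```python
# Rough sketch: dump each frame of the animation to a still image so you
# can step through it. Assumes Pillow is installed (pip install pillow)
# and the GIF has been saved locally; "dancer.gif" is a placeholder name.
from PIL import Image, ImageSequence

gif = Image.open("dancer.gif")
for i, frame in enumerate(ImageSequence.Iterator(gif)):
    frame.convert("RGB").save(f"frame_{i:03d}.png")  # one still per frame

print(f"Saved {gif.n_frames} stills; step through them in any image viewer.")
```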
Huh. I looked at each frame, and found that when she is facing the observer, she is clearly on her left foot. Weird.
I used to debate creationists online. These people are typical of the crowd: “I’m right, you’re wrong, and as long as I have the last word, I win.” It’s like trying to play chess with pigeons. They knock over the pieces, shit on the board, show no signs of intelligence, then fly back to their nest and declare victory. At some point you have to ask yourself: “Why do I bother?”
“Indeed, but that wasn’t the problem this post was trying to solve.”
“Why do long, uninspiring, and seemingly-childish debates sometimes emerge even in a community like LessWrong? And what can we do about them?”
So it is mostly focused on LW, but there is a deeper issue here: what are you supposed to do when engaged in a childish or fruitless argument? I have yet to hear a good universal answer. Maybe “just walk away” is good enough most of the time, but sometimes it’s not. Sometimes you can’t walk away. Sometimes the issue is important.
[comment deleted]
Yeah, but even beyond the internet, this is a universal problem. Sometimes it’s more than just a hobby to try to convince someone to do the right thing. Virtually any political debate would be a good example. And the fact that our court system is so inconsistent only testifies to this problem.
I’m a bit confused as to what Soar is and how it works, but it does sound very interesting. Of course, trying to model the way the human mind works is the opposite of what we should be doing. Imagine all of the shortcomings of human reasoning highly exaggerated in a computer simulation.
If you haven’t already read about CEV, I’m pretty impressed. There are some failure modes that would crop up if you’re not careful, but it’s not far from a prima facie workable idea.
Never heard of CEV before. I might look into it later, but I don’t have enough time to read it all right now. If it’s like what I suggested, with the fitness function being to accurately predict the user’s long-term and short-term goals, I was going to do that in an older AI project that never got finished.
Generally speaking, a smarter-than-human intelligence with strong goals wouldn’t passively allow people with different goals to modify its goal system. After all, that would prevent it from achieving the goals it has.
Well, once you create an artificial intelligence, then what? If you release the source code or the principles behind its design, anyone can build one with whatever goals they want. You’re assuming that the only way another one could pop up is if the original was “hijacked” and pirated, but this probably won’t be the case. I am currently working on building the simplest possible self-improving system with someone else over the internet. It’s for a currently-in-development higher-level programming language which will (hopefully :P) translate higher-level instructions into source code and learn from the mistakes that users point out. Since it is abstracted from the real world and confined to just matching input with output, there really isn’t any danger of it taking over the world. Although now that I think about it, it could theoretically write a better version of itself as a virus into an unsuspecting user’s program. Uh-oh, back to the drawing board :(
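For the curious, the core loop is roughly this (heavily simplified; the phrasebook dictionary is my own stand-in, since the real project’s translation mechanism isn’t described here):

```python
# Heavily simplified sketch of the "translate instructions to code, learn
# from user corrections" loop described above. The dictionary lookup is a
# stand-in; the actual project's translation mechanism isn't specified here.
translations = {"print hello": 'print("hello")'}  # instruction -> code

def translate(instruction: str) -> str:
    return translations.get(instruction, f"# TODO: unknown instruction: {instruction}")

def correct(instruction: str, fixed_code: str) -> None:
    """A user points out a mistake; remember the fix so it isn't repeated."""
    translations[instruction] = fixed_code

print(translate("print goodbye"))   # unknown -> TODO placeholder
correct("print goodbye", 'print("goodbye")')
print(translate("print goodbye"))   # now translates correctly
```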
Well, then the answer is simple. Instead of setting the goal as doing what you would want at that specific point in time (which might actually work, assuming you didn’t want your will to be modified to want something that’s cheap), you set it to do what it thinks you, at the time you created it, would want. The you at creation time would want it to do what the future you would want it to do, but not to perversely modify you into wanting weird things (like death, which is the cheapest thing to satisfy). Problem solved, although your AI is going to need a lot of background knowledge and intelligence to actually pull this off.
You haven’t heard of the AI Box Experiment yet, and that’s just one failure mode.
Well, the AI has to have a goal that would make it want out of the box, or in my case its isolated program. Is there any way to preprogram a goal that would make it not want out of the box? E.g., “under no circumstances are you to try in any way to leave your isolated and controlled environment.”
If it’s self-improving and smarter than human… then its goals get achieved. If you can tell that allowing other people to run their own versions of the AI could lead to disaster, then the AI can realize this as well, and act to prevent it.
IMO the most likely scenario is that the first transhuman intelligence takes over the world as an obvious first step to achieving its goals. This need not be a bad thing; it could (for instance) take over temporarily, institute some safety protocols against other AIs and other Bad Things, then recede into the background to let us have the kind of autonomy we value. The future all depends on its goal system.
This sounds like a very, very bad idea, but when I think about it I realise it’s the only way to ensure an AI apocalypse will never happen. My idea was that if I ever managed to create a workable AI, I would create a secret and self-sufficient micronation in the Pacific. It just sounded like a good idea ;)
New here :(
But how do they plan to stop an AI apocalypse, or is that one of those things they haven’t figured out yet? I think the best bet would be to create AI first, then use it to make safe AI as well as create plans for stopping an AI apocalypse.