This is a test
this is still a test
testing
This has helped me stamp out the last vestiges of my arachnophobia. It’s a window with a realistically moving spider that follows your cursor. It helped because I knew I could close the window with a single click at any time, and because I had complete control over its movement. That amount of control gave me enough self-confidence to deal with real-life spiders.
There was a comment by KrisC that lists various useful aspects of biodiversity: http://lesswrong.com/lw/2l6/taking_ideas_seriously/2fna
July 14th is the national holiday in France and July 21st is the national holiday in Belgium; expect shops to be closed, and public transport may stick to the “Sunday and holiday” schedule.
This site has a checklist of things you might need before you leave:
http://goeurope.about.com/library/bl_b4_u_go_short.htm
A lot of European countries (France, for example) have toll roads: before you hit a large motorway you get a ticket, and when you leave you have to pay depending on how far you travelled. Other countries (Italy, I believe) require you to buy some kind of sticker; the more expensive the sticker, the longer you can drive around (ranging from weeks to months). Not only can this take a bite out of your budget, but you should also expect traffic jams when everybody has to pay up, especially during the summer.
Thank you, Louie, great work!
She does mean a “close” war, and she’s afraid we won’t be able to escape, or that we’ll get separated. Even if we do make it to another country, we’d have to live in a refugee camp under horrible conditions.
After reading the current comments I’ve come up with this:
1) Restrict the AI’s sphere of influence to a specific geographical area. (Define the area in several different ways! You don’t want to confine the AI to “France” only to have it annex the rest of the world, or define the area by GPS coordinates only to have it hack the satellites so they report different ones.)
2) Tell it not to make another AI. (This seems a bit vague, but I don’t know how to make it more specific. Maybe: all computing must come from one physical core location. This could prevent the AI from tricking someone into isolating a backup, effectively making a copy.)
3) Set an upper bound for the amount of physical space all AIs combined in that specific area can use.
4) As a safeguard, if it does find a way around rule 2, require it to incorporate the above rules, unaltered, into any new AI it makes.
I’m studying to be a (biology) teacher, and learning to use the didactic method is a big part of our training. In fact, this entire partim (December until now) has dealt with giving clear instructions, asking the right questions, etc. We’d give classes to each other and then let the other students point out anything that wasn’t crystal clear. Whenever I study something, I try to write it down as if I were explaining it to a six-year-old child. If I can’t, then there is still something I don’t quite understand.
Because they are dead and are meant to stay that way, while heads frozen in cryonics are meant to be revived. Severed heads also have faces, making them more human-like than brains in vats, so they can evoke a greater emotional reaction.
Sorry about that, I added a summary.
I do the same thing.
Yes, that list has a lot of the answers I was looking for. However, for my younger self, breaking from religion meant making my own moral rules, so there is a good chance I would have rejected it as just another text trying to control my life. (Yes, my younger self was quite dramatic.)
That reminds me of something my sister did at her last job. Every time there was trouble, she would get an email about it. Clearly, a problem appearing and receiving an email were correlated, so her solution was to terminate her mail account. She called it the Schrödinger’s cat approach to life: as long as you aren’t sure there is a problem, there is a chance there isn’t.
Thanks for this post. I never thought about the overhead ratio like that before; it looks like I’ll be reevaluating the charities I support.
Agreed. I tried to explain Less Wrong to my father, and now he thinks we’re some doomsday cult concerned that AIs will wipe out humanity and rearrange our atoms into smiley faces. He concluded that everyone here has “way too much imagination,” and now he won’t listen to anything that comes from this blog.
Maybe it would be interesting to ask deaf people how they think. Sign language, written words, purely visual, …?