I watched the Joe Rogan interview with him where he disavowed his book's political leanings. I'm a left-liberal who used to hate him because of his book, but after watching that interview I like him.
RedErin
Interesting idea. I have a strong fear of death and also despite my best effort, I am prone to procrastination.
But my procrastination diverts my attention from the things I really want to be doing. So it wastes my time more than anything.
I've liked all of Tim Urban's articles. Very thorough and in-depth.
Dogs were domesticated in such a way that their very existence depends on them being nice to humans.
If it were me, I would have let you out.
Your misanthropy reminds me of myself when I was younger. I used to think the universe would be better off if there were no more humans. I think it would be good for your mental health if you read some Peter Diamandis or Steven Pinker's "The Better Angels of Our Nature". They talk about how things are getting better in the world.
Leadership?
It’s a rare quality. I didn’t like his book, but I did like him in interviews he’s done. People have a tendency to rally behind anyone who leads.
So if an AI were created that had consciousness and sentience, like in the new Chappie movie, would they advocate killing it?
Whoa, someone actually letting the transcript out. Has that ever been done before?
Yes, but only when the gatekeeper wins. If the AI wins, then they wouldn’t want the transcript to get out, because then their strategy would be less effective next time they played.
I used to have severe social anxiety. A lot of factors helped me get over it. But talking to people was definitely up there. I’m not scared of people today, but my social skills are still a bit lacking.
Maybe this is a test for Harry. V wants Harry to find a way to win.
The Gatekeeper usually wants to publish if they win, to brag. Their strategy isn’t usually a secret, it’s simply to resist.
It just seemed like you had a great answer to each of his comments. You chipped away at my reservations bit by bit.
Although I do think an FAI is more likely than most people do.
Thanks for posting the text. It was very entertaining.
I didn't see Ray Kurzweil's name on there. I guess he wants AI asap, and figures it's worth the risk.
I wouldn’t say pouring money into the developing world is a tiny drop.
Bill Gates's 2014 Annual Letter gives evidence that it's a very good investment.
But it is unethical to allow all the suffering that occurs on our planet.
I'm going to provide a paperclip scenario below; please tell me if you think it's impossible.
Imagine a struggling office supplies company that's pressuring its employees to produce innovative results or they'll be fired. They hired an AI guy who has yet to produce any significant results, and in a meeting the boss basically tells him to produce something by the end of the month or he's out. Our AI guy is a gifted coder, but lacks a lot of common sense; he's also quite poor, and is desperate to give the company an edge so he can save the day. In a flash of insight, combined with some open source deep learning resources (like Kaggle), he's able to create the first recursively self-improving AI, and he tests it out by telling it to maximise the amount of paperclips his factory makes.
The AI is going to be stupid, but it's going to quickly find out how to turn the world into paperclips. It's not going to be a general intelligence. But it doesn't have to be to cause problems.
This one should help you empathize with other people more.
-Neil Gaiman