I think he needed to rehearse this two more times and do warm-ups before going onstage. The middle was great though :D
The section on AI could be refined a bit for fewer audience prerequisites. For example, you could even drop the idea of self-improvement and singularity:
“At some point, it’s likely that someone is going to create a very powerful artificial intelligence—an AI. This would be a program running on silicon that can think faster than a human—it would be a great planner, a great scientist, maybe even great at programming, so it could upgrade itself. If this AI wants to help humanity, this will be very good, but if it doesn’t, this has the potential to be a catastrophe. If the AI does not value human freedom, it will end up harming it—it’s like pollution, where if you don’t carefully make things non-toxic, you hurt the environment. If the AI doesn’t care about love or art or teamwork, it will do things that are toxic to them, just because it’s not being careful. So the safety-conscious person says ‘let’s not take the risk—this is too dangerous a toy to play with’.” [Insert rest of talk]
Hello Lesswrongers: besides commenting on the original talk linked above, please take the time to do what Manfred just did and suggest improvements for the TED talk. If it goes global, it will have many more minutes, and I could add suggested content.
But it only goes global if you actually go there and comment on it.
Thanks Manfred