In my experience, the timeline is not usually the source of disagreement. More often, people don't believe that AI would want to hurt humans, and think the paperclip-maximizer scenario is unlikely or outright impossible. See, e.g., this popular Reddit thread from yesterday.
I guess that would be premise number 3 or 4: that goal alignment is a problem that needs to be solved.
Yeah, you’re probably right. I was just biased, since the timeline is my own main source of disagreement with the AI-danger folks.