Whenever I see discussions about the specific mechanisms by which ASI might act against humanity, it seems like a proxy argument for or against the underlying position "ASI will/won't be that much smarter than humans."
Can it be complex without being messy?