I’m imagining the CEO having a thought process more like...
- I have no idea how my team will actually react when we crack AGI—
- Let’s quickly Google ‘what would you do if you discovered AGI tomorrow?’*
- Oh Lesswrong.com, some of my engineering team love this website
- Wait what?!
- They would seriously try to [redacted]
- I better close that loophole asap
I’m not saying it’s massively likely that things play out in exactly that way, but even a 1% increased chance that we mess up AI alignment is quite bad in expectation.
*This post is already the top result on Google for that particular search
I love his books too. It’s a real shame.
“...such as imagining that an intelligent tool will develop an alpha-male lust for domination.”
It seems like he really hasn’t understood the argument the other side is making here.
It’s possible he simply hasn’t read about instrumental convergence and the orthogonality thesis. What high-quality, widely shared introductory resources do we have on those, after all? There’s Robert Miles, but you could easily miss him.