Perhaps you overestimate the extent to which Google search results for a term reflect the importance of the concept that the term refers to.
I note that:
The best posts on ‘rationality’ are among those that do not use the word ‘rationality’*.
Like ‘Omega’ and ‘Clippy’, AI is a useful agent to include when discussing questions of instrumental rationality: it allows us to consider highly rational agents in the abstract, without all the bullshit and normative dead weight that gets thrown into conversations whenever the agents in question are human.
Google site:lesswrong.com “me” 5,360 results
Google site:lesswrong.com “I” 7,520 results
Google site:lesswrong.com “it” 7,640 results
Google site:lesswrong.com “a” 7,710 results