I’m Jérémy Perret. Based in France. PhD in AI (NLP). AI Safety & EA meetup organizer. Information sponge. Mostly lurking since 2014. Seeking more experience, and eventually a position, in AI safety/governance.
Extremely annoyed by the lack of an explorable framework for AI risk/benefits. Working on that.
Let’s see if your post has successfully overcome my mental filters (at the very least, I clicked). Here’s my reformulation of your claims, as if I had to explain them to someone else.
You need special effort to grab humans’ attention
Humans can’t process all the words thrown at them, so they select “impressive” content
You need several tries to transmit knowledge properly
Beyond being impressive, words need to be “relevant” to transmit knowledge efficiently
Words can’t create content that is both perfectly impressive and relevant
Being very impressive doesn’t guarantee relevance
Content being impressive to you doesn’t make it more relevant to you
This is a toy model; humans also have incentives shaping which content gets thrown at others and which doesn’t
Now that I’ve written the points above, I look again at the “what if” part at the end and say, “oh, so the idea is that human language may not be the best way to transmit knowledge, because what grabs your attention often isn’t what lets you learn easily, cool, then what”
Then… you claim that there might be a Better Language to cut through these issues. That would be extremely impressive. But then I scroll back up and see the titles of the following posts. I’m afraid you will only describe issues with human communication without suggesting techniques to overcome them (at least in specific contexts).
For instance, you gave an example comparison for impressiveness (asteroid vs. climate change). Could you provide a comparison for relevance? Something that, by your lights, gets processed easily?