The Beast With A Billion Votes
People (citizens of the US, let’s say) can learn things quite quickly from LLMs. The importance of AI safety can be expressed in a simple and intuitive way. The topic is genuinely interesting, relevant to the near future, and people are concerned about it.
As LLM use among the public grows, the basic reproductive number of reasonable-sounding ideas will increase. They will propagate quicker than ever. Some that couldn't propagate before will start to. A minimal seed / conversation starter for people to input into LLMs could be pretty powerful if it caused that LLM to output reasonable statements about the need for AI safety. People can probably recognize true statements about intelligence, agency, and power with some fidelity—these are universal experiences.
What is that minimal prompt? What efforts are being made to distill it, measure its effect, and eventually circulate it? We have epidemiological models for disease. Who is building the equivalent for concepts and rigorously testing the transmission rate and replication fidelity of specific ideas? Maybe some useful ones have R > 1 if packaged thoughtfully, or will soon.
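To make the epidemiological framing concrete, here's a minimal sketch (an illustrative model I'm supplying, not an established result): treat an idea as a branching process in which each carrier exposes some number of contacts per generation, and each exposure converts with some probability. Then R = contacts × p_transmit, and the idea tends to take off when R > 1 and fizzle when R < 1. The parameter names and values below are hypothetical.

```python
# Hypothetical toy model: idea spread as a branching process.
# R = contacts * p_transmit. Supercritical (R > 1) runs tend to grow;
# subcritical runs go extinct. Illustrative only.
import random

def simulate_spread(contacts=5, p_transmit=0.25, generations=10, seed_carriers=1):
    """Return carrier counts per generation for one run of the branching process."""
    counts = [seed_carriers]
    carriers = seed_carriers
    for _ in range(generations):
        # Each carrier exposes `contacts` people; each exposure converts
        # independently with probability `p_transmit`.
        new_carriers = sum(
            1
            for _ in range(carriers * contacts)
            if random.random() < p_transmit
        )
        counts.append(new_carriers)
        carriers = new_carriers
        if carriers == 0:  # the idea went extinct in this run
            break
    return counts

if __name__ == "__main__":
    print(f"R = {5 * 0.25}")  # 1.25 > 1: supercritical, but extinction still possible
    for trial in range(3):
        print(f"run {trial}: {simulate_spread()}")
```

Even with R > 1, individual runs can die out early by chance, which is one reason "packaged thoughtfully" matters: better packaging plausibly raises p_transmit and lowers the odds of early extinction.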
People will generate personalized bubbles for matters of taste, but will increasingly turn to the same few LLMs for argument. As LLMs speed up learning, people will get LessWrong.
(Add qualifiers wherever they belong—I wanted to get the point across. I think it's easy to dismiss the notion of an educated public in today's world, but the times are a-changin'.)