Richard Hollerith. 15 miles north of San Francisco. hruvulum@gmail.com
My probability that AI research will end all human life is .92. It went up drastically when Eliezer started going public with his pessimistic assessment in April 2022. Until then, my confidence in MIRI (and my knowledge that MIRI has enough funding to employ many researchers) was keeping my probability down to about .4. (I am glad I found out about Eliezer’s assessment.)
Currently I am willing to meet with almost anyone on the subject of AI extinction risk.
Last updated 26 Sep 2023.
I’m basically not worried about this.
Google Search has proven pretty good at keeping spam and content farms from showing up in search results at rates that would make Google Search useless, despite the fact that spammers and SEO actors spend billions of dollars per year trying to influence the search results (in ways that are good for the SEO actor or his client, but bad for users of Google Search).
Moreover, even though none of OpenAI, Anthropic, or DeepSeek had access to the expertise, software, or data Google uses to filter this bad content out of search results, this bad content (spam, content farms, and other output of SEO actors) has, as far as I can tell, very little influence on the answers given by the current crop of LLM-based services from these companies.
The creator of an LLM is motivated to make the LLM as good as possible at truthseeking (because truthseeking correlates with usefulness to users). If it hasn’t happened already, then within at most a couple of years LLMs will have become good enough at truthseeking to filter out the kind of spam you are worried about, even though the creator of the LLM never directed large quantities of human attention and skill specifically at that problem the way Google has had to do over the last 25 years against the efforts of SEO actors. The labs are also motivated to make the answers provided by LLM services as relevant as possible to the user, which also has the effect of filtering out content produced by psychotic people.
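To make that concrete, here is a minimal sketch of using an LLM itself as a spam filter. It assumes the `openai` Python package and an API key in the environment; the model name and the prompt are illustrative placeholders, not anything the labs are known to use internally.

```python
# Minimal sketch: asking an LLM whether a document looks like SEO spam.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name and the prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def looks_like_seo_spam(text: str) -> bool:
    """Return True if the model judges the text to be spam or content-farm filler."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer with a single word: YES or NO."},
            {"role": "user",
             "content": "Is the following text SEO spam or content-farm filler?\n\n" + text},
        ],
    )
    answer = response.choices[0].message.content.strip().upper()
    return answer.startswith("YES")

# Example: a service could drop documents for which this returns True
# before they ever influence an answer shown to a user.
print(looks_like_seo_spam("Top 10 BEST vacuum deals 2024!!! Click now for cheap cheap cheap prices!"))
```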