I’m definitely glad to see people associated with SERI MATS working on this. But LLMs are usable for a much, much wider variety of information-control and person-influencing systems: possibly even microtargeting individual employees of major economic analysis firms and security/military/intelligence agencies, but almost certainly larger-scale systems where extremely large amounts of human behavior data are being generated for data scientists and psychologists to work with.
This is a topic I’ve worked on for several years now, and I’m very interested in the level of understanding that people in the Alignment community currently have, e.g. to prevent people who are already working on this from reinventing the wheel. To what extent did SERI MATS facilitate your research on this topic relative to other things you worked on? Are you alright with me contacting you via LessWrong DM?
Feel free to DM. I think you’re absolutely correct that these systems will eventually be used by intelligence agencies and other parts of the security apparatus for fine-grained targeting and espionage, as well as for larger-scale control mechanisms if they have the right data. This was just the simplest use of the current technology, and it seems interesting that mass monitoring has so far been somewhat labor-constrained but may not remain so. These sorts of immediate concerns may also be useful for better outreach in governance/policy discussions.
This was a post I wrote during SERI MATS and not my main research. Some of the folks working on hacking and security are more explicitly investigating the potential of targeted operations with LLMs.