Agreed. LLMs will make mass surveillance (literature, but also phone calls, e-mails, etc) possible for the first time ever. And mass simulation of false public beliefs (fake comments online, etc). And yet Meta still thinks it’s cool to open source all of this.
It’s quite concerning. Given that we can’t really roll back ML progress… the best case is probably to make well-designed encryption the standard. And vote/demonstrate where you can, of course.