I was fully expecting to have to write yet another comment about how human-level AI will not be very useful for a nuclear weapons program. I concede that the dangers mentioned instead (someone putting an AI in charge of a reactor or a nuke) seem much more realistic.
Of course, the utility of avoiding sub-extinction negative outcomes with AI in the near future is highly dependent on p(doom). For example, if there is no x-risk, then the first order effects of avoiding locally bad outcomes related to CBRN hazards are clearly beneficial.
On the other hand, if your p(doom) is 90%, then making sure that non-superhuman AI systems work without incident is akin to clothing kids in asbestos gear so they don’t hurt themselves while playing with matches.
Basically, if you think a road leads somewhere useful, you would prefer that the road goes smoothly, while if a road leads off a cliff you would prefer it to be full of potholes so that travelers might think twice about taking it.
Personally, I tend to favor first-order effects (like fewer crazies being able to develop chemical weapons) over hypothetical higher order effects (like chemical attacks by AI-empowered crazies leading to a Butlerian Jihad and preventing an unaligned AI killing all humans). “This looks locally bad, but is actually part of a brilliant 5-dimensional chess move which will lead to better global outcomes” seems like the excuse of every other movie villain.
This made me laugh out loud.
Otherwise, my idea for a dating system would be this: given that the majority of texts written will invariably end up being LLM-generated anyway, it would be better if every participant openly had an AI system as their agent. The AI systems of both participants could then chat and figure out how each user would rate the other, based on their past ratings of suggestions. If the users end up rating each other among their five most viable candidates, the system could suggest a date.
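The mutual-matching step could be sketched roughly as follows (all names and the scoring scale are hypothetical; "predicted rating" stands in for whatever score each user's agent infers from that user's past ratings):

```python
def mutual_top_k_match(ratings_a, ratings_b, user_a, user_b, k=5):
    """Return True if user_a and user_b each appear among the other's
    top-k candidates by predicted rating.

    ratings_a: dict mapping candidate name -> rating predicted by A's agent
    ratings_b: dict mapping candidate name -> rating predicted by B's agent
    """
    # Sort each candidate pool by predicted rating, highest first,
    # and keep only the k most viable candidates.
    top_a = sorted(ratings_a, key=ratings_a.get, reverse=True)[:k]
    top_b = sorted(ratings_b, key=ratings_b.get, reverse=True)[:k]
    # A date is only suggested if the interest is mutual.
    return user_b in top_a and user_a in top_b


# Hypothetical example: each agent has already scored a candidate pool.
ratings_alice = {"bob": 0.9, "carl": 0.4, "dan": 0.7}
ratings_bob = {"alice": 0.8, "eve": 0.3}
print(mutual_top_k_match(ratings_alice, ratings_bob, "alice", "bob"))
```

The point of the mutual top-k condition is that neither agent reveals its user's full ranking; only the binary match/no-match outcome needs to be shared.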
Of course, if the agents are under the full control of the users, the next step of escalation will be that users tell their agents to lie on their behalf. (‘I am into whatever she is into. If she is big on horses, make up a cute story about me having had a pony at some point. Just put the relevant points on the cheat sheet for the date.’) This might be mitigated by having each agent start by sending out a fixed text document of its user’s interests, committed to before any conversation. If horses are mentioned as item 521, after entomology but before figure skating, the user is probably not very interested in them. Of course, nothing would prevent a user from at least generically optimizing their profile for their target audience: “A/B testing has shown that the people you want to date are mostly into manga, social justice and ponies, so this is what you should put on your profile.” Adversarially generated boyfriend?