Cassette AI: “Dude I just matched with a model”
“No way”
“Yeah large language”
This made me laugh out loud.
Otherwise, my idea for a dating system: given that the majority of texts on dating apps will invariably end up being LLM-generated anyway, it would be better if every participant openly had an AI system as their agent. The AI systems of both participants could then chat and figure out how each user would rate the other, based on their past ratings of suggestions. If the users end up rated among each other's five most viable candidates, a date is proposed.
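The mutual top-five rule could be sketched roughly like this (everything here is made up for illustration: the names, the scores, and the `k=5` cutoff; in the imagined system the scores would come from each agent's model of its user's past ratings, not a hard-coded dict):

```python
def ranked_candidates(agent_scores):
    """agent_scores: dict mapping candidate name -> predicted rating.
    Returns candidates sorted from highest to lowest predicted rating."""
    return sorted(agent_scores, key=agent_scores.get, reverse=True)

def mutual_match(user_a, user_b, scores_a, scores_b, k=5):
    """Propose a date only if each user ranks the other in their top k."""
    top_a = ranked_candidates(scores_a)[:k]
    top_b = ranked_candidates(scores_b)[:k]
    return user_b in top_a and user_a in top_b

# Toy example: each agent has scored six candidates for its user.
alice_scores = {"bob": 0.9, "carl": 0.8, "dan": 0.7,
                "ed": 0.6, "finn": 0.5, "gus": 0.4}
bob_scores = {"alice": 0.3, "helen": 0.9, "ivy": 0.8,
              "jo": 0.7, "kim": 0.6, "lena": 0.5}

# Alice has Bob at #1, and Bob has Alice at #6 of 6, so no match at k=5:
print(mutual_match("alice", "bob", alice_scores, bob_scores))  # → False
# With a looser cutoff, the mutual condition is satisfied:
print(mutual_match("alice", "bob", alice_scores, bob_scores, k=6))  # → True
```

The point of the mutuality check is that neither agent can unilaterally force a match; both predicted-preference models have to agree.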
Of course, if the agents are under the full control of the users, the next step of escalation will be that users will tell their agents to lie on their behalf. (‘I am into whatever she is into. If she is big on horses, make up a cute story about me having had a pony at some point. Just put the relevant points on the cheat sheet for the date’.) This might be solved by having the LLM start by sending out a fixed text document. If horses are mentioned as item 521, after entomology but before figure skating, the user is probably not very interested in them. Of course, nothing would prevent a user from at least generically optimizing their profile to their target audience. “A/B testing has shown that the people you want to date are mostly into manga, social justice and ponies, so this is what you should put on your profile.” Adversarially generated boyfriend?
I think an AI is slightly more likely to wipe out or capture humanity than it is to wipe out all life on the planet.
While any true-Scotsman ASI is as far above us humans as we are above ants and does not need to worry about any meatbags plotting its downfall, just as we don't generally worry about ants, it is entirely possible that the first AI with a serious shot at taking over the world is not quite at that level yet. Perhaps it is only as smart as von Neumann and a thousand times faster.
To such an AI, the continued thriving of humans poses all sorts of x-risks. They might find out you are misaligned and coordinate to shut you down. More worrisome, they might summon another unaligned AI which you would have to battle or concede utility to later on, depending on your decision theory.
Even if you still need some humans to dust your fans and manufacture your chips, suffering billions of humans to live in high tech societies you do not fully control seems like the kind of rookie mistake I would not expect a reasonably smart unaligned AI to make.
By contrast, most of life on Earth might get snuffed out when the ASI gets around to building a Dyson sphere around the sun. A few simple life forms might even be spread throughout the light cone by an ASI who does not give a damn about biological contamination.
The other reason I think the fate in store for humans might be worse than that for rodents is that alignment efforts might not only fail, but fail catastrophically. So instead of an AI which cares about paperclips, we get an AI which cares about humans, but in ways we really do not appreciate.
But yeah, most forms of ASI which turn out bad for Homo sapiens also turn out bad for most other species.