> This has also been my direct experience studying and researching open-source models at Conjecture.
Interesting! Assuming it’s public, what are some of the most surprising things you’ve found open source models to be capable of that people were previously assuming they couldn’t do?
This matters for advocacy for pausing AI, or failing that, advocacy about how far back the red lines ought to be set. To give a really extreme example: if it turns out that even an old model like GPT-3 could tell the user exactly how to make a novel bioweapon when prompted weirdly, it seems really useful to be able to convince policymakers of this fact, though the weird prompting technique itself should of course be kept secret.