Thanks, but, uhm, try not to specify “your mom” as the background and “what the actual fuck is AI alignment” as your question if you want it to have a writing style that’s not full of “we’re toast”.
Maybe add an option to not specify the writing style at all, for impatient people like me?
Unless you see this more as something to be used by advocacy/comms groups to make materials for explaining things to different audiences, which makes sense.
If the general public is really the target, then adding some kind of voice mode seems like it would reduce latency a lot.
This specific page is not really optimized for use by anyone whatsoever; there are maybe five bugs, each solvable with one query to Claude, and none of them a priority; the cool thing I want people to look at is the chatbot (when you give it some plausible context)!
(Also, non-personalized intros to why you should care about AI safety are still better done by people.)
I really wouldn’t want to give a random member of the US general public a thing that advocates for AI risk while having a gender drop-down like that.[1]
The kinds of interfaces it would have if we get to scale it[2] would depend a lot on where specific people are coming from. E.g., demographic info can be pre-filled and not necessarily displayed if it comes from ads; or maybe we ask one person we’re talking to to share it with two other people, and generate unique links pre-filled with info provided by the first person; etc. (A rough sketch of the link-generation idea is below, after the footnotes.)
Voice mode would have huge latency due to the 200k-token context and the thinking done before responding; see the back-of-the-envelope estimate below.
[1] Non-binary people are people, but the dropdown creates an unnecessary negative halo effect for a significant portion of the general public. Also, dropdowns = unnecessary clicks = bad.
[2] Which I really want to! Someone please give us the budget and volunteers! At the moment, we have only me working full-time (for free), $10k from SFF, and ~$15k from EAs who considered this to be the most effective nonprofit in this field. Reach out if you want to donate your time or money. (Donations are tax-deductible in the US.)
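To make the unique-link idea above concrete, here is a minimal sketch. Everything in it (the base URL, parameter names, and the signing step) is an assumption for illustration, not a description of the actual implementation.

```python
# Hypothetical sketch: build a unique share link carrying pre-filled context
# provided by the person doing the sharing. The base URL and parameter names
# are invented for illustration.
import hashlib
import hmac
import secrets
from urllib.parse import urlencode

SECRET_KEY = b"replace-with-a-real-secret"  # assumed server-side secret


def make_prefilled_link(base_url: str, background: str, referrer_id: str) -> str:
    """Return a unique URL with pre-filled background info for one recipient."""
    params = {
        "bg": background,               # e.g. "retired teacher, non-technical"
        "ref": referrer_id,             # who shared the link
        "t": secrets.token_urlsafe(8),  # makes each shared link unique
    }
    # Sign the parameters so the pre-filled info can't be silently tampered with.
    payload = urlencode(sorted(params.items())).encode()
    params["sig"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()[:16]
    return f"{base_url}?{urlencode(params)}"


# The first person supplies context for someone they want to share the link with:
print(make_prefilled_link("https://example.org/chat", "my mom, retired teacher", "user-42"))
```

Signing the query string is just one way to keep shared links from being edited to misrepresent the person who filled them in; a server-side lookup keyed on the token would work equally well.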
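And a back-of-the-envelope version of the latency point. Only the ~200k-token context size comes from the comment above; the throughput and thinking-length numbers are assumptions for illustration.

```python
# Rough latency estimate for one voice reply with a ~200k-token prompt.
# Only the context size comes from the comment; the rest are assumed figures.
CONTEXT_TOKENS = 200_000     # fixed context sent with every query
PREFILL_TOK_PER_S = 5_000    # assumed prompt-processing speed, no prompt caching
THINKING_TOKENS = 2_000      # assumed hidden reasoning before the answer starts
DECODE_TOK_PER_S = 50        # assumed generation speed

prefill_s = CONTEXT_TOKENS / PREFILL_TOK_PER_S    # 40 s before any output
thinking_s = THINKING_TOKENS / DECODE_TOK_PER_S   # 40 s of thinking
print(f"time to first spoken word: ~{prefill_s + thinking_s:.0f} s")
```

Prompt caching would shrink the prefill term, but the thinking time alone is probably already longer than a voice interface can comfortably hide.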
Is the 200k context itself available to use anywhere? How different is it from the Stampy.ai dataset? No worries if you don’t know, e.g. because you don’t know what exactly Stampy’s dataset is.
I get a lot of questions from regular ML researchers about what exactly alignment is, and I wish I had an actually good thing to send them. Currently I either give a definition myself or send them to alignmentforum.
Nope, I’m somewhat concerned about unethical uses (e.g., talking to a lot of people without disclosing it’s an AI), so I won’t publicly share the context.
If the chatbot answers questions well enough, we could in principle embed it into whatever you want, if that seems useful. We currently have a couple of requests like that. DM me somewhere?
Stampy uses RAG & is worse.
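For readers unfamiliar with the distinction, here is a toy contrast between the two architectures being compared: retrieval-augmented generation over a dataset versus putting an entire curated corpus into the model’s context. This is a generic sketch, not how either Stampy or this chatbot actually works; call_llm and the keyword retriever are placeholders.

```python
# Toy contrast between RAG and full-context prompting. This is a generic
# illustration, not either project's real pipeline; call_llm stands in for
# an actual model API call, and the keyword retriever stands in for embeddings.

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call a model here.
    return f"[answer generated from a {len(prompt)}-char prompt]"

CORPUS = {
    "what-is-alignment.md": "Alignment is the problem of getting AI systems to pursue the goals their developers intend.",
    "why-care.md": "Sufficiently capable systems pursuing unintended goals could cause large-scale harm.",
}

def rag_answer(question: str, top_k: int = 1) -> str:
    """RAG: retrieve only the most relevant documents, then prompt with those."""
    def overlap(doc: str) -> int:
        return len(set(question.lower().split()) & set(doc.lower().split()))
    retrieved = sorted(CORPUS.values(), key=overlap, reverse=True)[:top_k]
    return call_llm("Context:\n" + "\n".join(retrieved) + f"\n\nQuestion: {question}")

def full_context_answer(question: str) -> str:
    """Long-context: the entire curated corpus goes into every prompt."""
    return call_llm("Context:\n" + "\n".join(CORPUS.values()) + f"\n\nQuestion: {question}")

print(rag_answer("What exactly is alignment?"))
print(full_context_answer("What exactly is alignment?"))
```

The trade-off is roughly: RAG keeps each prompt small and cheap but can fail to retrieve the relevant material, while the full-context approach avoids retrieval failures at the cost of latency and per-query expense (hence the voice-mode latency concern above).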