→ help avoid catastrophic AI failures…
Ethically aligned prototype: RLLMv3
Unethically aligned prototype: Paperclip-Todd
→ help avoid catastrophic AI failures…
Ethically aligned prototype: RLLMv3
Unethically aligned prototype: Paperclip-Todd
The proposal is trying to point out a key difference in the way alignment reasearch and Carl Jung understood pattern recognition in humans.
I stated as one of the limitations of the paper that:
“The author focused on the quality of argument rather than quantity of citations, providing examples or testing. Once approved for research, this proposal will be further tested and be updated.”
I am recommending here a research area that I honestly believe that can have a massive impact in aligning humans and AI.
I think it’s different from the shadow archetype… It might be more related to the trickster..
Hmmmm. Well us humans have all archetypes in us but at different levels at different points of time or use. I wonder what triggered such representations? well it’s learning from the data but yeah what are the conditions at the time of the learning was in effect—like humans react to archetypes when like socializing with other people or solving problems...hmmmmm. super interesting. Yeah to quote Neitzsche is fascinating too, I mean why? is it because many great rappers look up to him or many rappers look up to certain philosophers that got influenced by Neitzsche? super intriguing..
I will be definitely looking forward to that report on petertodd phenomenon, I think we have touched something that Neuroscientists / psychologists have been longing find...
The strict version of the simulation objective is optimized by the actual “time evolution” rule that created the training samples. For most datasets, we don’t know what the “true” generative rule is, except in synthetic datasets, where we specify the rule.
I hope I read this before while doing my research proposal. But pretty much have arrived to the same conclusion that I believe alignment research is missing out—the pattern recognition learning systems being researched/deployed currently seems to lack a firm grounding on other fields of sciences like biology or pyschology that at the very least links to chemistry and physics.
What if the input “conditions” in training samples omit information which contributed to determining the associated continuations in the original generative process? This is true for GPT, where the text “initial condition” of most training samples severely underdetermines the real-world process which led to the choice of next token.
What if the training data is a biased/limited sample, representing only a subset of all possible conditions? There may be many “laws of physics” which equally predict the training distribution but diverge in their predictions out-of-distribution.
I honestly think these are not physics related questions though they are very important to ask. These can be better associated to the bias of the researchers that chosed the input conditons and the relevance of training data.
Guessing the right theory of physics is equivalent to minimizing predictive loss. Any uncertainty that cannot be reduced by more observation or more thinking is irreducible stochasticity in the laws of physics themselves – or, equivalently, noise from the influence of hidden variables that are fundamentally unknowable.
This is the main sentence in this post. The simulator as a concept might even change if the right physics were discovered. I would be looking forward to your expansion of the topic in the succeeding posts @janus.
Could enough human-imitating artificial agents (running much faster than people) prevent unfriendly AGI from being made?
I think the problem of scale doesn’t necessarily gets solved through quantity—because there are just qualitative issues (eg. loss of human life) that no amount of infrastructure upscale can compensate.
Outside of apes and monkeys, dophins and elephants, as well as corvids also appear in anecdotal reports and the scientific literature to have many complex forms of empathy.
Might be related to Erich Neumann’s book The Great mother which cites: “The psychological development [of humankind]… begins with the ‘matriarchal’ stage in which the archetype of the Great Mother dominates and the unconscious directs the psychic process of the individual and the group.” It’s like when we see animals in the wild eg. the lioness and its cub, we always associate it as the mother and its child—we do not have to google or open a book to like ensure that it is the case but deep within our psyche is that pattern that allows us to interpret it as such.
Thank you.
I’m sorry, I have no way to answer your question.. I just hope in the future we do.
Hello Moderators/Readers,
I am curious as to why the post was downvoted. I would appreciate an explanation so I can improve my writing moving forward. I aim to arrive at helping in solving the alignment problem. Thank you.
Thank you for your response.
I understand your 2nd point, but to comment on your 1st comment—is having the simpler question always the right thing to focus on? isn’t searching for the right questions to ponder the best way to arrive at the best solutions?
Subpar questions lead to incomplete /wrong answers. If it happens to be that we wrongly framed the alignment problem, the cost of this is huge or even catastrophic. It’s still cheaper to question even the best ideas now rather than change directions or correct errors later.
I’m in the process of writing it. Will link it here once finished. Thanks for being more direct too.
Hello there,
Are you interested of funding this theory of mine that I submitted to AI alignment awards? I am able to make this work in GPT2 and now writing the results. I was able to make GPT2 shutdown itself (100% of the time) even if it’s aware of the shutdown instruction called “the Gauntlet” embedded through fine-tuning an artificially generated archetype called “the Guardian” essentially solving corrigibility, outer and inner alignment.
https://twitter.com/whitehatStoic/status/1646429585133776898?t=WymUs_YmEH8h_HC1yqc_jw&s=19
Let me know if you guys are interested. I want to test it in higher parameter models like Llama and Alpaca but don’t have the means to finance the equipment.
I also found out that there is a weird setting in the temperature for GPT2 where in the range of .498 to .50 my shutdown code works really well, I still don’t know why though. But yeah I believe that there is an incentive to review what’s happening inside the transformer architecture.
Here was my original proposal: https://www.whitehatstoic.com/p/research-proposal-leveraging-jungian
I’ll post my paper for the corrigibility solution too once finished probably next week but if you wish to contact me, just reply here or email me at migueldeguzmandev@gmail.com.
If you want to see my meeting schedule, You can find it here: https://calendly.com/migueldeguzmandev/60min
Looking forward to hearing from you.
Best regards,
Miguel
Update: Already sent an application, I didn’t saw that in my first read. Thank you.
No time to rest. I’m starting to build The Guardian version 002.
Thank you Ruby. I had posted it a month ago in my blog and thinking how will this idea that I am experiencing will be received in this forum. No Worries, thanks for the time reviewing this.