How do you feel about this strategy today? What chance of success would you give it, especially in light of the recent “Locating and Editing Factual Associations in GPT” (ROME), “Mass-Editing Memory in a Transformer” (MEMIT), and “Discovering Latent Knowledge in Language Models Without Supervision” (CCS) methods?
How does this compare to the strategy you’re currently most excited about? Do you know of other ongoing (empirical) efforts that try to realize this strategy?
Hey everyone, my name is Kay. I’m 24 years old. My friends would describe me as reliable, ambitious, curious, and funny. I was raised by Polish parents in Germany, where I finished high school and an undergraduate degree in business administration. Later I discovered a love for philosophy, which I studied for some time before realizing I was itching to learn how to program and build things. So I switched fields and started doing data science in 2019. Now I’m at a point where I’d like to devote the upcoming months to studying reinforcement learning. I can’t think of a more exciting career than working on general artificial intelligence, or superintelligence more broadly.
I’m currently looking for a reinforcement learning study or paper-reading group. It’s important to me to be surrounded by ambitious, self-motivated people I can learn from. If you know of any such group, I’d greatly appreciate it if you shared it with me. Otherwise, feel free to contact me and we can create a study group ourselves.