I have some time on my hands and would be interested in doing something meaningful with it. Ideally learn / research about AI alignment or related topics. Dunno where to start though, beyond just reading posts. Anyone got pointers? Got a background in theoretical / computational physics, and I know my way around the scientific Python stack.
AI alignment has been getting so much bigger as a field! It’s encouraging, but we still have a long way to go imo.
Did you see Shallow review of technical AI safety, 2025? I’d recommend looking through that post or their shallow review website, finding something that seems interesting, and starting there. Each sub-domain has its own set of jargon and assumptions, so I wouldn’t worry too much about trying to learn the foundations first, since we don’t have a common set of foundations yet.
Just reading posts isn’t bad, but since there isn’t that common set of foundations, it could be confusing when you’re just starting out (or even when you’re quite experienced).
Good luck and glad to have you!