If there are any alignment researchers reading this who think they would benefit from having someone to talk to about improving their research capacity, I’m happy to be that person.
I’m offering free debugging-style conversations about improving research capacity to any alignment researchers who want them. Here’s my calendly link if you’d like to grab time on my calendar: https://calendly.com/dcjones15/60min .
I’m not claiming to have any answers or ready-made solutions. I primarily add value by asking questions that elicit your own thoughts and help you come up with your own improvement plans, tailored to your specific needs. A number of researchers have told me these conversations are productive for them, so the same may be true for you.
I’m not new to reading LessWrong, but I am new to posting or commenting here. I plan to be more active in the future. I care about the cause of AI alignment, and I’m currently in the process of shifting my career from low-level operations work at MIRI to something I think may be more impactful: supporting alignment researchers in their efforts to level up in research effectiveness, by offering myself as a conversational partner to help them think through their own up-leveling plans.
In that spirit, here’s an offer I’d like to make to any interested alignment researchers who come across this comment.
The Offer
Free debugging-style conversations (could be just one, or recurring) aimed at helping you become a more effective researcher.
How to sign up?
Here’s my Calendly link: https://calendly.com/dcjones15/60min (low-pressure sign-up; it’s fine to cancel later if you change your mind).
DM’ing me also works great!
Questions you may have:
What would the conversation look like?
I’ll mostly try to ask good questions to elicit your own thoughts and ideas.
I’ll help you get unstuck if you feel confused about, or averse to, the subject.
I’ll make the occasional suggestion.
Who am I, and why might I be a good person to talk to about this?
I’ve been doing low-level operations work for MIRI for the past four years.
I’m not a researcher, but I have thought a lot about improving my own intellectual processes over the years, and have had some good results with that.
The few of these conversations I’ve had so far seemed productive, so the same may be true for you.
Why?
Alignment researchers from various organizations have told me they don’t invest in leveling up as much as they endorse, and that when they try to level up, it’s aversive or difficult. I suspect just having someone to discuss it with can help a lot.
I really enjoy conversations like this, and am hoping to one day be good enough at them to get paid to do it. So, I need lots of practice!