I find trying to find funding or paid roles or even unpaid roles so demoralizing. How do I keep motivated?
I don’t want to focus on trying to survey the landscape of funding opportunities and learning to network with people productively. It’s so much nicer to just focus on the work I want to be doing, but it seems I either can’t make it legible enough fast enough, or it’s actually not valuable and I should go do something else with my time.
I want advice. How do I get funding? How do I think about getting funding? How do I stay motivated to keep thinking about how to get funding?
What work are you doing? Is any of it publicly viewable?
And a different question: How young are you? Are there experienced people who have worked with you and can vouch for the quality of your work / strategic orientation / etc?
I’m 35. You can view my experience on my LinkedIn profile. I was working as a technologist at an automotive company, involved with some AI projects in collaboration with the Vector Institute. That’s when GPT-3 was released, which prompted me to take the prosaic scaling hypothesis more seriously: I changed my plan to saving money so I could finish my CS BSc, and changed my career goal to working on technical AI alignment.
While completing my BSc I had the opportunity to focus on my NDISP project, first as a directed studies project supervised by George Tzanetakis, and then extended it into an honours project supervised by Teseo Schneider. George is a professor focused on classical AI and music algorithms; Teseo is a professor focused on graphics algorithms. They are probably the most relevant experienced people who have worked with me and could vouch for the quality of my work, but neither is focused on technical alignment, so they probably cannot vouch for my strategic orientation.
My project was mostly self-driven: I attempted to extend and apply the tools introduced in Visualizing Neural Networks with the Grand Tour to the same network that was examined in Understanding and controlling a maze-solving policy network. This is part of a long-term plan to first build intuition for interactive n-dimensional tools by applying them to relatively easy-to-understand image networks, before applying them to relatively harder-to-understand transformer networks.
I have reached out to some of the authors of those papers and have had brief correspondences with Mingwei Li, TurnTrout, peligrietzer, and Ulisse Mini, but I’m unsure how deeply any of them have looked into my work.
Here’s the page describing my projects.
I think the NDISP project has the clearest value prospect. I’m currently rewriting the tool from scratch to be standalone and ready for alpha users. I’d recommend skimming the videos for a sense of the project.
All of my other projects seem to be less legible, with a much lower probability of much greater usefulness. Charitably, they could be described as working on useful paradigm shifts for the field of AI alignment and rational global coordination. Less charitably, they could be described as a crackpot shouting at clouds. I might describe them as Butterfly Ideas that I really want to get out of the butterfly-idea stage, but alas, they keep flapping around me.
My guess is you should get more experience before trying to set your own research directions, especially if they diverge considerably from existing ones. The default is that all research directions are bad, and AI safety is becoming mature enough that good ideas come from experience rather than from first principles. Also in the current environment, automation makes it efficient to execute on good ideas and puts a deadline on gaining experience.
That is commonly given advice, and it makes sense: when you are starting out, you don’t know what you don’t know and can’t see the flaws in your own ideas. But on the other hand, coming up with your own ideas is its own skill, one that may not be trained well by only learning from other people’s experience. It’s hard to say. I suppose the obvious ideal is to practice coming up with your own ideas while having experienced mentors critique them.
What kinds of things do you have in mind when you say “get more experience”? I am applying to fellowships but haven’t been accepted to any yet. I don’t want to do more ML work that doesn’t focus on AI alignment if I can help it. I was considering writing some literature reviews. There are also some papers I would like to try replicating.
But if I’m being honest, the things that feel most valuable to me are working on NDISP, OISs, and Maat, or finding other, similar enough projects and contributing to them. I guess I’m gambling with the time I have to focus on these things, and I need to accept that if I’m deciding to focus on projects I think will be valuable but other people don’t see the value in, then I’ll have to keep working on them without financial or moral support, and accept the consequences of doing so.