[Question] As a Washed-Up Former Data Scientist and Machine Learning Researcher, What Direction Should I Go In Now?

Some background: I was interested in AI before the hype, back when neural networks were just an impractical curiosity in our textbooks. I went through an undergrad in Cognitive Science and decided there was something to the idea that connectionist, bottom-up AI had tremendous untapped potential, because I saw the working example of the human mind. So I embarked on a Masters in Computer Science focused on ML and graduated at just about the perfect time (2014) to jump into industry and make a splash. It helped that I'd been ambitious and tried to create crazy things like the Music-RNN and the Earthquake Predictor Neural Network, which, though not technically effective, showed surprising amounts of promise. The Music-RNN could at least generate sounds that vaguely resembled the audio data, and the Earthquake Predictor predicted the Ring of Fire: low-magnitude, high-frequency quakes that weren't important, but hey, it was better than random...

I had also published two mediocre papers earlier, on topics like occluded object recognition, in some fairly inconsequential conferences. But combined with my projects and the AI hype, that was enough for a Canadian startup called Maluuba (which would later be bought by Microsoft) to take a chance on me and hire me as a Data Scientist for a few months doing NLP. Later, my somewhat helpful posts on the Machine Learning subreddit attracted the attention of a recruiter from Huawei, and I ended up spending a few years as a Research Scientist at their Canadian subsidiary (specifically in Noah's Ark Lab), working first on NLP and later on Computer Vision. Unfortunately, I was foolish and got caught up in office politics that basically derailed my career within the company, and I eventually requested a buyout package to avoid being stuck on projects I didn't think were relevant, under a manager I didn't agree with.

Alas, I found myself unemployed right when COVID-19 hit. Though I was still interviewing at places like Amazon, Facebook, and Deloitte, my lack of a PhD and so-so engineering ability hampered my efforts to get back into the industry, and made me question whether I could still compete in a market that seemed a lot more saturated than before.

So recently I started to read some books that had been on my todo list for a while, like Bostrom's Superintelligence and Russell's Human Compatible. Before I started my career, I'd spent a fair bit of time reading through the Less Wrong Sequences by Eliezer and also posting some of my own naive ideas, like the contrarian concept of an AI Existential Crisis (contrary to the Orthogonality Thesis, the idea that the AI itself would question and potentially change its values), and the Alpha Omega Theorem (which in retrospect is actually very similar to Bostrom's ideas of Anthropic Capture and the Hail Mary solution to the Alignment Problem). Even while working in my career, I did think about the Alignment Problem, though like most ML practitioners, I saw it as such a far-off, amorphous challenge with no obvious avenue of attack that I didn't see a clear way to work on it directly.

At Maluuba and Huawei, I'd have some casual conversations with colleagues about the Alignment Problem, but we kept working on our models without really considering whether it was right to. After all, I needed to eat and make enough to live comfortably first, ML was definitely good money, and the models did really cool stuff! But my recent time away from work has given me a chance to think about things a lot more, and to wonder whether prodding the technology forward by even a tiny increment could actually be harmful, given how far away we seem to be from a robust solution to Alignment.

So, naturally, I wondered if I could do research directly on the problem and be useful. After doing some reading on the SOTA papers… it seems like we're still in the very early stages of coming up with definitions and building conceptual frameworks, and I'm worried that, compared to, say, the enormous, ever-expanding literature on Reinforcement Learning, things are going waaay too slowly.

But on the other hand, I don't know where to begin to be useful with this. Or whether, given how important this work could be, I might end up making things worse by contributing work that isn't rigorous enough. One of the things I learned from doing research in industry is that experimental rigor is actually very hard to do properly, and almost everyone, in academia as well as industry, cuts corners to get things out ASAP so they can flag-plant on arXiv. And then people complain about results that aren't reproducible and demand source code. There's a lot of noise in the signal, even when we have working prototypes and models running. As for how we can expect to align models by proving things in advance rather than through experimentation… it just seems doubtful to me, because the working models we use in industry always have to endure testing, and the uncertainty means it's basically impossible to guarantee there won't be some edge case that fails.

So I guess the question boils down to: how seriously should I consider switching into the field of AI Alignment, and if not, what else should I do instead? Should I avoid working on AI at all and just do something fun like game design, or is it still a good idea to push ML forward despite the risks? And if I should switch to AI Alignment, can it be a career, or will I need to find something else to pay the bills as well?

Any advice is much appreciated. Thank you for your time and consideration.