AI alignment researcher and ML engineer. Master’s in Neuroscience.
I believe that cheap and broadly competent AGI is attainable and will be built soon; this puts my timelines at roughly 2024–2027. Here’s an interview I gave recently about my current research agenda. I think the best path forward to alignment is safe, contained testing of models designed from the ground up for alignability and trained on censored data (simulations with no mention of humans or computer technology). I think that current mainstream ML technology is close to a threshold of competence beyond which it will be capable of recursive self-improvement, that this automated process will mine neuroscience for insights, and that it will quickly become far more effective and efficient. It would be quite bad for humanity if this happened in an uncontrolled, uncensored, un-sandboxed situation, so I am trying to warn the world about this possibility.
See my prediction markets here:
I also think that current AI models pose misuse risks, which may continue to get worse as models get more capable, and that this could potentially result in catastrophic suffering if we fail to regulate this.
I now work for SecureBio on AI-Evals.
Relevant quotes:
“There is a powerful effect to making a goal into someone’s full-time job: it becomes their identity. Safety engineering became its own subdiscipline, and these engineers saw it as their professional duty to reduce injury rates. They bristled at the suggestion that accidents were largely unavoidable, coming to suspect the opposite: that almost all accidents were avoidable, given the right tools, environment, and training.” https://www.lesswrong.com/posts/DQKgYhEYP86PLW7tZ/how-factories-were-made-safe
“The prospect for the human race is sombre beyond all precedent. Mankind are faced with a clear-cut alternative: either we shall all perish, or we shall have to acquire some slight degree of common sense. A great deal of new political thinking will be necessary if utter disaster is to be averted.”—Bertrand Russell, The Bomb and Civilization, 1945.08.18
“For progress, there is no cure. Any attempt to find automatically safe channels for the present explosive variety of progress must lead to frustration. The only safety possible is relative, and it lies in an intelligent exercise of day-to-day judgment.”—John von Neumann
“I believe that the creation of greater than human intelligence will occur during the next thirty years. (Charles Platt has pointed out that AI enthusiasts have been making claims like this for the last thirty years. Just so I’m not guilty of a relative-time ambiguity, let me be more specific: I’ll be surprised if this event occurs before 2005 or after 2030.)”—Vernor Vinge, The Coming Technological Singularity
Yeah, I think there’s something important to be said for ‘incremental decision-making’: getting to try something on for size for a bit, seeing whether it is what it was advertised to be, and being allowed to cancel the bargain if it turns out to be a bad one.
I would feel better about the Qatar situation if it were easier for the workers to quit and go back home, and if the true fatality rate were disclosed to potential workers. Other than that, I think it should be allowed.
Selling kidneys? That’s a trickier situation because it’s high-impact and an irreversible, lifelong change. I still wouldn’t outright forbid it, but I’d for sure want more protections in place: making sure people were properly warned, got to talk to others who had done it about the effects, got time to consider their decision without being pressure-sold on it, and had some third-party arbiter ensuring they got paid the agreed-upon amount.
Selling oneself or one’s children into servitude with no cancellation clauses and little to no rights? Humanity has tried this one, and it tends to go really poorly.
There. Now we have some limits, and we can discuss where to draw which lines. No longer an unbounded claim toward unfettered contractualism, no longer an unbounded claim toward unfettered paternalism.
What do you think of my stipulations?