“for our threat modeling, we can just argue about what AIs will be capable of, rather than needing to argue about inductive biases or whatever”
Where else do you think capabilities come from?
The data, as Zack M Davis argues. One of the takeaways from the deep learning revolution is that inductive biases matter a lot less than we thought, and data matters much more for AI behavior than we thought.
While I appreciate being cited, I don’t think this makes sense in context as a response to Wyeth’s remark. (“Inductive biases” and “data” aren’t somehow opposing explanations; doing induction on data trivially requires both, and Wyeth himself has written in favor of imitation learning as an alignment strategy.)
I agree that induction on data requires an inductive bias (short of resorting to look-up tables), but I do claim that, relative to what Max H was saying, you were arguing that modern AI behavior is determined more by the data than by the architecture.