Other people were commending your tabooing of words, but I feel that using terms like “multi-layer parameterized graphical function approximator” fails to do that, and makes matters worse, because it invites the non-central fallacy. It would have been more appropriate to use a term like “magic” or “blipblop”. Calling something a function approximator leads readers to carry a lot of associations into their interpretation that probably don’t apply to deep learning, since deep learning is a very specific example of function approximation, one that deviates from the prototypical examples in many respects. (I think when you say “function approximator”, the image that pops into most people’s heads is fitting a polynomial to a set of datapoints in R^2.)
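(To make concrete what I mean by that prototypical image, here’s a minimal sketch in Python with numpy; the particular degree, data, and noise level are just illustrative, not anything from the post:)

```python
# The prototypical "function approximator": fit a low-degree polynomial
# to a handful of noisy (x, y) points in R^2.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 20)
y = np.sin(3.0 * x) + 0.1 * rng.standard_normal(x.shape)  # noisy target

coeffs = np.polyfit(x, y, deg=3)   # least-squares polynomial fit
approx = np.polyval(coeffs, x)     # evaluate the fitted polynomial

print("max abs error:", np.abs(approx - y).max())
```

Nothing about this picture suggests agency or dangerous capabilities, which is exactly why carrying its associations over to deep learning is misleading.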
Calling something a function approximator is only meaningful if you make a strong argument for why a function approximator can’t (or at least is systematically unlikely to) give rise to specific dangerous behaviors or capabilities. But I don’t see you giving such arguments in this post. Maybe I did not understand it. In either case, you can read posts like Gwern’s “Why Tool AIs Want to Be Agent AIs” or Yudkowsky’s writings, which explain why goal-directed behavior is a reasonable thing to expect to arise from current ML, replace every instance of “neural network” / “AI” with “multi-layer parameterized graphical function approximator”, and I think you’ll find that all the arguments make just as much sense as they did before. (Modulo some associations seeming strange, but like I said, I think that’s because there is some non-central fallacying going on.)