This paragraph also misses the possibility of constructing an LLM and/or training methodology such that it will learn certain functions, or cannot learn certain functions. On top of that, it conflates "reliable" with "provable".
Perhaps some provision elsewhere in the text addresses these objections, but I'm not going to go searching for it. The abstract smells enough like bullshit that I'd rather do something else.