The process of finding this model of the world is much more complex than anything our AI can do at inference time, and the intermediate results are too complex and numerous to be “memorized” in the weights of our trained AI. So there doesn’t seem to be any way to break the model-finding work into pieces that can be delegated to an ML assistant (in amplification) or a debater (in debate).
I am not understanding this, but it’s probably a simple ML terminology thing.
First you train a model, then you use it repeatedly as a black box (of the type: video-camera data in → further (predicted) video-camera data out). It has a model of physics and of the broader system it sits in (Earth, the 2000s, the industrial revolution has happened, etc.).
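The train-once, use-as-a-black-box pipeline I have in mind can be sketched as follows. This is a toy illustration with made-up function names (`train`, `predict` are hypothetical stand-ins, not any real API); the point is only that the expensive model-finding happens once, and everything the model "knows" about physics and current-Earth then lives in the returned weights:

```python
import numpy as np

# Hypothetical stand-in for the expensive training step: fit parameters once.
# After this, the "world model" (physics, current-Earth facts) lives entirely
# in the returned weights, not in any computation redone at inference time.
def train(frames: np.ndarray) -> np.ndarray:
    # placeholder for a real training loop; returns learned weights
    return frames.mean(axis=0)

# At inference time the model is a cheap black box:
# video-camera frame in -> predicted next frame out.
def predict(weights: np.ndarray, frame: np.ndarray) -> np.ndarray:
    # placeholder prediction: blend the new frame with what training captured
    return 0.5 * (weights + frame)

frames = np.zeros((10, 4, 4))              # toy "video-camera data"
weights = train(frames)                    # model-finding happens once, here
next_frame = predict(weights, frames[-1])  # cheap black-box use thereafter
```

Under this picture, an ML assistant analyzing the system would be analyzing the weights, not re-running the model-finding process.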
Is this paragraph saying that the learned model does not have a stored understanding of physics and current-Earth, but instead re-deduces all of this every time it is run? And that this is why the ML assistant isn't able to analyze its model of physics plus current-Earth?