At its core, it is an attempt to draw a low-dimensional map of meaning.
How I’m thinking about semantic manifolds in semantic spaces doesn’t seem well represented by “attempting to draw a low-dimensional map of meaning”.
I’m sorry, but I’m having trouble connecting with what you’re saying. It seems you are talking about some group of people’s attempt to understand neural networks. I think it would be helpful if you stated your assumptions about that group’s assumptions, because I don’t think I share them and don’t know what they are.
In particular, “You cannot morph cat into dog through valid states”, and “Make a table of ~14 sublayers of a transformer and note if the manifold is valid.” seem like meaningless statements to me because it’s unclear what “valid” would mean in this context.