Yeah, you were one of the “couple other people” I alluded to. The other was Tan Zhi-Xuan (if I was understanding her correctly during our most recent (very brief) conversation).
🤔 I wonder if I should talk with Tan Zhi-Xuan.
I think I know what you’re referring to, but I’m not 100% sure, and other people reading this probably won’t. Can you provide a link? Thanks.
I got the phrase “ontological lock” from adamShimi’s post here, but it only comes up very briefly, so it isn’t very helpful for understanding what I mean; I’m also sort of assuming that adamShimi meant the same thing I did. 😅 I’m not sure if it’s a term used elsewhere.
What I mean is forcing the AI to have a specific ontology (e.g., objects embedded in 3D space), so that you can directly programmatically interface with the AI’s ontology, rather than having to statistically train an interface (which would run into problems like distribution shift).
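To make that concrete, here’s a toy sketch (all names here are invented for illustration; this is just the flavor of the “lock”, not a real technique): the ontology is fixed at design time, so queries against it are ordinary code rather than a trained probe.

```python
# Hypothetical sketch: the AI's world model is *required* to consist of
# objects with positions in 3D space. Because the schema is fixed,
# interfacing with it is plain code, not a statistically trained probe.
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class WorldObject:
    name: str
    position: Tuple[float, float, float]  # embedded in 3D space by construction

class LockedWorldModel:
    """World model constrained to the fixed ontology above."""

    def __init__(self) -> None:
        self.objects: Dict[str, WorldObject] = {}

    def add(self, obj: WorldObject) -> None:
        self.objects[obj.name] = obj

    def distance(self, a: str, b: str) -> float:
        # Direct programmatic query: no learned "translation layer",
        # hence no distribution-shift worries about that layer.
        pa, pb = self.objects[a].position, self.objects[b].position
        return sum((x - y) ** 2 for x, y in zip(pa, pb)) ** 0.5

world = LockedWorldModel()
world.add(WorldObject("cup", (0.0, 0.0, 0.0)))
world.add(WorldObject("table", (3.0, 4.0, 0.0)))
print(world.distance("cup", "table"))  # → 5.0
```

The contrast is with the usual situation, where the model’s internal representations are learned and unconstrained, so any interface to them (e.g. a probe) has to be trained statistically and can break off-distribution.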