I can see this being automated, given the visual capabilities of the latest models plus a healthy dose of input from existing practitioners. Do detailed teardowns of products across many different industries, with images of each component and subassembly and detailed descriptions of those images: what the parts are, what they're made of, how they were made, what purpose the assembly serves, and what purpose its various features serve. That would start to create the textual training data that lets models generate such information themselves in the opposite direction. And in fact this closely resembles how mechanical engineers often build up experience (along with designing things, building them, and seeing why they don't work the way they thought they would).
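To make the idea concrete, here's a minimal sketch of what one annotated teardown record might look like as structured data. All the names and fields are illustrative assumptions, not from any existing dataset or tool:

```python
from dataclasses import dataclass, field

# Hypothetical schema for one annotated component in a teardown.
@dataclass
class ComponentAnnotation:
    image_path: str    # photo of the component or subassembly
    part_name: str     # what the part is
    material: str      # what it's made of
    process: str       # how it was made
    purpose: str       # what it does in the assembly
    # feature -> why that feature exists
    feature_notes: dict = field(default_factory=dict)

# A whole teardown is just a product plus its annotated components.
@dataclass
class Teardown:
    product: str
    industry: str
    components: list = field(default_factory=list)

# Example record a practitioner might produce:
td = Teardown(product="cordless drill", industry="power tools")
td.components.append(ComponentAnnotation(
    image_path="img/gearbox.jpg",
    part_name="planetary gearbox ring gear",
    material="glass-filled nylon",
    process="injection molding",
    purpose="reaction member for the planetary reduction stage",
    feature_notes={"radial ribs": "stiffen the ring against torque reaction"},
))
```

Records like this pair each image with exactly the kind of textual description the comment proposes, so they could serve directly as image–text training pairs.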