I saw an apparently relevant video about AI-generated music that claimed to be able to detect it by splitting a song into its constituent tracks. It turns out that the tools for doing this (which themselves use AI) work well on human music that was actually created by mixing individual tracks, but badly on AI-generated music: when you listen to the separated tracks, they are obviously "wrong". This is presumably because current AI does not create music by building it up from individual tracks (although it clearly could be made to do so); instead it synthesises the whole thing at once. AI images appear to be similar in that they are not built up from individual components, like fingers.
This suggests that a better way to identify AI images might be to have software locate the skeletal joints in an image and check whether they can be mapped onto a model of an actual human skeleton without distortion.
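A minimal sketch of that joint-consistency check, assuming we already have 2D keypoints from some pose estimator: the keypoint names, coordinates, and tolerance below are all illustrative, not part of any real detector. It flags a pose when left/right bone pairs that should be near-equal in length differ too much.

```python
import math

# Paired limb segments that should be near-equal in length on a real
# human skeleton (keypoint names are hypothetical, for illustration).
SYMMETRIC_PAIRS = [
    (("l_shoulder", "l_elbow"), ("r_shoulder", "r_elbow")),  # upper arms
    (("l_elbow", "l_wrist"), ("r_elbow", "r_wrist")),        # forearms
]

def seg_len(keypoints, a, b):
    """Euclidean length of the segment between two named 2D keypoints."""
    (x1, y1), (x2, y2) = keypoints[a], keypoints[b]
    return math.hypot(x2 - x1, y2 - y1)

def looks_distorted(keypoints, tolerance=0.25):
    """Flag the pose if any symmetric bone pair differs by more than
    `tolerance` as a fraction of the longer bone. This ignores
    perspective foreshortening, so it is a crude screen, not a detector."""
    for (a1, b1), (a2, b2) in SYMMETRIC_PAIRS:
        l1 = seg_len(keypoints, a1, b1)
        l2 = seg_len(keypoints, a2, b2)
        if abs(l1 - l2) / max(l1, l2) > tolerance:
            return True
    return False
```

A real system would need to handle foreshortening (a forearm pointing at the camera legitimately looks short in 2D), so in practice the check would be done against a 3D skeleton model, as the text suggests; this 2D version only conveys the shape of the idea.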