Oooh, I tend to get these quite often, lessee if I can remember any that's actually workable...
I had this idea for a narrow-AI experiment where you have two populations of algorithms, of many different and unrelated types, in a predator-prey style arms race: one side tries to forge false sensory data (for example images or snippets of music), and the other tries to distinguish those forgeries from human- or nature-supplied data, with the first group scored on how well it fools the second. That's the basic idea. If anyone would actually be interested in trying it out, I've thought a bunch more about the details of how to implement it, possible problems, and small further tweaks that could make it work even better than the raw version.
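Just to make the arms-race loop concrete, here's a toy sketch of the kind of thing I mean — none of the specifics are essential. The "sensory data" is just 1-D Gaussian samples around a fixed mean, a forger's genome is the mean of its own Gaussian, and a detector's genome is an interval (centre, width) that it calls "real". Each generation, forgers are scored on how often they fool the detectors, detectors on their real-vs-fake accuracy, and the fitter half of each population survives and spawns mutated copies:

```python
import random

random.seed(0)

REAL_MEAN = 3.0          # centre of the "nature-supplied" distribution (toy stand-in for real data)
POP, GENS, BATCH = 10, 40, 20

def real_batch():
    """Samples of genuine data."""
    return [random.gauss(REAL_MEAN, 1.0) for _ in range(BATCH)]

def fake_batch(mean):
    """Forgeries produced by a forger whose genome is `mean`."""
    return [random.gauss(mean, 1.0) for _ in range(BATCH)]

def looks_real(det, x):
    """A detector (centre, width) accepts x as real if it falls in its interval."""
    centre, width = det
    return abs(x - centre) <= width

def forger_fitness(mean, detectors):
    """Fraction of (detector, sample) pairs this forger fools."""
    fakes = fake_batch(mean)
    hits = sum(looks_real(d, x) for d in detectors for x in fakes)
    return hits / (len(detectors) * len(fakes))

def detector_fitness(det, forgers):
    """Classification accuracy on a mixed batch of real and forged samples."""
    reals = real_batch()
    fakes = [x for m in forgers for x in fake_batch(m)[:2]]
    correct = sum(looks_real(det, x) for x in reals)
    correct += sum(not looks_real(det, x) for x in fakes)
    return correct / (len(reals) + len(fakes))

def evolve(pop, fitness, mutate):
    """Keep the fitter half of the population, refill with mutated copies."""
    survivors = sorted(pop, key=fitness, reverse=True)[: len(pop) // 2]
    return survivors + [mutate(g) for g in survivors]

forgers = [random.uniform(-5.0, 5.0) for _ in range(POP)]
detectors = [(random.uniform(-5.0, 5.0), random.uniform(0.5, 3.0))
             for _ in range(POP)]

for _ in range(GENS):
    # Each side adapts against the other's current population.
    forgers = evolve(forgers, lambda m: forger_fitness(m, detectors),
                     lambda m: m + random.gauss(0.0, 0.3))
    detectors = evolve(detectors, lambda d: detector_fitness(d, forgers),
                       lambda d: (d[0] + random.gauss(0.0, 0.3),
                                  max(0.1, d[1] + random.gauss(0.0, 0.3))))
```

With real images or audio the representations and scoring would obviously be much richer, but the structure — two populations, opposed fitness functions, alternating evolution — is the whole trick; the detectors' intervals get pulled toward the real data by their accuracy score, which in turn drags the forgers toward producing realistic samples.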