I feel iffy about negative reinforcement still being widely used in AI. Both human behavior experts (child-rearing) and animal behavior experts seem to have largely moved away from it as ineffective, finding it tends to produce unwanted behavior down the line.
People often use the term “negative reinforcement” to mean something like punishment, where a teacher or trainer inflicts pain or uncomfortable deprivation on the individual being trained. Is this the sort of thing you mean? Is there anything analogous to pain or deprivation in AI training?
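On the question of an analog: in reinforcement learning there is nothing like pain, but a negative reward scalar does play the functional role of punishment, lowering the estimated value of the penalized action so the agent chooses it less often. A minimal sketch of that mechanism (a toy tabular Q-learning update; the names and setup are hypothetical, not any specific AI training pipeline):

```python
# Toy single-step tabular Q-learning: the closest analog to "punishment"
# in RL is a negative reward, which lowers an action's estimated value.
def q_update(q, state, action, reward, alpha=0.5):
    """One Q-learning step for a single-step task (no future-value term)."""
    key = (state, action)
    q[key] = q.get(key, 0.0) + alpha * (reward - q.get(key, 0.0))
    return q

q = {}
for _ in range(20):
    q = q_update(q, "s", "a", +1.0)  # "reward" action a
    q = q_update(q, "s", "b", -1.0)  # "punish" action b with negative reward

# After training, the agent's greedy choice avoids the punished action.
best = max(["a", "b"], key=lambda act: q[("s", act)])
print(best, round(q[("s", "a")], 2), round(q[("s", "b")], 2))  # → a 1.0 -1.0
```

Whether this counts as the kind of punishment the behavioral literature warns about is exactly the open question: the number is negative, but there is no experience of deprivation attached to it.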