Basics of Animal Reinforcement
Behaviorism historically began with Pavlov’s studies of classical conditioning. When dogs see food they naturally salivate. When Pavlov rang a bell before giving the dogs food, the dogs learned to associate the bell with the food and salivated when they merely heard the bell. When Pavlov rang the bell a few times without providing food, the dogs stopped salivating, but when he added the food again it took only a single trial before the dogs “remembered” their previously conditioned salivation response1.
So much for classical conditioning. The real excitement starts with operant conditioning. Classical conditioning can only activate reflexive actions like salivation or sexual arousal; operant conditioning can produce entirely new behaviors, and it is the form most associated with the idea of “reinforcement learning”.
Serious research into operant conditioning began with B.F. Skinner’s work on pigeons. Stick a pigeon in a box with a lever and some associated machinery (a “Skinner box”2). The pigeon wanders around, does various things, and eventually hits the lever. Delicious sugar water squirts out. The pigeon continues wandering about and eventually hits the lever again. Another squirt of delicious sugar water. Eventually it percolates into its tiny pigeon brain that maybe pushing this lever makes sugar water squirt out. It starts pushing the lever more and more, each push continuing to convince it that yes, this is a good idea.
Consider a second, less lucky pigeon. It, too, wanders about in a box and eventually finds a lever. It pushes the lever and gets an electric shock. Eh, maybe it was a fluke. It pushes the lever again and gets another electric shock. It starts thinking “Maybe I should stop pressing that lever.” The pigeon continues wandering about the box doing anything and everything other than pushing the shock lever.
The basic concept of operant conditioning is that an animal will repeat behaviors that give it reward, but avoid behaviors that give it punishment3.
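To make that concept concrete, here is a toy simulation of a lever-pressing pigeon. Everything in it is my own illustration, not anything Skinner actually ran: two available behaviors, a reward that strengthens whichever behavior produced it, and propensities that determine how often each behavior is chosen.

```python
import random

def simulate_operant_conditioning(trials=1000, seed=0):
    """Toy pigeon: a rewarded behavior and a neutral one.
    Each behavior's propensity is nudged by its outcome."""
    rng = random.Random(seed)
    propensity = {"press_lever": 1.0, "wander": 1.0}
    outcome = {"press_lever": +1.0, "wander": 0.0}  # sugar water vs. nothing
    for _ in range(trials):
        total = sum(propensity.values())
        action = ("press_lever"
                  if rng.random() < propensity["press_lever"] / total
                  else "wander")
        # reward strengthens the behavior; a punishment (negative outcome)
        # would weaken it, floored so the pigeon never fully stops exploring
        propensity[action] = max(0.1, propensity[action] + 0.1 * outcome[action])
    return propensity

p = simulate_operant_conditioning()
```

After a thousand trials, `p["press_lever"]` has grown well past `p["wander"]`: each squirt of sugar water makes the next lever press more likely, which is the self-reinforcing loop described above. Swap the `+1.0` for a `-1.0` and you get the second, shocked pigeon instead.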
Skinner distinguished between primary reinforcers and secondary reinforcers. A primary reinforcer is hard-coded: for example, food and sex are hard-coded rewards, pain and loud noises are hard-coded punishments. A primary reinforcer can be linked to a secondary reinforcer by classical conditioning. For example, if a clicker is clicked just before giving a dog a treat, the clicker itself will eventually become a way to reward the dog (as long as you don’t use the unpaired clicker long enough for the conditioning to suffer extinction!).
Probably Skinner’s most famous work on operant conditioning was his study of reinforcement schedules: that is, if pushing the lever only gives you reward some of the time, how obsessed will you become with pushing the lever?
Consider two basic types of reward schedule: interval, in which pushing the lever gives a reward only once every t seconds, and ratio, in which pushing the lever gives a reward only once every x presses.
Put a pigeon in a box with a lever programmed to give rewards only once an hour, and the pigeon will wise up pretty quickly. It may not have a perfect biological clock, but after somewhere around an hour, it will start pressing until it gets the reward and then give up for another hour or so. If it doesn’t get its reward after an hour, the behavior will become extinct pretty quickly; the pigeon realizes the deal is off.
Put a pigeon in a box with a lever programmed to give one reward every one hundred presses, and again it will wise up. It will start pressing the lever faster when the reward is close (pigeons are better counters than you’d think!) and ease off after it obtains the reward. Again, if it doesn’t get its reward after about a hundred presses, the behavior will become extinct pretty quickly.
To these two basic schedules of fixed reinforcement, Skinner added variable reinforcement: essentially the same but with a random factor built in. Instead of giving a reward once an hour, the pigeon may get a reward in a randomly chosen time between 30 and 90 minutes. Or instead of giving a reward every hundred presses, it might take somewhere between 50 and 150.
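The mechanics of all four schedules can be sketched in a few lines of code. This is my own toy rendering, not anything from the behaviorist literature; the specific numbers (one hour, one hundred presses, 30–90 minutes, 50–150 presses) just mirror the examples above.

```python
import random

def make_schedule(kind, rng=None):
    """Return a press(t) -> bool function implementing one of the four
    Skinner schedules; t is the time of the press, in seconds."""
    rng = rng or random.Random(0)
    if kind == "fixed_interval":       # first press after each hour pays off
        state = {"next": 3600}
        def press(t):
            if t >= state["next"]:
                state["next"] = t + 3600
                return True
            return False
    elif kind == "variable_interval":  # next payoff 30-90 minutes away
        state = {"next": rng.uniform(1800, 5400)}
        def press(t):
            if t >= state["next"]:
                state["next"] = t + rng.uniform(1800, 5400)
                return True
            return False
    elif kind == "fixed_ratio":        # every 100th press pays off
        state = {"count": 0}
        def press(t):
            state["count"] += 1
            if state["count"] >= 100:
                state["count"] = 0
                return True
            return False
    elif kind == "variable_ratio":     # payoff after a random 50-150 presses
        state = {"count": 0, "target": rng.randint(50, 150)}
        def press(t):
            state["count"] += 1
            if state["count"] >= state["target"]:
                state["count"] = 0
                state["target"] = rng.randint(50, 150)
                return True
            return False
    else:
        raise ValueError(kind)
    return press
```

For example, `make_schedule("fixed_ratio")` pays out exactly three times over 300 presses, while the variable-ratio version pays out an unpredictable number of times — which is exactly why, as described next, the pigeon on the variable schedule can never tell when the rewards have stopped.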
Put a pigeon in a box on a variable interval schedule, and you’ll get constant lever presses and good resistance to extinction.
Put a pigeon in a box with a variable ratio schedule and you get a situation one of my professors unscientifically but accurately described as “pure evil”. The pigeon will become obsessed with pressing the lever as much as possible; after a while you can stop giving rewards entirely, and the pigeon will never wise up.
Skinner was not the first person to place an animal in front of a lever that delivered reinforcement based on a variable ratio schedule. That honor goes to Charles Fey, inventor of the slot machine.
So it looks like some of this stuff has relevance for humans as well4. Tomorrow: more freshman psychology lecture material. Hooray!
1. Of course, it’s not really psychology unless you can think of an unethical yet hilarious application, so I refer you to Plaud and Martini’s study, in which slides of erotic stimuli (naked women) were paired with slides of non-erotic stimuli (penny jars) to give male experimental subjects a penny jar fetish. This supports a theory that chance pairing of sexual and non-sexual stimuli explains normal fetish formation.
3. In technical literature, behaviorists actually use four terms: positive reinforcement, positive punishment, negative reinforcement, and negative punishment. This is really confusing: “negative reinforcement” is actually a type of reward; behavior like going near wasps is “punished” even though we usually use “punishment” to mean deliberate human action; and all four terms can be summed up under the category “reinforcement”, even though “reinforcement” is also sometimes used to mean “reward as opposed to punishment”. I’m going to try to simplify things here by using “positive reinforcement” as a synonym for “reward” and “negative reinforcement” as a synonym for “punishment”, the same way the rest of the non-academic world does.
4. Also relevant: checking HP:MoR for updates is variable interval reinforcement. You never know when an update’s coming, but it doesn’t come faster the more times you reload fanfiction.net. As predicted, even when Eliezer goes weeks without updating, the behavior persists.