Well, I don't post much, but I browse occasionally, and I thought it was interesting that Joel is actually a marine mammal trainer yet seems to be using the clicker in a way that other marine mammal trainers don't, or any animal trainers for that matter, as far as I have come to know, at any rate. It should be clarified, simply so that people learning don't get more confused.
A few concepts I felt like throwing out, even though some of them have already been thrown out; I thought I'd word them in yet another way. My take on the subject:
It is really, really important to keep in mind that reinforcement is what drives behavior. Cues do not.
Operant conditioning, Angelique, is indeed in everything. It's in the air we breathe. LOL. It is how all organisms with a brain learn and how behavior is modified. It is the only way behavior is modified. Even the little fishies in my aquarium have learned that most times when I approach the aquarium, food is on its way; they all swim rapidly to the surface in anticipation. There is a conditioned reflex in those little critters. It is, very simply, consequences for any action: do this and this happens; do that and that happens. All behavior has consequences.
Clicker training (using a clicker), as the vast majority of "trained" trainers utilize it, is based on a Pavlovian response. It is classical conditioning, as opposed to operant conditioning. Classical conditioning creates a conditioned reinforcer, a conditioned reflex in the animal that gives value to an otherwise neutral thing with no inherent value of its own. The click comes to represent something of value to the animal because it is tied, through association (the Pavlovian response), to something really wonderful: a primary reinforcer. The value of the click will be lost if it is not paired with that primary reinforcer (the meat or tug game or whatever). You might get away with clicking and not treating a few times, but after some time, the association will be lost. This is one of those laws of behavior.
This bridging of a conditioned reinforcer (the clicker) to a primary reinforcer (the treat) is only necessary when the animal is learning a new behavior. Once they have enough history of being reinforced, they no longer need the communication that what they just did earned them the reward; they already know. The cue (introduced a little later) has by this time been tied to the behavior and is used to elicit it. The reinforcement history is enough to propel them to do it again. The clicker is then dropped and not used for a behavior that is being performed quite regularly.
THEN the behavior is put on a variable reinforcement schedule. The clicker is not used in conjunction with a variable reinforcement schedule, Joel, because the clicker, as was mentioned, is only needed at the beginning of learning a new behavior, something unfamiliar to the animal. And the variable reinforcement schedule, conversely, is not used at the beginning, because reward must be on a continuous schedule at first (if you want the most bang for your buck) so the animal can rule out any other incidental behaviors he may be doing at the same time and eventually stop guessing at which behavior you're targeting. This consistent, rapid, high rate of reinforcement is what separates the wheat from the chaff, so to speak, for the animal.
Only when he is "getting it" consistently is a variable schedule used, to keep the animal trying harder, keep him interested and thus strengthen the behavior. In other words, a clicker as a marker and a variable schedule of reinforcement are opposed in usage: incompatible, or irrelevant to each other, however one wants to word it.
Clicking without a reward following will make the clicker lose its meaning. This is behavioral law, and it is what Pavlov discovered with the bell he rang before feeding his dogs. When he rang the bell but stopped following it with food, after a time they stopped salivating at the sound as they had before, when the conditioned reflex to the bell was intact.
Duration training:
There is a no-reward marker (NRM), a reward marker, and something that some trainers use, myself included, called a "keep on going" signal. That last one is just to let them know verbally that they're on the right track, an encouragement type of thing. But the big reward needs to be reserved for the degree of the behavior you want, regardless of what increments you use (for example, baby steps toward straighter sits or faster recalls). The same goes for the reward marker and the NRM. An NRM isn't simply an "eh eh" or a "too bad." They have to know that they WOULD have gotten a treat but didn't, and they don't necessarily make that connection by words alone. To condition or prime them to the NRM, you can hold a treat in your fist and swish it past them, giving your verbal just before. Then, when they hear that NRM, they come to associate it with disappointment. Same thing with the reward marker: it needs to be connected strongly with a consistent consequence, a reward that is actually a reinforcer to the dog.
As was said, you can delay the click in order to develop duration, one second more at a time, but still follow with a reinforcer. If the dog is unable to "get" the duration of something because he's losing focus, then instead of clicking without a reinforcer to follow, try reinforcing more frequently for much tinier responses or baby steps, and then, more gradually than you have been, add a second at a time. Don't expect 10 or even 5 seconds more all at once. That way you avoid the risk of losing the value of the very effective, precise communication tool that classical conditioning provides.
I would think that any animal trainer would find it advantageous to utilize the laws of behaviorism rather than sloppy guesswork. We can all speculate and label dogs as "dominant" or "stubborn" any way we want. We can imagine what they're thinking, what we think they think we're thinking, what kind of hierarchy (if any) they have, and what kind of energy and attitude we all have. But the truth is that none of that has been demonstrated; the only thing that has been proven to work consistently and across the board is the laws of learning, because they are animals capable of learning.