Another honorable mention is Chaser, a border collie trained and tested by John Pilley et al. with a language design and testing regime specifically aimed at making Clever Hans criticisms impossible, and at making certain grammar-recognition abilities statistically distinguishable from alternative ways of solving the linguistic challenge that “don’t seem like they are doing language learning right”… What if a dog learns “fetchblue” as a single sound, with no breakdown into a coherent verb for an action and a coherent object named “blue”? What if, in the dog’s head, “fetchblue” is just a name for a scene that includes the object and the actions normally done with it, in a giant swirl? Well...
They taught Chaser more than 1,000 proper names, and at least 3 verbs, and did tests with her in front of audiences using objects that (so far as they could tell from their notes?) hadn’t been paired with the tested verb before.
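To make the statistical logic of novel-pairing tests concrete, here is a toy simulation of my own construction (not from Pilley’s paper, and the numbers are illustrative): a compositional learner that decodes verb and object separately should succeed on pairings it has never heard, while a rote learner that memorized “fetchblue” as one sound should perform at chance on them.

```javascript
// Toy model, my own construction: contrast two hypotheses about what
// the dog learned, measured only on verb+object pairings absent from training.
function simulate(trials, chanceActions) {
  let compositionalHits = 0;
  let holisticHits = 0;
  for (let i = 0; i < trials; i++) {
    // Compositional learner: parses verb and object independently,
    // so a novel pairing is still understood. Always correct (idealized).
    compositionalHits += 1;
    // Holistic learner: "fetchblue" was one memorized sound, so a
    // never-heard pairing forces a guess among the possible actions.
    if (Math.random() < 1 / chanceActions) holisticHits += 1;
  }
  return {
    compositional: compositionalHits / trials,
    holistic: holisticHits / trials,
  };
}

// With 3 possible verbs, chance is ~1/3; over enough novel-pairing
// trials the two hypotheses separate cleanly.
const result = simulate(1000, 3);
```

Real dogs are noisier than the idealized compositional learner here, but the point survives: above-chance performance on pairings that never occurred in training is evidence against the “one big memorized sound” story.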
Here’s the kind of performance they could do, for video cameras, as supplementary material for the 2010 paper:
In the paper, they also claim something I’d previously thought impossible for dogs: learning to hear a common noun as a reference to a category of objects.
“Toys” were all the >1000 things Chaser knew the names of and had a right to play with because they were hers. The >100 “balls” were “toys” that were round and obvious to a human as balls. The >20 “Frisbees” were non-“ball” “toys” that were disc-shaped, and so on. Chaser seems to have learned to “fetch ball” even when the ball was weird, or named recently, or named long ago, or far from her, or near to her.
(THEY DID NOT show in the paper that she could generalize this to novel un-named (non-“Toy”?) balls or frisbees. I’m not sure if the barrier was “they tried to teach and failed” or “they didn’t have time to teach” or “they didn’t think of it at all” or “Chaser got too old to learn quickly before that part of the curriculum happened” or what.)
The researchers seem to have been imagining objections to the data and processes (and had read objections to previous iterations) and trying to address them. I think dogs can probably be taught the concept and pragmatic linguistic uses of a common noun now (at least for recognition, if not for production), and I did not believe this before reading the paper about Chaser.
I’m not saying that Chaser had the concept of a noun from scratch, however. The methods section of the paper sounded a lot to me like they used a combo of really empathically effective dog-training techniques plus something more or less like Explicit Direct Instruction on “the idea of a common noun”.
What seems to have happened is that Chaser was given every possible opportunity and encouragement to have the insight that the word “frisbee” referred to her >20 named frisbees… and then she DID have the insight.
Thank you for the info, I was not aware of Chaser! By the way, how did you do the YouTube embeds? I couldn’t make them work in my article.
I put a raw URL in the raw text, and javascript rewrite magic happened to it and made it an embedded video.
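For the curious, the general shape of that “javascript rewrite magic” can be sketched as follows. This is purely illustrative and NOT LessWrong’s actual editor code: a client-side pass that spots a bare YouTube URL on its own line and swaps it for an embedded player.

```javascript
// Illustrative sketch only (not LessWrong's real implementation):
// rewrite any line that is just a bare YouTube URL into an <iframe> embed.
// Matches both youtube.com/watch?v=ID and youtu.be/ID forms; video IDs
// are 11 characters from [A-Za-z0-9_-].
const YOUTUBE_URL =
  /^https?:\/\/(?:www\.)?(?:youtube\.com\/watch\?v=|youtu\.be\/)([\w-]{11})\s*$/;

function rewriteYoutubeLinks(commentText) {
  return commentText
    .split("\n")
    .map((line) => {
      const match = line.match(YOUTUBE_URL);
      if (!match) return line; // leave ordinary text alone
      const videoId = match[1];
      return `<iframe src="https://www.youtube.com/embed/${videoId}" allowfullscreen></iframe>`;
    })
    .join("\n");
}
```

A real editor would do this on the rendered DOM (or at parse time in the markdown pipeline) rather than on raw strings, which is exactly why the behavior can differ between the comment editor and the article editor.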
I was strongly tempted to hit ^z on the rewrite because, in my experience, embedded stuff changes over time and thus makes the writing “not able to persist in archives for the ages”, but… :shrugs:
I’m not surprised that the article didn’t have it. LessWrong has had the issue that “comment markdown stuff and article markdown stuff work differently” essentially forever.
I guess another possibility: maybe the LW forum software devs changed the WYSIWYG javascript editor(s) to work the same, but you have a different browser than me, or than the one they tested on?
If you respond with a raw youtube URL, and don’t get an embedded video based on a javascript rewrite of the comment, that would help clarify what might be going on :-)
Yep, sorry, we don’t currently support Youtube embeds for the markdown editor. Just turned out to be much easier to implement for the WYSIWYG editor, since it came out of the box with the framework we are using.