[link] Baidu cheats in an AI contest in order to gain a 0.24% advantage

Some of you may already have seen this story, since it’s several days old, but MIT Technology Review seems to have the best explanation of what happened: Why and How Baidu Cheated an Artificial Intelligence Test

Such is the success of deep learning on this particular test that even a small advantage could make a difference. Baidu had reported it achieved an error rate of only 4.58 percent, beating the previous best of 4.82 percent, reported by Google in March. In fact, some experts have noted that the small margins of victory in the race to get better on this particular test make it increasingly meaningless. That Baidu and others continue to trumpet their results all the same—and may even be willing to break the rules—suggests that being the best at machine learning matters to them very much indeed.
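For a sense of scale, it is worth separating the absolute margin from the relative one. Using the two error rates quoted above, a quick calculation (a sketch; the figure names are mine) shows the headline "0.24%" is 0.24 percentage points of absolute error, or roughly a 5 percent relative reduction in the remaining error:

```python
# Error rates reported on this benchmark, as quoted in the article
baidu_error = 4.58   # percent
google_error = 4.82  # percent

# Absolute margin, in percentage points
absolute_gain = google_error - baidu_error

# Relative reduction in the remaining error, in percent
relative_gain = absolute_gain / google_error * 100

print(f"absolute margin: {absolute_gain:.2f} points")
print(f"relative error reduction: {relative_gain:.1f}%")
```

Either way of framing it, the margin is small enough that the experts' complaint about diminishing meaningfulness is easy to understand.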

(In case you didn’t know, Baidu is the largest search engine in China, with a market cap of $72B, compared to Google’s $370B.)

The problem I see here is that the mainstream AI / machine learning community measures progress mainly by this kind of contest. Researchers are incentivized to use whatever method they can find or invent to gain a few tenths of a percent in some contest, which allows them to claim progress at an AI task and publish a paper. Even as the AI safety / control / Friendliness field gets more attention and funding, it seems easy to foresee a future where mainstream AI researchers continue to ignore such work because it does not contribute to the tenths of a percent that they are seeking but instead can only hinder their efforts. What can be done to change this?