Amazon’s Mechanical Turk (MTurk) is a crowd-working platform that pays people small sums to take part in menial tasks, such as tagging photos or filling out forms. Essentially it is a way to get humans to perform robotic jobs that machines can’t yet manage – but now the bots are getting their revenge by taking on the tasks themselves.
That is a problem, because MTurk is widely used by scientists as a cheap way to carry out research. University of Minnesota social psychologist Hui Bai was using it to collect data on the perception of far-right movements when he noticed a massive spike in support for groups including the KKK and the Nazi party – and it seemed to be coming from an army of bots.
Digging deeper into the data, he discovered that a number of responses to a detailed open-ended question in his survey didn’t answer it at all. Instead, they simply said “Very good” or “Very nice”.
Bai also discovered that around half of his sample of 578 respondents shared a GPS location with at least one other person. Around 50 were supposedly logging on from a statue in Buffalo, New York. A handful of others appeared to have taken the survey from the middle of a lake in Kansas.
These strange locations are a tell-tale sign of bots taking the survey, says Bai. “I was wondering, what is going on?”
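The tells Bai describes lend themselves to simple automated screening. The sketch below, a hypothetical illustration with made-up data rather than Bai’s actual dataset or method, flags respondents who share exact GPS coordinates with someone else or who give boilerplate answers like “Very good”:

```python
from collections import Counter

# Hypothetical survey records: (respondent_id, latitude, longitude, answer).
# IDs, coordinates and answers are invented for illustration.
responses = [
    ("r1", 42.8864, -78.8784, "Very good"),
    ("r2", 42.8864, -78.8784, "Very nice"),
    ("r3", 38.5000, -98.0000, "Very good"),
    ("r4", 40.7128, -74.0060, "I think the movement is harmful because ..."),
]

# Canned phrases that suggest a bot rather than a considered answer.
BOILERPLATE = {"very good", "very nice"}

def flag_suspects(records):
    """Return IDs of respondents who share a GPS fix or give a canned answer."""
    coord_counts = Counter((lat, lon) for _, lat, lon, _ in records)
    suspects = []
    for rid, lat, lon, answer in records:
        shared_location = coord_counts[(lat, lon)] > 1
        boilerplate = answer.strip().lower() in BOILERPLATE
        if shared_location or boilerplate:
            suspects.append(rid)
    return suspects

print(flag_suspects(responses))  # r1 and r2 share a location; r3 is boilerplate
```

Exact coordinate matches are a crude heuristic: two genuine respondents on the same campus Wi-Fi could collide, so in practice such flags would be one signal among several rather than grounds for automatic exclusion.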
Bot or not?
He’s not the only one. In a separate analysis conducted last year, Erin Buchanan at Missouri State University and John Scofield at the University of Missouri found bots completing MTurk tasks around 2 per cent of the time.
We already know that gathering scientific data with MTurk can cause problems. Previous research indicates that between 14 and 18 per cent of responses to MTurk surveys are fraudulent in some way. “It makes it harder to see what actually happens in your data,” says Buchanan. Adding bots to the mix will only make things worse.
Bai has set up an online survey for fellow researchers to report any anomalies in their data that may be the work of bots. He has received around two dozen replies so far, suggesting that the use of MTurk bots has grown.
“Three months ago, no more than five or 10 percent of the total subject base were suspected bots,” Bai says. “Now half my participants aren’t human.”
Kurt Gray at the University of North Carolina, Chapel Hill is the editor of a psychology journal, and he estimates that at least half of the papers he reviews include data from MTurk.
“It’s worrying you’re not observing real people making real decisions,” he says. “We’re in the business of determining how and why people act, and if we’re not looking at people, we’re not doing our jobs.”
Amazon did not respond to a request for comment from New Scientist.