Facebook recently announced that they are hiring 3,000 people (on top of an existing 4,500) to review images, videos, and posts for inappropriate content. From Popular Science:
The scale of this labor is vast: Facebook is hiring more people than work in the combined newsrooms of the New York Times, the Wall Street Journal, and the Washington Post. Facebook isn’t saying at this time if the jobs will be employees or contractors, and if they’ll be based in the United States or abroad.
Similar stories have popped up recently. YouTube ran into trouble when advertisers discovered that some of their ads were being placed alongside highly objectionable videos and pulled them in response. From Recode:
Google has been scrambling to react over the past few weeks, as newspapers like the Times of London, the Guardian and the Wall Street Journal pointed out ads running next to videos from hate groups and other extremists. Those reports prompted big brands like AT&T and Verizon to pull their ads from YouTube.
Interestingly, in both cases, the companies (Facebook and Google) claim, I believe accurately, that the problem is very small. Again, from Recode:
A top Google executive says the company’s YouTube ad controversy — where big brands have discovered that some of their ads have run next to videos promoting hate and terror — is overblown.
But he says Google is making progress at fixing it anyway.
“It has always been a small problem,” with “very very very small numbers” of ads running against videos that aren’t “brand safe,” said Google’s chief business officer, Philipp Schindler. “And over the last few weeks, someone has decided to put a bit more of a spotlight on the problem.”
It’s interesting that Google’s attitude is that this is a problem they didn’t really have to fix, but that they’re going to fix it anyway, seemingly out of the goodness of their own hearts.
I think the reality is quite different. Machine learning and AI are increasingly being placed in situations where the cases in which the models don’t work, however infrequent, nevertheless have a huge impact. Think self-driving cars, for example. The common denominator is that even though each of these problems happened at a comically small scale, somehow the entire world ended up knowing about it. And if the entire world knows about it, how can this be a small problem for the company? No matter how good Google’s or Facebook’s algorithms get, if even one person sees a failure, they can post a screenshot for the entire world to see.
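To make the scale issue concrete, here is a back-of-the-envelope sketch with purely hypothetical numbers (neither Google nor Facebook publishes these figures): even a vanishingly small error rate, multiplied by the volume these platforms handle, leaves a large absolute number of failures for someone to screenshot.

```python
# Hypothetical, illustrative numbers only -- not actual platform figures.
daily_items = 1_000_000_000   # assume ~1 billion pieces of content screened per day
error_rate = 0.0001           # assume the model misses 0.01% of objectionable items

failures_per_day = daily_items * error_rate
print(f"Objectionable items slipping through per day: {failures_per_day:,.0f}")
# -> Objectionable items slipping through per day: 100,000
```

At that scale, “very very very small numbers” in relative terms can still mean a steady stream of visible mistakes in absolute terms.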
Let’s be clear: this problem for Google, Facebook, and others is ultimately a problem of their own making.
Right now, it’s humans who take over when the machines fail. I suppose the goal will be to optimize the algorithms to the point where the need for humans to take over is relatively contained and the frequency with which these failures occur is acceptably low. But I don’t know if this is possible.
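As a rough sketch of that division of labor, and only under the assumption that the moderation model exposes a confidence score, the routing might look something like this (the function names and threshold values are hypothetical, not any real API):

```python
# A minimal sketch, assuming the moderation model returns a probability that
# an item is objectionable. All function names and thresholds are hypothetical.

AUTO_REMOVE_THRESHOLD = 0.95  # assumed: confident enough to act automatically
AUTO_ALLOW_THRESHOLD = 0.05   # assumed: confident enough to leave the item up

def route(item, classify, remove, allow, send_to_human_review):
    """Act automatically when the model is confident; otherwise hand the
    item to a human reviewer."""
    score = classify(item)  # model's probability that the item is objectionable
    if score >= AUTO_REMOVE_THRESHOLD:
        remove(item)
    elif score <= AUTO_ALLOW_THRESHOLD:
        allow(item)
    else:
        # The uncertain middle: this is the queue those thousands of reviewers work.
        send_to_human_review(item)
```

Narrowing that uncertain middle shrinks the human queue, but it also raises the odds that a confident mistake reaches the public, which is exactly the failure mode that ends up as a screenshot.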
It will be interesting to see how this plays out over the next few years as machine learning and AI are put into more and more situations. I’m not confident that machine learning, or more importantly the data fed into the algorithms, will ever get to the point where it can detect inappropriate videos or other objectionable content at an acceptable rate. My guess is that the solution will ultimately be policy changes that disincentivize posting such content in the first place (as was YouTube’s solution).