Earlier this month Facebook announced that it would be rolling out a new program to target advertising discrimination on the platform using artificial intelligence. In addition to strengthening the language in their anti-discrimination policy, Facebook will roll out an AI tool that will identify and flag ads that offer “housing, employment or credit opportunit[ies] and either include or exclude our multicultural advertising segments” — and then kill the ad.
The change came in response to an October ProPublica report that showed how Facebook advertisers could easily violate the Fair Housing and Civil Rights Acts by using Facebook’s demographic targeting tools. Along with monitoring ads with AI, Facebook has also disabled some of those tools for housing, employment or credit ads.
The tool won’t be used to screen other types of ads, however, which means the rollout is more of a compliance effort than a crackdown on discriminatory advertising in general. Consider, too, that alongside Facebook’s ongoing struggle to curb fake news and recent reports that it hamstrung its own efforts to diversify its workforce, this change may do little to improve how users (and employees) perceive the platform.
Perception matters here because this isn’t, in truth, just a matter of compliance; it’s also a matter of trust.
How Fake News Is Tarnishing Facebook’s Brand
During the 2016 presidential election and in its immediate aftermath, Facebook was dogged by reports that editors were unduly tinkering with trending topics and that its algorithms were facilitating the spread of fake news and the creation of ideological echo chambers. Now it’s trying to make up for all that bad press, and it’s struggling.
The extent to which Facebook creates ideological silos is still a subject of debate, but it’s clear that our news feeds look very different depending on our personal and professional networks, and that the kind and calibre of news we read on a daily basis varies too. A 2016 Pew study reported that 62% of American adults got news from social media, and that 66% of Facebook users got news on the site. It’s impossible to deny the role that Facebook now plays in distributing news and in shaping public opinion — after all, that’s why so many brands advertise on the social media platform. That’s where people are playing, reading, debating and just spending time with friends.
But the Facebook experience is a tailored one — so tailored that feeds might seem like they’re from different planets. (Which is another reason why it’s so attractive to advertisers, who can target the exact demographic they want to reach.) To illustrate how different feeds can be based on your political beliefs, WSJ built a tool called Blue Feed, Red Feed that “demonstrate[s] how reality may differ for different Facebook users.” Drawing on a large 2015 study, the tool shows just how divergent two news feeds can really be.
That difference isn’t a bad thing in and of itself. A tailored experience, based on your own stated preferences and clicking decisions, is part of what Facebook offers users. News feeds aren’t a chronological presentation of updates from all of your friends, and trending topics aren’t just what people are talking about. News feeds are algorithmically manipulated to boost some stories and ads over others. Trending topics are determined by a combination of buzz and editorial oversight. Although Facebook’s value as a news source rests on the trust relationships users share with their friends, the trust users place in Facebook itself is muddied by the fact that the platform doesn’t just let you talk and share with your friends; it mediates how you do that. A lot.
For Facebook, It’s All About Compliance
Because of this, Facebook has emerged as one of the better tools for purveyors of fake news to sell their wares. Everything about Facebook’s design is aimed at keeping you on the platform and interacting with your friends, be it through games, alerts or shared news articles. If you understand how it prioritizes stories and ads, Facebook can be a fantastic tool for getting eyeballs on your content. In that sense, Facebook’s basic architecture has stacked the deck against its desire to weed out fast-spreading, confirmation-bias-affirming fake news stories. (Twitter, which pushes users to interact with others beyond their feeds, has a different problem: it struggles with bots, trolls and, thanks to its ineffective filtering tools, basic safety.)
Months later, Facebook is still struggling to combat fake news in a way that doesn’t also compromise its sales, marketing and user growth strategies. But it’s an issue the company can’t afford to sit on. German lawmakers have “proposed a rule that would levy a 500,000 euro fine for each piece of fake news it fails to take down within 24 hours,” and Germany isn’t the only country taking measures to force social media companies to fight fake news. Facebook is also working furiously to crack down on fake news in France in the lead-up to its presidential election on April 23. Its efforts to combat fake news are prioritized based on PR hotspots and liability: it’s focusing first on regions where the public or elected officials have pushed back, which, in itself, is sensible. You should always try to limit risks and comply with local regulations. But does Facebook see fake news, ad discrimination and other abuses for the brand-damaging problems that they are? Has it moved to crack down on ad discrimination all over the world, or just where people are advocating for fairness the loudest?
Fake Facebook Jobs
The abundance of fake news, and the company’s inability to grapple with it, matters to HR and other business professionals because news about your company lives or dies on Facebook by the same factors that can get fake news trending and bury more relevant but less flashy stories. And if users can’t trust Facebook to handle basic technical challenges like illegal or fraudulent posts, how much can they trust your business news or job ads? Or, from a different angle, how useful is Facebook as a tool to promote your employer brand and share opportunities if you can’t be sure you aren’t sharing space with fakes and grifters?
Last week Facebook made it possible for any business to share a free job post, giving small businesses a potentially powerful tool. In line with its recent changes to employment, housing and credit ads, Facebook has disabled protected-class demographic targeting on job posts. For a whole host of reasons, from poor organic reach, to Facebook’s dearth of typical resume data, to its impulse-buy application system, Facebook jobs seem best suited to small and local businesses and to low-skilled, hourly positions. As Jim Durbin points out, for all that Facebook added job posts in an effort to be “more useful,” it did so without much thought to compliance, documentation or potential legal ramifications. Applications go to Messenger, not your ATS, and chat logs aren’t usually information that companies archive.
Two things strike me, though, about the new Facebook Jobs: 1) job posts are not ads, and so are not detectable by Facebook’s new anti-discrimination AI tool; 2) user reporting, then, is the only way to stop fake (or subtly discriminatory) job posts from going viral. Of course fake job posts crop up on job boards and aggregators too, but Indeed is nowhere near as good at spreading misinformation as Facebook is. And while jobseekers, like buyers, should always beware and do their research before getting their hopes up about an opportunity, the fake news phenomenon shows how easily people can be duped.
The Tools Are Insufficient
What Facebook has done so far to tackle fake news is roll out user-based reporting tools. That is, it is users, not an AI tool, who are meant to find, analyze and flag fake news. Flagged posts go to a group of trusted publications for vetting, and if they find that an article is indeed fake news, it’s marked as “disputed,” its advertising revenue is killed and the piece is bumped down in news feeds. The effectiveness of this strategy is still unknown — there are, as yet, no studies on how many users 1) are reporting fake news, 2) are reporting it accurately and not based on partisanship, and 3) even care about the presence of a “disputed” tag. Some users could take on “disputed” as a badge of honour, just as Trump supporters did with “deplorable.”
Note too that there’s a marked difference in how Facebook will be tracking and dealing with discriminatory advertising and how it now deals with fake news. On the one hand, Facebook’s new commitment to tackling discriminatory hiring, housing and credit ads should be heartening for relevant advertisers. An employer that genuinely wants to reach the best candidates, no matter their race, gender, orientation or country of origin, can perhaps rest assured that their ads aren’t sharing space with racist or misogynist ads. On the other hand, where exactly will Facebook job posts fit into this system?
All of This Is Bad for Facebook’s Brand — and Maybe Yours
Does it matter that Facebook’s crackdown on fake news and ad discrimination doesn’t exactly look sincere? That it looks motivated more by its legal and PR departments than by a real interest in making the platform better for users and advertisers alike? That Facebook Jobs seems, so far, to occupy a policy no man’s land? Yeah, it really does. Facebook’s half-hearted commitment to truth and fairness (“I mean… I guess that’s a problem. I guess we could… do something about it.”) is bad for its brand on every level. But it’s not a bad look that’s unearned: the fumbled rollout of anti-fake news and anti-discrimination tools reminds me of how thoroughly the company botched its efforts to diversify its workforce. It says a lot about Facebook as an organization and brand, how it makes decisions and how it responds to crises. With every new disaster, my confidence in the platform wanes a little more.
For advertisers, the question is: what’s the cost/benefit of having a strong Facebook presence at this point and how sure are you that you’re reaching the people you need to?
For users, the question is: how much can I trust anything that comes up in my feed? Job posts included?