Will Interacting with the “Wrong” Post Get You Sent to Facebook Jail?
A recent Reuters report about the supposed prevalence of so-called “fake news” ahead of Mexico’s upcoming election this weekend quotes an anonymous Facebook employee who alleged that people are being paid to like, share, and comment on posts. This disturbingly suggests that the company might take punitive action against users who innocently interact with what it deems to be the “wrong” post, out of an “overcautious” concern that they may have been paid to “hack” the algorithm and boost “fake news”.
Reuters casually hid a bombshell of an allegation at the end of one of its latest reports about the supposed prevalence of so-called “fake news” ahead of Mexico’s upcoming election this weekend. It quoted an anonymous Facebook employee who said that “some users are getting paid for liking, sharing or commenting on posts”, adding that “paid activity can be a legitimate job or align with sincere beliefs.” This disturbingly suggests that the company now suspects that users who innocently interact with what it and its partners deem to be “fake news” might have done so because they were paid, making them a new variation of the standard “Facebook troll” that had hitherto gone undiscovered. The reason something as seemingly innocuous as interacting with a “fake news” post is considered so “threatening” to the company is that it has the chance of “hacking” the algorithm and boosting that content in other users’ feeds.
Evidently, Facebook and its fellow censors might have belatedly identified a lot of what they later categorized as “fake news” (i.e. inconvenient facts or analyses that contradict weaponized Mainstream Media narratives) floating around people’s feeds, condescendingly assuming that folks just “can’t be that stupid” as to innocently interact with it and inadvertently boost these messages across the platform. The very basis of the company’s feed feature is that popular posts, per a secret proprietary algorithm, appear more frequently on other users’ homepages as they scroll, so the entire Facebook experience could be altered forever if the company overreacts to what it believes to be “paid propagators of fake news”. That said, it’s almost impossible to independently verify beyond a reasonable doubt whether anyone really falls within this category, which is why the frighteningly real potential exists for this fearmongering “fake news” scare to be abused for further censorship.
It can only be speculated how this might play out in practice, but it wouldn’t be unexpected if Facebook continued to rely on its “think tank” partners such as the Atlantic Council to determine what exactly constitutes “fake news” before taking action against users who interact with any posts promoting the aforesaid narratives. As the company has a proven tendency of doing, it might very well send the violators to “Facebook jail” or “shadow ban” their accounts in order to prevent them from “hacking the algorithm” and spreading more “fake news” across the platform. It remains to be seen just how far this will go, but it would be foolish to assume the best when Facebook’s track record of censorship conditions one to expect the opposite. It shouldn’t be surprising, then, if people’s accounts are sent to Facebook jail or worse on the basis that they interacted with the “wrong” post and might therefore have been “paid trolls boosting a fake news story”.