Photo // Glen Carrie
Maybe don't ban your transparency researchers.

Facebook has sparked controversy with its recent actions against researchers who collected data on political ads and the spread of misinformation on the platform. Because Facebook banned its own fact-checkers, it seems there could have been ulterior motives. The academics under fire are researchers from the NYU Ad Observatory, whose work examines the origin and spread of political ads and misinformation. Citing supposedly new privacy requirements from the FTC and violations of its terms of service, Facebook banned the team and their tools, leaving them unable to continue their work.

The tool the researchers used was a browser plug-in called Ad Observer, which collects data on political ads and why users were targeted. Even though Facebook has spoken out against the plug-in, claiming that it infringes on people’s privacy, the tool does not collect personal information. It collects the advertiser’s name, the ad’s content, the information Facebook provided about how the ad was targeted, and when the ad was shown to users.

The information gathered by the plug-in is then made public. Since the information is already available and does not concern individual Facebook users’ personal data, it is simply a matter of collecting it.

Through strategic rhetoric, Facebook’s statement suggests that the researchers were collecting data on private individuals and thus violating their right to privacy. What Facebook did not mention is that the research is conducted with advertisers’ accounts and collects information on why advertisers target ads to specific demographics, when they do so, and how their engagement changes as a result.

This kind of research has proven helpful in the recent past, exposing how Facebook fails to disclose who pays for certain political ads. The tool has also collected data showing that fast-spreading alt-right misinformation boosts engagement and is continually promoted by the algorithm compared with misinformation from centrist or left-leaning sources.

A popular example of how misinformation spreads on Facebook came when the platform ran with the idea of injecting disinfectant after Donald Trump’s comment that disinfectant and sunlight could help amid the coronavirus pandemic.

This photo went viral on Facebook after Trump’s sarcastic remarks about injecting disinfectant. However, Facebook does not convey sarcasm or any underlying tone, leading some on the internet to actually believe that swallowing disinfectants could be a solution. This is one of many examples of how the spread of misinformation can be harmful and even deadly. Source: Facebook

Mike Clark, the company’s product management director, stated, “While the Ad Observatory project may be well-intentioned, the ongoing and continued violations of protections against scraping cannot be ignored and should be remediated”. This statement sounds reasonable and deserves consideration. However, given Facebook’s history of selling its users’ private information, it is convenient that the company has begun cracking down on privacy concerns only against those bringing transparency to its platform.

Laura Edelson, a member of Cybersecurity for Democracy, stated that Facebook was “silencing us because our work often calls attention to problems on its platform.”

She continued to say, “Worst of all, Facebook is using user privacy, a core belief that we have always put first in our work, as a pretext for doing this. If this episode demonstrates anything it is that Facebook should not have veto power over who is allowed to study them.”

“This evening, Facebook suspended my Facebook account and the accounts of several people associated with Cybersecurity for Democracy, our team at NYU. This has the effect of cutting off our access to Facebook’s Ad Library data, as well as Crowdtangle,” Edelson wrote. “Over the last several years, we’ve used this access to uncover systemic flaws in the Facebook Ad Library, identify misinformation in political ads including many sowing distrust in our election system, and to study Facebook’s apparent amplification of partisan misinformation.”

It is about time Facebook stopped selling users’ privacy and started protecting it. With that said, the company should focus on keeping personal information private from third-party participants, not from academic researchers. The time and effort of the vast Facebook team should not go toward protecting advertisers while people’s personal information is bought and sold right under the public’s nose. Furthermore, the banned academics’ research stood to improve the platform’s functionality and transparency.

Facebook should lean into the researchers’ work and collaborate with them to stop the vast spread of misinformation, which remains one of Facebook’s largest problems. Even if misinformation gets more clicks, rage-induced Karens are overrunning Facebook and will hold the app back from progressing into a social media platform that could truly stand the test of time.

Transparency is necessary for Facebook to stay relevant, and banning the researchers who were working to provide it is a misstep Facebook will eventually recognize, though by then it may be too late.