After receiving legal threats from Facebook, AlgorithmWatch researchers were forced to terminate their research effort monitoring the Instagram algorithm. In a post published Friday morning, the Berlin-based group made the issue public, citing the platform’s recent removal of the NYU Ad Observatory.
According to the post, “there are certainly more instances of bullying that we are unaware of. By stepping forward, we hope to encourage additional groups to share their stories.”
The project, launched in March 2020, was built around a browser plug-in that allowed users to capture data from their Instagram feeds, giving researchers insight into how the network selects photos and videos. The project’s findings were published on a regular basis, demonstrating that the algorithm favored photographs with bare skin and that photos with faces were ranked higher than screenshots with text. For the first year of the initiative, Facebook contested the methodology but took no further action against AlgorithmWatch.
According to the researchers, Facebook approached the project leaders in May and demanded a meeting, accusing them of violating the platform’s terms of service. The company also alleged that the project violated the GDPR by collecting data from people who had not consented to participate.
In their defense, the researchers state: “We only collected data relating to content that Facebook displayed to the volunteers who installed the add-on. In other words, users of the plug-in were only accessing their own feed, which they shared with us for research purposes.”
Despite this, the researchers decided to end the project, believing they would risk legal action from the company if they proceeded.
A Facebook spokesperson confirmed the discussion when contacted for comment, but denied threatening to sue the initiative, stating the company was open to finding privacy-preserving solutions that would allow the research to continue.
“We were concerned about their practices, so we contacted them several times so they could comply with our requirements and continue their research, as we do with other research groups when similar issues arise,” the spokesperson explained. “We aim to continue to collaborate with independent researchers, but in ways that do not jeopardize people’s data or privacy.”
The social structure of Facebook’s platforms makes it difficult to isolate any particular user: even if a person opts in, their feed inevitably contains content from other people, most of whom have likely not agreed to take part in the study. Since the Cambridge Analytica affair, in which data collected for academic research was ultimately used for commercial and political influence, Facebook has been wary of research programs.
Nonetheless, the overall pattern is concerning. The algorithms that control Facebook and Instagram news feeds are extremely powerful yet poorly understood, and Facebook’s restrictions make independent research difficult. The NYU Ad Observatory, which analyzed political advertising on the network, was banned earlier this month following allegations of unauthorized data collection. Similar legal threats were made in November against a browser called Friendly, which allowed users to sort their feeds chronologically. And in 2016, the company bought CrowdTangle, another prominent Facebook research tool.
The Ad Library and Facebook’s Social Science One partnership are two channels through which researchers can obtain data directly from the company. However, AlgorithmWatch argues that such data is inherently untrustworthy given the adversarial nature of their research.
According to the researchers, “researchers cannot rely on data provided by Facebook because the firm cannot be trusted. There is no reason to expect that, were researchers to replace their independently collected data with Facebook’s, the company would provide usable data.”