'Black box gaslighting' challenges social-media algorithm accountability

IST’s Kelley Cotter examined the ongoing debate between influencers who claim their content is algorithmically suppressed and the social media platforms that deny it

Kelley Cotter, assistant professor in Penn State’s College of Information Sciences and Technology, recently examined the ongoing debate between social media influencers, who claim their content is algorithmically suppressed after seeing uncharacteristic drops in reach and engagement, and the social media platforms that deny it but do not provide transparent information on how they operate. Credit: Adobe Stock: rawpixel.com. All Rights Reserved.

UNIVERSITY PARK, Pa. — Social media algorithms drive what content users see and when they see it. These complex systems aim to ensure a positive user experience by amplifying relevant posts and eliminating or limiting the reach of hateful or inappropriate content.

But sometimes, social media influencers claim to experience unexplained drops in the reach of wholesome content on their accounts. The phenomenon is colloquially known as “shadowbanning,” a term influencers use for situations in which an account’s reach drops uncharacteristically, suggesting that its content may be algorithmically suppressed, according to Kelley Cotter, assistant professor at the Penn State College of Information Sciences and Technology. Conversely, said Cotter, social media platforms have mostly denied that shadowbanning exists.

In a recent article published in Information, Communication & Society, Cotter explored how the limited information platforms provide about how their algorithms work, paired with the algorithms’ technical complexity, makes it difficult for stakeholders to refute these denials. Introducing a concept she terms “black box gaslighting,” Cotter highlighted how platforms’ denial of shadowbanning leads users to doubt and second-guess what they know about how these systems work.

“Black box gaslighting suggests that the lack of transparency around algorithms and our inability to always explain their behavior creates a space to undermine people’s perceptions of reality — to make users think that what they believe about algorithms is wrong or to question their own ability to perceive them clearly,” said Cotter. “Black box gaslighting is a threat to our collective ability to hold social media platforms accountable.”

This is concerning, said Cotter, because significant errors in imperfect algorithmic systems — and the public's lack of information about them — could cause or contribute to bigger social problems. This especially pertains to members of vulnerable populations, who often see their content restricted or minimally distributed, for reasons they do not understand.

“Censorship of different communities restricts the kinds of ideas that circulate online and can impact how we view ourselves and the world around us,” said Cotter. “If we have critical claims about shadowbanning and censoring of different marginalized communities, but social media platforms can deny it and effectively convince the public otherwise, then it’s really hard to ensure that algorithmic systems operate in the public’s best interest and can be held accountable for the ills that they might perpetrate. And, in fact, influencers and other users are often the first to see and experience problems because the problems are so dependent on the who, what, where of algorithmic operations.”

In her study, Cotter aimed to show how the ongoing dispute between social media influencers and a social media platform over whether shadowbanning exists illustrates the concept of black box gaslighting. She interviewed 17 social media influencers and analyzed their online discussions, along with public statements a social media company made about shadowbanning, to understand how influencers experience the phenomenon and how the platform responded to their claims.

Cotter documented claims of shadowbanning among the influencers she interviewed and observed online, who noted stark, atypical drops in engagement as measured in reach, likes and clicks. She also pointed to previously published media articles and social media posts in which the platform’s CEO denied the existence of shadowbanning. In the few cases where media statements were released on the topic, Cotter noted that they offered a definition of shadowbanning that did not fully match the influencers’. Further, through her analysis of these online discourse materials, Cotter suggests that the platform has attempted to debunk claims of shadowbanning by offering alternate explanations for drops in reach: glitches in the system, influencers’ failure to create engaging content, and changes in consumers’ habits beyond the platform’s control that make content’s reach simply a matter of chance.

Cotter said she hopes the concept of black box gaslighting brings forward some of the ways social media users can speak up and raise red flags about algorithms, even though their individual efforts are often fruitless.

By drawing attention to the issue, Cotter said she aims to give those users shared knowledge and a collective way of holding platforms accountable.

“It can give the companies feedback about what users want and the problems that they see [and allows users’ concerns to be translated] into actual policies that will create more guardrails for platforms,” said Cotter. “It is a potentially powerful way of making sure that the platforms work best for the people who are actually using them on a day-to-day basis.”

This study is part of a larger body of work supported by the National Science Foundation through which Cotter is examining critical algorithmic literacy — what it means to know algorithms, what that knowledge looks like, and how negotiations of knowledge align with or are influenced by power structures.

“In this case, with black box gaslighting, it has to do with the kind of power asymmetry between platforms and influencers who are laboring on those platforms,” she said.

Cotter’s article, “‘Shadowbanning is not a thing’: black box gaslighting and the power to independently know and credibly critique algorithms,” was published online in Information, Communication & Society in October.
