
New tool could help lessen bias in live television broadcasts


UNIVERSITY PARK, Pa. — From Sunday morning news shows to on-air pregame commentary in sports, live telecasts draw viewers into real-time content on televisions around the world.

But in these often-unscripted productions, what the audience sees is not always what the producer intends, especially with regard to the equity of on-air time given to subjects of different races and genders.

A team of researchers, which includes Syed Billah from Penn State’s College of Information Sciences and Technology, has developed an interactive tool called Screen-Balancer, designed to assist media producers in balancing the presence of different phenotypes — an individual’s observable physical traits — in live telecasts.

In live broadcasts, producers must make instantaneous decisions about what appears on air, a much different process from pre-recorded shows, where a post-production phase allows those choices to be revised or refined. For live telecast producers, these split-second decisions can introduce unconscious bias into the content that is presented in the show.

But when using Screen-Balancer, producers were able to reduce the difference in screen time between male and female actors in live telecasts by 43%, and between light-skinned and dark-skinned actors by 44%, according to the study.

“Our goal is to ensure the screen times of male and female actors, and actors with different skin tones are balanced,” said Billah, assistant professor of information sciences and technology. “And we can do that using artificial intelligence and data visualization techniques in real-time, without hindering producers' artistic freedom.”

He added, “Before Screen-Balancer, there was no such tool. It was all ad hoc; producers have had to balance the screen times of different actors in their head, which is not easy.”

Screen-Balancer is an interface modeled after a switcher, a large multi-monitor control panel that a producer uses to view shots from multiple cameras in the studio and to select which will appear on air in real time.

Using Screen-Balancer, a producer would see the same camera feeds they’d see in the switcher — including several input streams showing each camera’s angle and view; a preview stream, which the producer can use to isolate a particular camera feed from the input feeds; and an output stream, which is what is currently appearing on air.
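In data-model terms, that interface reduces to a set of input feeds plus two distinguished slots. The following is a minimal Python sketch with hypothetical names (SwitcherState, preview, take), not the authors' implementation:

    from dataclasses import dataclass, field
    from typing import Dict, Optional

    @dataclass
    class SwitcherState:
        """Hypothetical sketch of the switcher model Screen-Balancer mirrors."""
        input_feeds: Dict[str, object] = field(default_factory=dict)  # camera id -> latest frame
        preview_id: Optional[str] = None   # feed isolated in the preview stream
        on_air_id: Optional[str] = None    # feed currently in the output stream

        def preview(self, camera_id: str) -> None:
            """Isolate one input feed in the preview stream."""
            self.preview_id = camera_id

        def take(self, camera_id: str) -> None:
            """Cut the selected camera feed to air (the output stream)."""
            self.on_air_id = camera_id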

Screen-Balancer then uses facial recognition and computer vision algorithms to count the male and female individuals, and the individuals of different skin tones, in each camera feed. It displays these distributions in real time using data visualization techniques.
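The article does not detail the underlying pipeline, but the per-frame bookkeeping can be sketched as follows. This minimal Python illustration uses OpenCV's stock Haar-cascade face detector as a stand-in for whatever detector the authors used; classify_gender and classify_skin_tone are hypothetical placeholders for trained classifiers:

    import cv2

    # Stand-in detector: OpenCV's bundled frontal-face Haar cascade.
    # A production system would likely use a stronger, CNN-based detector.
    FACE_CASCADE = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def classify_gender(face_img):
        # Hypothetical placeholder for a trained gender classifier;
        # returns a fixed label so the sketch runs end to end.
        return "female"

    def classify_skin_tone(face_img):
        # Hypothetical placeholder for a trained skin-tone classifier.
        return "dark"

    def phenotype_counts(frame):
        """Detect faces in one video frame and tally phenotype labels."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5)
        counts = {"male": 0, "female": 0, "light": 0, "dark": 0}
        for (x, y, w, h) in faces:
            face = frame[y:y + h, x:x + w]
            counts[classify_gender(face)] += 1
            counts[classify_skin_tone(face)] += 1
        return counts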

Not only can Screen-Balancer visually alert a producer to the telecast's overall average distribution of gender and skin color, it can also guide the producer in choosing which input camera feed to display to maintain balance. By exploiting the switcher's built-in 10-second delay, normally used for detecting and censoring profanity in a live telecast, Screen-Balancer displays a bar chart for each camera feed showing that feed's average phenotypic distribution over the next 10 seconds. Moreover, it informs the producer of how selecting that feed at that moment would affect the overall average distribution for the telecast.
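As a rough sketch of that projection (hypothetical names and a one-dimension-at-a-time simplification; the paper's actual design may differ), the delay buffer lets the tool add a candidate feed's next 10 seconds of per-frame counts to the telecast's running screen-time totals:

    def projected_distribution(running_seconds, buffered_counts, fps=30):
        """Project the telecast's overall screen-time split if a candidate
        feed is aired for its next 10 buffered seconds.

        Handles one phenotype dimension at a time (e.g., gender).
        running_seconds -- accumulated on-air seconds per label so far,
                           e.g. {"male": 312.0, "female": 248.0}
        buffered_counts -- per-frame label-count dicts from the feed's
                           10-second delay buffer (fps * 10 entries)
        """
        projected = dict(running_seconds)
        for frame_counts in buffered_counts:
            for label, n in frame_counts.items():
                # Each detected face adds one frame (1/fps seconds) of
                # screen time to its label's running total.
                projected[label] = projected.get(label, 0.0) + n / fps
        total = sum(projected.values()) or 1.0
        return {label: secs / total for label, secs in projected.items()}

Rendering these fractions as side-by-side bars for each candidate feed turns the producer's decision into a single visual comparison: pick the feed that moves the bars closest to equal height.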

“These bar charts are specially designed to make visual comparison faster. By the end of the show, we hope the bars showing the overall distribution of the screen time of male and female subjects, and subjects with different skin tones have equal heights,” said Billah.

He concluded, “I work in accessible computing. As part of my research, we design assistive technologies for people with special needs and/or people with special situations to promote equality.”

Billah worked with MD Naimul Hoque and Klaus Mueller, both of Stony Brook University; and Nazmus Saquib of MIT Media Lab. The work was supported by the National Science Foundation, and was published in the proceedings of the 23rd ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2020).
