Buoyed by enormous hype and appearances from tech celebrities Elon Musk and Mark Zuckerberg, audio-based social network Clubhouse has enjoyed explosive growth. However, it is attracting increasing scrutiny over how the app will handle problematic content – including hate speech, harassment, and misinformation.

Moderating real-time discussion is a challenge for a crop of platforms using live voice chat, from video game-centric services like Discord to Twitter Inc’s new live-audio feature Spaces. Facebook is also reportedly dabbling with an offering.

“Audio presents a fundamentally different set of challenges for moderation than text-based communication. It’s more ephemeral and it’s harder to research and action,” said Discord’s chief legal officer, Clint Smith, in an interview with Reuters.

Tools to detect problematic audio content lag behind those used to identify text, and transcribing and examining recorded voice chats is a more cumbersome process for both people and machines. The lack of extra clues, such as the visual signals of video or accompanying text comments, can also make moderation more challenging.

“Most of what you have in terms of the tools of content moderation are really built around text,” said Daniel Kelley, associate director of the Anti-Defamation League’s Center for Technology and Society.

Not all companies make or keep voice recordings to investigate reports of rule violations. Twitter keeps Spaces audio for 30 days, or longer if there is an incident; Clubhouse says it deletes its recording if a live session ends without an immediate user report; and Discord does not record at all.

Instead, Discord, which has faced pressure to curb toxic content like harassment and white supremacist material in text and voice chats, gives users controls to mute or block people and relies on them to flag problematic audio.

Such community models can be empowering for users but may be easily abused and subject to biases.

Clubhouse, which has similarly introduced user controls, has drawn scrutiny over whether actions like blocking, which can prevent users from joining certain rooms, can be employed to harass or exclude users.

The challenges of moderating live audio are set against the broader, global battle over content moderation on big social media platforms, which are criticized for their power and opacity, and have drawn complaints from both the right and left as either too restrictive or dangerously permissive.

Online platforms have also long struggled to curb harmful or graphic live content on their sites. The 2019 Christchurch mosque shooting was livestreamed on Facebook, and rebroadcasts popped up frequently for months afterward. In 2020, a video of a suicide was streamed live on Facebook and later went viral on TikTok.

Google and Facebook have also come under fire for their treatment of human moderators, whose work Silicon Valley's big social networks contract out to companies like Accenture and Cognizant. Complaints include, among others, precarious job security and inadequate access to mental health counselling.

An Expanding Service

Last Sunday, during the company’s public town hall, Clubhouse co-founder Paul Davison presented a vision for how the currently invite-only app would play a bigger role in people’s lives – hosting everything from political rallies to company all-hands meetings.

Rooms, currently capped at 8,000 people, would scale “up to infinity” and participants could make money from “tips” paid by the audience.

The San Francisco-based company’s latest round of financing in January valued it at US$1 billion, according to a source familiar with the matter. The funding was led by Andreessen Horowitz, a leading Silicon Valley venture capital firm.

Asked how Clubhouse was working to detect dangerous content as the service expanded, Davison said the tiny startup has been staffing up its trust and safety team to handle issues in multiple languages and quickly investigate incidents.

The app, which says it has 10 million weekly active users, has a full-time staff that only recently reached double digits. A spokeswoman said Clubhouse uses both in-house reviewers and third-party services to moderate content and has engaged advisors on the issue, but she would not comment on review or detection methods.

In the year since it started, Clubhouse has faced criticism over reports of misogyny, anti-Semitism and COVID-19 misinformation on the platform despite rules against racism, hate speech, abuse and false information.

Clubhouse has said it is investing in tools to detect and prevent abuse as well as features for users, who can set rules for their rooms, to moderate conversations.

Getting audio content moderation right could help the major social networks spark new waves of business and usage for their freshly launched services and features.
