In the wake of Elon Musk’s takeover, Twitter has reportedly frozen some employee access to internal tools used for content moderation and other policy enforcement, limiting the staff’s ability to clamp down on ‘misinformation’ ahead of the midterm elections.
“Most people who work in Twitter’s Trust and Safety organization are currently unable to alter or penalize accounts that break rules around misleading information, offensive posts and hate speech,” Bloomberg reports.
According to people familiar with the matter, only the most high-impact violations that would involve real-world harm can be penalized.
Those posts were reportedly prioritized for manual enforcement.
NEW: A wide swath of Twitter's trust & safety team had access to content moderation/enforcement tools frozen last week. Usually, hundreds of people on the team could remove posts w/misinfo, hate speech etc. It's now down to 15 people. Scoop w/@KurtWagner8 @daveyalba @EdLudlow
— Jackie Davalos (@jackiedavalos1) November 1, 2022
Twitter has frozen some employee access to internal tools used for content moderation and other policy enforcement, curbing the staff’s ability to clamp down on misinformation ahead of a major U.S. election https://t.co/aTdauO5qHN
— TIME (@TIME) November 1, 2022
Per Bloomberg https://t.co/ti7tqvCSDE
— Benny Johnson (@bennyjohnson) November 1, 2022
https://twitter.com/daveyalba/status/1587255912443650049
https://twitter.com/daveyalba/status/1587257530794885123
https://twitter.com/daveyalba/status/1587260390165123074
From Bloomberg:
People who were on call to enforce Twitter’s policies during Brazil’s presidential election did get access to the internal tools on Sunday, but in a limited capacity, according to two of the people. The company is still utilizing automated enforcement technology, and third-party contractors, according to one person, though the highest-profile violations are typically reviewed by Twitter employees.
San Francisco-based Twitter declined to comment on new limits placed on its content-moderation tools.
In response to this story, Yoel Roth, the head of safety and integrity at Twitter, tweeted: “This is exactly what we (or any company) should be doing in the midst of a corporate transition to reduce opportunities for insider risk. We’re still enforcing our Twitter rules at scale.”
Twitter staff use dashboards, known as agent tools, to carry out actions like banning or suspending an account that is deemed to have breached policy. Policy breaches can either be flagged by other Twitter users or detected automatically, but taking action on them requires human input and access to the dashboard tools. Those tools have been suspended since last week, the people said.
This restriction is part of a broader plan to freeze Twitter’s software code to keep employees from pushing changes to the app during the transition to new ownership. Typically this level of access is given to a group of people numbering in the hundreds, and that was initially reduced to about 15 people last week, according to two of the people, who asked not to be named discussing internal decisions. Musk completed his $44 billion deal to take the company private on Oct. 27.
The limited content moderation has raised concerns among Twitter’s Trust and Safety team that the company will be short-handed in enforcing policies ahead of the Nov. 8 midterm elections.