
Twitter has a new plan to fight extremism — then Elon arrives

Turmoil had long plagued Twitter’s research team. Tasked with tackling some of the platform’s toughest issues around harassment, terrorism, and misinformation, the crew flew to Napa Valley in November 2021 for a company retreat. Despite a tumultuous change in leadership (Jack Dorsey had recently stepped down and was replaced by former chief technology officer Parag Agrawal), the group felt united, even optimistic. After months of battling bad actors online, employees took some time to relax. “We finally felt like we had a cohesive team,” said one researcher.

But at a farewell brunch on the last day, people’s phones started pinging with terrible news: their boss, Twitter’s vice president of design, Dantley Davis, had been fired. No one had known it was coming. “It’s like a movie,” said one attendee, who asked not to be identified because he was not authorized to speak publicly about the company. “People started crying. I was sitting there eating a croissant, thinking, ‘What is going on?’”

The news signaled the start of a downward spiral for the research team. Although the group was used to reorganizations, a shakeup in the middle of an outing meant to bond the team felt deeply symbolic.

Then Elon Musk signed a deal to buy Twitter in April, and confusion ensued. Interviews with current and former employees, along with 70 pages of internal documents, suggest the turmoil surrounding Musk’s purchase has pushed some teams to the breaking point, prompting several health researchers to quit and leading others to tell their colleagues to deprioritize projects that fight extremism in favor of focusing on bots and spam. The Musk deal may not even happen, but the effects on Twitter’s health efforts are already clear.

The health team, tasked with encouraging civil dialogue on a once famously uncivil platform, has been reduced from 15 full-time staff to two.


In 2019, Jack Dorsey asked a fundamental question about the platform he helped build: “Can we really measure the health of a conversation?”

On stage at the TED conference in Vancouver, the beanie-wearing CEO spoke passionately about investing in automated systems to proactively identify bad behavior and “reduce victimization altogether.”

That summer, the company began recruiting a team of health researchers to carry out Dorsey’s mission. His talk convinced people working in academia or for big tech companies like Meta to join Twitter, inspired by the opportunity to work for positive social change.

When the process worked as intended, health researchers helped Twitter think through potential abuses of new products. In 2020, Twitter was working on a tool called “Unmention,” which would allow users to limit who could reply to their tweets. The researchers conducted a “red team” exercise, bringing together employees from across the company to explore how the tool could be misused. The feature “can allow powerful people [to] suppress dissent, discussion, and correction, and enable bullies to approach their targets [and to] compel targets to respond individually,” the red team wrote in an internal report.

But the process was not always so smooth. In 2021, former Twitter product chief Kayvon Beykpour announced that the company’s first priority would be launching Spaces. (“It was an all-out attack to kill Clubhouse,” said one employee.) The team assigned to the project worked overtime to roll out the feature and didn’t schedule a red team exercise until August 10th, three months after the launch; in July, that exercise was canceled. Spaces went live without a comprehensive assessment of key risks, and white nationalists and terrorists took to the stage, The Washington Post reported.

When Twitter eventually conducted a red team exercise for Spaces in January 2022, the report concluded: “We did not prioritize identifying and mitigating health and safety risks before launching Spaces. This red team was too late. Despite critical investments in the first year and a half of building Spaces, we have largely responded to the real-world harm that malicious actors in Spaces can cause. We rely on ordinary people to identify problems. We launch products and features without adequately exploring potential health implications.”

Earlier this year, Twitter scaled back plans to monetize adult content after a red team exercise found the platform had failed to adequately address child sexual exploitation. It’s a problem researchers had been warning about for years. Employees said Twitter executives were aware of the problem but had not devoted the necessary resources to fix it.


By the end of 2021, Twitter’s health researchers had spent years playing whack-a-mole with bad actors on the platform and decided to take a more sophisticated approach to dealing with malicious content. Externally, the company was regularly criticized for letting dangerous groups run amok. But internally, it sometimes seemed that certain groups, such as conspiracy theorists, were kicked off the platform too soon, before researchers could study their dynamics.

“The old system was almost ridiculously ineffective and very reactive — a manual process of playing catch-up,” said one former employee, who asked to remain anonymous because he was not authorized to speak publicly about the company. “Defining and catching the ‘bad guys’ is a losing game.”

Instead, researchers hoped to identify people who engage with harmful tweets and nudge them toward healthier content using pop-up messages and interstitials. “The pilot will allow Twitter to identify and leverage signals — instead of content — and reach vulnerable users by redirecting them to supportive content and services,” read an internal project brief viewed by The Verge.

Twitter researchers partnered with Moonshot, a firm that specializes in studying violent extremism, and launched a project called Redirect, which builds on work done by Google and Facebook to curb the spread of malicious communities. At Google, that work led to a sophisticated campaign targeting people who search for extremist content with ads and YouTube videos designed to counter extremist messaging. Twitter planned to do the same.

The goal is to move the company from merely responding to bad accounts and posts to proactively guiding users toward better behavior.

“Twitter’s efforts to curb harmful groups have focused on defining these groups, assigning them to a policy framework, identifying their scope (group affiliation and behaviors), and suspending or deplatforming those within the group,” reads the internal project brief. “This project, instead, seeks to understand and address consumer behaviors upstream. Instead of focusing on identifying bad accounts or content, we try to understand how users find malicious group content and accounts, and redirect those efforts.”

In the first phase of the project, which began last year, researchers focused on three communities: ethnically or racially motivated violent extremism, anti-government or anti-authoritarian violent extremism, and incels. In a case study of the boogaloo movement, an extremist group focused on inciting a second American civil war, Moonshot identified 17 influencers with high levels of community engagement who were using Twitter to share and spread their ideology.

The report outlined possible points of intervention: one when someone tries to search for a boogaloo term and another when they are about to engage with boogaloo content. “Moonshot’s approach to core community identification highlights the movement of users toward this sphere of influence, which prompts interstitial messaging from Twitter,” the report said.

The team also suggested adding a pop-up message before users retweet extremist content. These interventions were intended to add friction to the process of finding and engaging with malicious tweets. Done right, they would blunt the impact of extremist content on Twitter, making it harder for groups to recruit new followers.

But before that could be fully implemented, Musk struck a deal with Twitter’s board to buy the company. Shortly thereafter, the employees leading the Moonshot partnership left. And in the months since Musk signed the deal, the entire health research team has evaporated, down to just two from a staff of 15.

“The sale of the company to Elon Musk is just the icing on a long track record of decisions made by top executives at the company that show safety has not been a priority,” one employee said.

Several former researchers said the confusion surrounding Musk’s bid to buy the company was a breaking point and led them to decide to pursue other work.

“What the chaos of the deal made clear was that I didn’t want to work at the private, Musk-owned Twitter, but I didn’t want to work at the public Twitter either,” one former employee said. “I didn’t want to work for Twitter anymore.”

The second phase of the Redirect project, which would help Twitter understand which interventions worked and how users interacted with them, had secured funding. But by the time the money arrived, there were no researchers left to oversee it. Some of the remaining employees were told to deprioritize Redirect in favor of projects related to bots and spam, which Musk has focused on in trying to pull out of the deal.

Twitter spokeswoman Lauren Alexander declined to comment on the record.

One employee summed up the team’s frustration in a tweet: “Totally uninterested in what Jack or any other C-suiter has to say about this takeover,” the employee wrote, alongside a screenshot of a story about how Twitter CEO Parag Agrawal and former CEO Jack Dorsey stood to profit from the deal with Musk. “You all could fall down a very tall flight of stairs.” (The employee declined to comment.)

According to current employees, the tweet was reported as threatening by a co-worker, and the employee was fired.
