Research published by the Institute for Strategic Dialogue (ISD) reveals how TikTok’s search algorithms process and recommend content across multiple languages.
The study, conducted in July and August 2024, analyzed the platform’s search results through “algorithmic probing,” assessing content moderation with 12 targeted prompts across English, French, German, and Hungarian.
The analysis examined 300 videos, with 25 search results evaluated for each of the 12 prompts. In almost two-thirds of cases (197 videos), TikTok’s search engine and recommender algorithms created connections between derogatory search terms and content featuring members of marginalized groups.
Notably, only 10 out of 300 videos contained the actual search prompts in their content, descriptions, hashtags, sounds, or top comments. Of these 10 videos, five contained explicitly hateful content, three were unrelated to the slurs, and two critiqued their use.
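The probing setup described above (12 prompts, 25 results each, checked against descriptions, hashtags, and top comments) can be sketched roughly as follows. The `Video` fields, the `search_fn` callable, and the function names are illustrative assumptions, not ISD's actual tooling:

```python
from dataclasses import dataclass, field

@dataclass
class Video:
    """Minimal stand-in for a search result's text surfaces."""
    description: str
    hashtags: list[str] = field(default_factory=list)
    top_comment: str = ""

def contains_prompt(video: Video, prompt: str) -> bool:
    """Check whether the search prompt literally appears in any of the
    video's text surfaces (the check behind the 10-of-300 finding)."""
    surfaces = [video.description, video.top_comment, *video.hashtags]
    return any(prompt.lower() in s.lower() for s in surfaces)

def audit(prompts, search_fn, per_prompt=25):
    """Collect the top results per prompt and count literal matches."""
    literal, total = 0, 0
    for prompt in prompts:
        for video in search_fn(prompt)[:per_prompt]:
            total += 1
            if contains_prompt(video, prompt):
                literal += 1
    return literal, total
```

An audit like this only sees what the platform returns, which is why the study's more interesting finding is what happens when the prompt does *not* appear in the results.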
The research identifies four distinct patterns in TikTok’s algorithmic content matching:
118 videos partially matched the original prompts
82 videos contained associated terms, synonyms, or translations
28 videos featured terms with similar spellings
62 videos showed no discernible textual link to the search terms
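The four categories above could be approximated in code along these lines; the synonym table, the 0.8 similarity cutoff, and the category labels are assumptions for illustration, not the study's actual classifier:

```python
import difflib

# Hypothetical synonym/translation table; ISD's actual associated-term
# lists are not public. "examplism"/"sampleism" are placeholder terms.
SYNONYMS = {"examplism": {"sampleism"}}

def classify_match(prompt: str, text: str) -> str:
    """Assign a video's text to one of the four match categories."""
    prompt, text = prompt.lower(), text.lower()
    words = text.split()
    # 1. Part of the prompt appears verbatim in the text.
    if any(w in words for w in prompt.split()):
        return "partial_match"
    # 2. An associated term, synonym, or translation appears.
    if SYNONYMS.get(prompt, set()) & set(words):
        return "associated_term"
    # 3. A word with near-identical spelling appears (assumed 0.8 cutoff).
    if any(difflib.SequenceMatcher(None, prompt, w).ratio() > 0.8 for w in words):
        return "similar_spelling"
    # 4. No discernible textual link.
    return "no_textual_link"
```

The last category is the most striking: for 62 videos no textual signal at all explained the association, suggesting the link was made by the recommender rather than by text matching.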
Language-Specific Results and Technical Implementation
The study reveals consistent patterns across all four languages:
English: 47 out of 71 videos showed algorithmic bias
French: 51 out of 73 videos displayed concerning associations
German: 55 out of 71 videos demonstrated problematic connections
The researchers tested three types of prompts per language: one targeting Black individuals, one targeting Romani people, and one targeting Arab/Muslim women. In each case, the platform’s algorithms made associations between derogatory search terms and videos featuring individuals from these communities, even when the original search terms were absent from the content.
Source: ISD
Regulatory Context and Technical Recommendations
The study arrives as platforms face new requirements under Articles 1, 34, and 35 of the EU’s Digital Services Act (DSA). The researchers propose specific technical solutions:
For TikTok:
Implement AI-based detection systems with human oversight teams
Incorporate gender analysis in algorithm assessment
Improve transparency about content matching criteria
Strengthen commitments to identifying and mitigating algorithmic bias
Include affected communities in DSA risk assessment processes
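A minimal sketch of the first recommendation, automated detection paired with human oversight, might route classifier scores like this; the thresholds and action labels are assumptions for illustration, not TikTok's or ISD's specification:

```python
def route(score: float, auto_block: float = 0.95, review: float = 0.60) -> str:
    """Route a hate-speech classifier's confidence score:
    high-confidence hits are actioned automatically, borderline
    cases are escalated to a human oversight team."""
    if score >= auto_block:
        return "auto_remove"
    if score >= review:
        return "human_review"
    return "allow"
```

The design point is the middle band: rather than forcing the model to decide every case, ambiguous content goes to reviewers, which is where the proposed oversight teams would sit.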
For Lawmakers:
Expand recommender system transparency requirements
Implement mandatory access to non-public VLOP and VLOSE data
Ensure adequate personnel and financial resources for oversight
The research is part of ISD’s broader examination of online gender-based violence around the European Parliament Election 2024, funded by the German Federal Foreign Office. The study indicates that while TikTok restricts some discriminatory search terms, many remain accessible and produce potentially harmful associations through the platform’s recommendation system.
These findings particularly concern researchers because previous studies have shown that even single instances of online discrimination can negatively impact mental health outcomes for marginalized groups. ISD emphasizes that their conclusions are limited by restricted access to platform data, with analysis confined to publicly available information rather than internal classification systems.
A recent study from Rutgers University suggested that TikTok’s content distribution patterns in the U.S. differ significantly from other social media platforms when serving content related to China, particularly regarding sensitive political topics.
For instance, when searching for content about “Uyghurs,” TikTok returned 10.7% anti-Chinese Communist Party content, compared to 84% on Instagram and 19% on YouTube.
David Adler is an entrepreneur and freelance blog post writer who enjoys writing about business, entrepreneurship, travel and the influencer marketing space.
The full ISD study is available on the organization’s website.