The ‘trad wife’ influencers posting lifestyle videos to push Islamophobia

Illustration for Hyphen by Gogi Kamushadze

A new book reveals how make-up and cooking tutorials can hide anti-Muslim ideologies

For some Muslim women, posting on social media can expose them to extreme forms of Islamophobia. “It’s like you can’t ever relax without being exposed to hatred,” said Yousra Samir Imran, a Muslim lifestyle author who has over 11,000 followers on Instagram and publishes a weekly newsletter.

Imran is no stranger to online abuse, including Islamophobic attacks and criticism for wearing a hijab. “Going on TikTok and YouTube for many of us is a form of escapism — but it no longer is when we go from racism and Islamophobia at work and on the streets to finding it on our phone screens,” she said. 

Just as new laws and strategies aimed at tackling the spread of racism, hatred and abuse online are being introduced around the world — including a G7 agreement in 2021 to bolster online safety and the UK’s online safety bill — a recently published book reveals how far-right influencers are finding new ways to disseminate hate speech, including Islamophobia, while avoiding censure. 

According to The Women of the Far Right: Social Media Influencers and Online Radicalization, by Eviane Leidig, a Marie Skłodowska-Curie postdoctoral fellow at Tilburg University, many of the new faces of online Islamophobia are “trad wives”, typically Christian and ultra-conservative homemakers, posting make-up or lifestyle tips on platforms such as YouTube and Instagram. 

These women, targeting white, female and conservative viewers, are turning “the language of far-right ideology into an aspirational lifestyle”, according to Leidig. 

The author traces the online evolution of these new Islamophobes in her book. Her research shows that some prominent female figures within the US far-right movement began withdrawing from posting overtly political content — including videos of themselves at anti-immigration rallies during the Donald Trump presidency. According to the author, as some of them married and had children, they began posting about parenting, married life and updates on beauty and recipes — often laced with Islamophobia — and found much wider audiences.

Leidig’s book documents this trend by following eight of the most visible accounts for over five years. Her book highlights the example of a now-removed YouTube make-up tutorial first posted in 2017 by the Canadian alt-right influencer Lauren Southern, a prominent anti-migration campaigner who has a large following in Europe and the US. In the video, Southern gave eyeliner tips while instructing her followers to use “nice and sharp strokes, just as sharp as the knives Allah instructs us to use on the throats of disbelievers”. 

Southern, who has over 700,000 followers on YouTube and was refused entry to the UK in 2018, also told viewers that they should “close that eye, just like you would to every single terrorist attack that happens in the Middle East and Europe every day”. 

The video was labelled an “ad friendly makeup tutorial”, which meant it did not appear to violate YouTube’s terms of service. It was removed only after the nature of the material became clear to the platform.

Research shows that while Islamophobia is among the fastest-rising forms of online hate, social media platforms remain ill-equipped to tackle it. A 2022 study carried out by the Islamic Council of Victoria in Australia found there were over 3.7m Islamophobic posts made on X, formerly known as Twitter, in the two years to August 2021, but only 15% were removed by moderators. 

In the US, the Center for Countering Digital Hate reported in 2022 that social media companies, including Facebook, Instagram, TikTok, X and YouTube, failed to act on 89% of Islamophobic posts reported to moderators.

Leidig said: “As I sunk down the rabbit hole, I saw this fine line between normalcy and extreme political beliefs displayed by influencers on a daily basis. Alongside Instagram photos of cuddling with puppies and perfectly manicured nails were screenshots of tabloid newspaper headlines about the sexual-grooming activities of Muslim gangs in the UK.”

The tactic is working, according to Imran, who is often abused online simply because her image shows her wearing a hijab. “Their far-right views are hidden under the guise of video content such as ‘get ready with me’ videos, reviews and beauty tutorials, during which there’s their own running commentary expressing their views and beliefs.” 

Leidig first began charting this migration towards hidden hatred in lifestyle content in 2019. “I knew that there were active women within this space but it didn’t seem like they were being profiled,” she said. “What was interesting for me is that they were also talking about topics like dating and relationships. Over time they started to shift their content — but that doesn’t mean they are any less Islamophobic.”

She then observed this shift accelerating during the coronavirus pandemic in 2020, when wellness, motherhood and relationship advice became popular themes on social media channels. Leidig witnessed how some women showcased their lives as homemakers raising children while disseminating dangerous and often far-right messages. Livestreams on YouTube channels about motherhood and parenting styles, for example, sometimes included far-right conspiracy theories about the Great Replacement, which claims that white Americans and Europeans are being replaced by non-white immigrants. 

“When they talk about family values or painting what is a white heteronormative family unit, there is the associated message of what does or doesn’t belong,” said Leidig.

Leidig’s book highlights a number of dangers embedded in this brand of far-right content. By targeting the private sphere, these influencers are able to connect with their followers in a uniquely emotional way, which makes Islamophobic radicalisation easier to achieve. She writes: “Even as someone who can spot the signs of radicalisation, I found it easy to become absorbed in these women’s worlds. And here lies the crux of the problem: these influencers are integral to normalising the far right in the 21st century through their visible social media performances.”

Shelina Zahra Janmohamed, a UK-based author and marketing executive who has also faced Islamophobic abuse online, said she recognised the hidden dangers of the trends described by Leidig. “My worry is this kind of content which has those messages built in is going into places where, as Muslim women, we will never see it. Then it’s very hard to tackle and defuse because we won’t know where the temperature of Islamophobia is rising,” she said.

So why aren’t social media platforms doing more to tackle these new forms of Islamophobia? Leidig said the approach often taken by social media companies involves producing a list of trigger words that indicate content may be disguising messages of hate. YouTube’s controversial list, which has been criticised by LGBTQ+ video producers, can lead to immediate demonetisation of a creator’s account. 

This strategy has sometimes proved ineffectual. Take, for instance, the term “seed oils”. Leidig warned that while it sounds like a wellbeing term, it sometimes masks a commitment to a back-to-nature form of alt-right Christian ideology that is critical of multiculturalism and espouses conspiracy theories about globalisation and food production. 

Organisations that monitor anti-Muslim content on social media say measures such as trigger words are inadequate. The Bridge Initiative, set up in 2015 by Georgetown University in the US, is a long-term research project on Islamophobia. Its associate director Mobashra Tazamal said companies such as YouTube and Facebook have shown “very little appetite” for acknowledging how “rife” Islamophobia is online.

“One of the primary issues when it comes to the spread of anti-Muslim content online is that social media companies have failed to take the problem seriously, either adopting very limited resources to tackle the issue or not even implementing any sort of regulatory practices,” she said. “Just as those who seek to spread hate become more strategic in their efforts, it is imperative that social media companies continue to enhance and develop their content moderation and regulation guidelines.”

Leidig hopes that her research will help move the fight against hateful online content out of the academic sphere and into the mainstream. “People who do study the far right often lose sight of the real harms this material can cause to communities,” she said. “There is a disconnect between these two worlds and I try to see myself as a bridge between them.”
