What motivates mosque arson comments on Facebook?

A recent Huffington Post article highlighted Britain First’s ‘endorsement’ of Facebook posts relating to the destruction of a proposed mosque.

The two ‘endorsements’ relate to the comments ‘Dynamite the hell out of it…grrrr’ and ‘Accidents can have a funny way of happening ;)’.

But the individual calling for the use of dynamite qualified her remarks by stating 'I didn't say I wanted to. Read it again then judge me'. Britain First also 'liked' that comment, so the idea of a total endorsement is not strictly accurate. Yet the tone of her entire comment reflects a fatalistic sense that civil war is looming. Others posted 'it's nothing a can of petrol and match wouldn't solve' and sought to justify it: 'taking back your country by any means necessary is not terrorism. It is defence'.

[Screenshot: discussion from the Britain First page.]

The comments raise two interesting questions: why would individuals post them, and why do they gain popularity?

It could reflect how people navigate the internet. John Suler’s influential ‘Online Disinhibition Effect’ breaks down internet behaviour into six factors. For the sake of space and relevance, I will focus on just two.

One is how people compartmentalise their identities into 'online' and 'real world' personas. Behaviour in the online world is treated as inconsequential. If anything, the extreme nature of the comments is the blunt end of insecurities around Muslims. Some may never act on these comments offline; others may consciously (or otherwise) blur the line into incitement.

Suler also contends that online discussions are asynchronous: interactions do not happen in real time. It might therefore take minutes or even hours for counter-speech to emerge, by which time it is already drowned out. This lack of real-time discussion potentially heightens the sense of online disinhibition, which may encourage individuals to 'like' toxic comments.

Facebook’s current policy encourages dialogue between parties in relation to hate speech. Rather than reporting straight away, users are encouraged to hide comments from their news feeds or to message the person responsible directly before reporting to Facebook’s Community Standards team.

Nor is it Facebook’s responsibility to police the platform (it is simply too large, and free speech is important). The fundamental issue remains the threshold for removing online hate speech. For example, the comment quoted above, ‘it’s nothing a can of petrol and match wouldn’t solve’, was not considered in breach of the Community Standards policy.

Yet a failure to deal proactively with these comments when they are reported creates a toxic platform in which they are popularised and may encourage others to post similar remarks. Greater dialogue between organisations like ours and Facebook can help strengthen the platform so that all users feel safe.

Cross-posted from Tell MAMA.