Policing Out-of-Game Toxicity
“A game company has no rights or responsibility to police Discord, Reddit, et al. The company should not ban in game someone because they are bad (misogyny, racism, homophobia) about OOG people in OOG public forums. But what about people who are obviously ‘bad’ about in-game people/groups? I get the not wanting to police the world and certainly resist the nanny state more than most. But what if someone says something offensive about players/employees on a very public Reddit or Discord? It’s not a free speech issue; in the US you can say most anything. But the game company certainly can determine who can play its game. Do they make more money by letting these people play? I guess at the end of the day, CCP is correct, but it does not feel quite right.” – Sally Bowls, MOP Reader.
This is one of the more intriguing questions that’s been explored of late over at Massively Overpowered. I find it particularly interesting because it can be considered part of a wider, ongoing cultural change. It is not uncommon these days for employers to check up on potential job candidates beyond their resume. There have been cases of interview boards and HR departments trawling through people’s social media accounts, checking for anything “unsavoury” that could potentially embarrass or compromise their company. Traditional notions of privacy are changing, and the “joined up” nature of social media platforms means you theoretically have far more data to act upon. This may mean checking whether your new head of PR is a member of the Flat Earth Society, or whether a player of an MMO is continuing to be problematic towards the community outside of the game itself. But just because you can do something doesn’t mean that it should be done, as Sally Bowls states in her question.
Whenever someone or some institution raises the spectre of implementing new rules and regulations to address a problem, I always ask about the measures that are already in place. Are they sufficient, and are they being utilised effectively? More often than not the answers to these questions are “yes” and “no” respectively. In the case of policing out-of-game toxicity, there is already adequate provision in place through existing legislation. Racism, hate crimes, threats of violence, and other forms of intimidation are all criminal offences, and if they can be proven then the culprit can be dealt with accordingly. Depending on where such an individual is causing problems outside of a game, there are usually existing provisions to take care of the matter. Twitter, Reddit, Facebook, and other platforms all have TOS which should cover such behaviour and deal with it. Sadly, these companies are neither quick nor consistent in applying such checks and balances.
However, all the above is grounded in law and therefore has to be managed within such a framework. If a game developer or publisher is looking to police out-of-game toxicity beyond the confines of the law, then it becomes more problematic. For example, consider a hypothetical disgruntled gamer who fell out of love with their favourite MMO because the developers changed the running animation on the Steampunk Pangolin mount. This fictitious gamer now runs a blog or YouTube channel and regularly posts negative comments about the game, the developers, and the wider gaming community. None of it is technically libellous or in breach of the law, but due to the high profile of this angry gamer, it does impact upon community relations and broader perceptions of the game. The publisher may well want to see if they can “contain” or even “shut down” this individual because it may affect their bottom line. They may also wish to do so simply to protect their community. However, we now find ourselves faced with a classic freedom of expression conundrum. The allegedly “toxic” gamer may well be an asshole, but as far as I’m aware that’s not yet a hanging offence. To try and stifle that individual’s right to express themselves is wrong. If you want a true democracy and all the benefits it brings, then enduring assholes is the price of admission and the ongoing collateral damage. Until this fictional individual breaks the law, as much as it pains me, we have to let them run around and bark at the moon in their own back yard.
Now, I’m not advocating that we just throw in the towel at this point. Trolls and the like should not go unchallenged; we should call them out and highlight what we consider to be wrong. However, we must do so in an appropriate manner. If we wish to occupy the moral high ground, then we need to act accordingly. Some folk may well see this as fighting with one hand tied behind your back but, again, this is the price that you pay if you want a free and just society. Therefore, challenge any allegations, lies, or straightforward shitty behaviour, but be gracious, factually correct, and never get down in the mud with the source of the toxicity. A games publisher can certainly refuse an individual’s business or ban them from its forums. The TOS that accompany most player accounts usually give the publisher the whip hand in such situations.
The main problem with such situations is that they’re seldom binary issues. Games publishers are not always bastions of morality and champions of consumer rights; business is designed to look after its own needs first. Let us not forget that some games publishers have actively tried to prevent game reviewers from expressing their legitimate opinions. Also, “toxicity” is a difficult term to quantify exactly. As gamers I’m sure we could agree on a lot of common ground, but there is plenty of scope for grey areas around the periphery. Who should ultimately get to define the exact parameters of the word? And we shouldn’t forget that the smart troll can always stay one step ahead of any real trouble, especially if they mask their identity effectively and compartmentalise their various personas. A ban is hardly the most difficult thing to bypass.
Overall, unless an individual is breaking the law, I’m not in favour of a game developer or publisher attempting to police the wider community outside of the confines of the game itself and its official social media platforms. Blizzard announced earlier this year that they would be proactively policing YouTube with regard to their games, as a way of seeking out toxic behaviour. Again, it is a notion born of honest intent, but they weren’t specific as to what criteria they were using. At present, Overwatch players can be suspended simply due to the weight of in-game complaints against them. Although genuinely toxic players may be identified and sanctioned, will it all end there? Will we reach a point where players simply point to external comments and views they do not like and request that Blizzard sanction the author? Furthermore, beyond gaming, we have seen sports pundits and other media personalities fired for things they’ve said and done outside of their employment. Sometimes it has been justified, but on other occasions it has been questionable and has raised a lot of wider societal issues. So I believe caution is required in any form of wider policing, be it in gaming or elsewhere in modern life. Sadly, we do not live in enlightened times, and reasoned responses are all too often replaced by knee-jerk reactions and baying mobs.