Policing Our Online Communities
It would appear that Discord has a toxicity problem. No, I’m not at all surprised either. Online social platforms attract such issues by their very nature. Discord is far from unique in this respect and is simply the latest online service to join an ever-growing list of social platforms that have become hotbeds of iniquity. However, I do applaud them for their transparency. They regularly publish reports on the “state” of their service and they don’t try to hide the problem. In fact, to combat it, Discord have recently acquired Sentropy Technologies, who have developed AI-powered, “data-driven moderation tools” designed to curb toxic behaviour. Naturally all parties have high hopes that they can tackle the issue. I, on the other hand, do not. Because technology alone is not the solution to the problem. It never is. Have facial recognition software, CCTV and biometric passports solved all the problems that they were supposed to? No, they have not.
There are two major problems associated with any service that facilitates the social gathering of people online. The first is an old and very well known one: anonymity. Discord, Twitter, Instagram and dozens of other platforms don’t really make any serious attempt to verify who you are. If you’re sufficiently tech savvy you can create an account for most services without providing any details that reveal your true identity. The moment you ensure anonymity you effectively forgo any semblance of accountability. The most that can happen is that your account gets closed. The second factor that has a bearing on the matter is size. When a community grows beyond a certain size, it becomes virtually impossible to police with automated moderation tools and processes. Furthermore, people are very good at circumventing rules and regulations. All too often, the unhackable gets hacked and the impenetrable gets breached. It’s one of the reasons I’m not overly confident about Sentropy.
Although there are no quick and easy solutions to these problems, I think there are steps we can take ourselves to improve the quality of our online communities and keep them equitable, especially with regard to Discord servers, which have become ubiquitous these days. These tend to start off as quite small and intimate environments that grow over time. However, if some basic procedures are put in place from the start, you can keep them from spiralling out of control. The first is to have a clear set of rules and a code of behaviour. Set out what you will and won’t tolerate; that way offenders can never plead ignorance. Lead by example. Politely correct minor transgressions and don’t allow double standards. If a friend breaks the rules, treat them the same as those you don’t know as well. Consistency is key to establishing a fair system.
Secondly, have moderators and let your community know that you have moderators. Be proactive. If you see something anomalous and you’re not sure whether it’s a joke or something more serious, then make enquiries. Be civil, seek clarification and, if the problem turns out to be a false positive, move on. However, if you have a bona fide offender caught “bang to rights”, then sanction them according to your rules. If they need to go, show them the door. Tell them which rule they violated and the consequences of such an act, then end the conversation and ditch them. You owe them nothing more. This is not a question of free speech. This is a private Discord server with a clear set of rules. And if you do have to dispense with someone, acknowledge their departure to your community but don’t allow a debate about it.
Sometimes, Discord servers can grow in popularity and managing invitations becomes an issue. Smaller servers are usually populated by friends inviting friends, and pre-existing social bonds tend to keep things cordial. However, such screening becomes far less robust the moment you allow open invitations. A possible compromise would be a system where an existing and established member of the Discord server has to vouch for anyone they invite. If they make a mistake and bring someone to the server who then becomes a problem, their invitation privileges are temporarily revoked. Introducing this minor level of accountability can prevent a potential faux pas. This particular approach has worked well in several of the MMO guilds I’ve joined over the years.
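For those who like to see such things spelled out, the bookkeeping behind a vouching scheme is simple enough to sketch in a few lines of Python. Everything below is my own illustration: the names (VouchRegistry and so on) are invented and none of it touches Discord’s actual API. The point is how little state you need to track, namely who vouched for whom and whose invitation privileges are currently suspended.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class VouchRegistry:
    """Tracks who vouched for whom and suspends invite privileges on a bad call."""
    suspension_period: timedelta = timedelta(days=30)
    sponsors: dict = field(default_factory=dict)          # invitee -> sponsor
    suspended_until: dict = field(default_factory=dict)   # sponsor -> datetime

    def can_invite(self, member: str, now: datetime) -> bool:
        until = self.suspended_until.get(member)
        return until is None or now >= until

    def record_invite(self, sponsor: str, invitee: str, now: datetime) -> bool:
        if not self.can_invite(sponsor, now):
            return False  # sponsor's invite privileges are currently revoked
        self.sponsors[invitee] = sponsor
        return True

    def record_sanction(self, offender: str, now: datetime) -> None:
        # If the offender was vouched for, their sponsor temporarily
        # loses the ability to invite anyone else.
        sponsor = self.sponsors.get(offender)
        if sponsor is not None:
            self.suspended_until[sponsor] = now + self.suspension_period


# Example: Alice vouches for Bob, Bob gets sanctioned,
# and Alice loses her invite privileges for a while.
registry = VouchRegistry()
now = datetime(2021, 8, 1)
registry.record_invite("alice", "bob", now)
registry.record_sanction("bob", now)
print(registry.can_invite("alice", now))  # False
```

The thirty-day suspension is an arbitrary figure; the point is simply that a sponsor’s standing is tied to the behaviour of the person they brought in.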
I don’t believe there’s any substitute for real online community policing. AIs may well be able to parse text and look for keywords and phrases. But bullying and hectoring are often a question of semantics and the deliberate use of ambiguous language that can be interpreted in several ways. I do not doubt that automated moderation tools will intercept ill-humoured abuse from a young gamer who is hot under the collar. But will they really pick up on the subtle needling the lifelong malcontent and bully uses? I’m not so sure. However, human intervention also comes with its own set of problems. It is inherently labour-intensive and no one wants to do it, as it’s quite a responsibility. And then you have to make sure that the person who has taken the job is not a closet sociopath themselves. But if we want to reclaim our online spaces then we have to show willing, and someone has to shoulder the burden. It’s how we police our communities in the real world.
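As a postscript, here is the crudest possible version of the keyword approach, in Python, just to make the gap between pattern matching and meaning concrete. The blocklist and the example messages are invented, and real tools such as Sentropy’s are vastly more sophisticated than this, but the basic failure mode is the same: the hot-headed outburst is trivially caught, while the carefully worded dig sails through.

```python
import re

# A toy keyword filter. The blocklist is invented purely for illustration.
BLOCKLIST = {"idiot", "trash", "loser"}


def flags_message(message: str) -> bool:
    """Return True if the message contains a blocklisted word."""
    words = set(re.findall(r"[a-z']+", message.lower()))
    return not BLOCKLIST.isdisjoint(words)


# Crude, hot-headed abuse is caught easily:
assert flags_message("You absolute idiot, uninstall the game.")

# Subtle needling that relies entirely on context slips straight through:
assert not flags_message("Interesting that you're still turning up after last week.")
```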