How Online Community and Forum Software Can Improve to Address Common Guideline Violations
In my final article of 2013, I reviewed data from member reports of inappropriate posts on my community, outlining the most popular reasons that posts are reported. For my first article of 2014, I thought I’d take a look at how community software platforms can address these issues and make all of our lives a little easier.
Not all automation is good, but I’m a fan of automation that works well without negatively impacting the member experience. I am going to discuss solutions that I feel fit this mold, as well as manual solutions that could be built into software.
Some ideas could be impacted by technical limitations, such as server resources, but I am going to approach this from an ideal perspective. I think about this sort of thing all the time and I wanted to share some ideas freely. Any software vendor reading this, please feel free to take them (though credit is always nice). I do think it would be fun to take a role at a vendor where my job would be to focus on features and functionality, especially on the manager end of the spectrum. Maybe I’ll do that some day.
Advertising and Spam
Of all of the issues I’ll mention, this is probably the one that vendors have been most focused on, and for good reason. A lot of the things that make sense are already happening. Software is being tied into spam databases and algorithms that work to identify spam before it is posted. Some of these are free or low cost, and some, such as Sitebrains, are not. It’s important that good free or very low cost solutions are available; if they are not, they won’t impact the online community landscape, because the vast majority of communities won’t be able to justify them.
Of course, there is CAPTCHA and I think that has a place, though I lean toward the creative CAPTCHAs, rather than the ones that simply ask you to type a jagged string of letters and numbers.
There are numerous manual solutions that community managers have employed, but they tend to be added via a plugin rather than being available by default. There are platforms that have at least some of them, I’m sure. Here are the main ones that come to mind (I’ve sketched how several of them might fit together after the list):
- Automatically pre-moderate the first X number of posts a new member makes, until approved.
- Automatically pre-moderate any post with a link in it within the first X number of posts a new member makes, until approved.
- Prevent a new member from mentioning a link within the first X number of posts.
- Block specific URLs from being posted via the software’s word censor feature.
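Here is that rough sketch. Everything in it – the thresholds, the function name, the return values – is my own invention for illustration, not a feature of any particular platform:

```python
import re

# Illustrative settings; a real platform would expose these in the admin panel.
PREMOD_POST_THRESHOLD = 5    # pre-moderate a member's first 5 posts
LINK_BLOCK_THRESHOLD = 10    # no links within a member's first 10 posts
BLOCKED_URLS = {"spamsite.example"}  # hosts blocked via the word censor

LINK_PATTERN = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def check_new_post(approved_post_count: int, body: str) -> str:
    """Return 'reject', 'hold' (pre-moderate), or 'publish'."""
    domains = [d.lower() for d in LINK_PATTERN.findall(body)]

    # Block specific URLs outright, regardless of how established the member is.
    if any(d in BLOCKED_URLS for d in domains):
        return "reject"

    # Prevent new members from mentioning links at all; a softer variant
    # would return "hold" here to pre-moderate link posts instead.
    if domains and approved_post_count < LINK_BLOCK_THRESHOLD:
        return "reject"

    # Pre-moderate everything from brand-new members, until approved.
    if approved_post_count < PREMOD_POST_THRESHOLD:
        return "hold"

    return "publish"
```

A real implementation would pull the thresholds from settings and route “hold” posts into a moderation queue.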
I haven’t really had to implement any of these solutions, but there are definitely community managers out there who have used and liked them. It is always useful to have the option, and these tools sometimes apply in other contexts. For example, if someone were pushing a stream of people to your forums specifically to disrupt them, pre-moderation would come in handy. In that case, it’s not about spam, but it is just as useful.
Double Posts and Cross Posting
A double post is when the same content is posted more than once. Cross posting is when the same content is posted in two or more sections.
Pretty much the only thing I’ve seen a software platform attempt for this issue is to offer flood controls, which require you to wait a minimum amount of time before you can make a new post. They exist for more than just double posts and cross posts, but they help a little bit.
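As a rough illustration, a basic flood control check might look something like this; the 60-second window and the in-memory timestamp store are assumptions for the sketch, not how any specific platform does it:

```python
import time

FLOOD_WINDOW_SECONDS = 60               # illustrative; usually an admin setting
_last_post_time: dict[int, float] = {}  # member_id -> time of last post

def can_post(member_id: int) -> bool:
    """Allow a new post only if the member's flood window has elapsed."""
    now = time.time()
    last = _last_post_time.get(member_id)
    if last is not None and now - last < FLOOD_WINDOW_SECONDS:
        return False  # too soon; the software asks the member to wait
    _last_post_time[member_id] = now
    return True
```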
It is very hard to stop someone who is determined. For example, you could have someone who posts about the same thing but uses totally different words. In such a case, it is hard for software to know. But what I would like to cut down on is the verbatim copies. The people who want to circumvent it will do so, but it would cut down on many duplicate posts, especially in these two circumstances:
- People who don’t know that what they are doing is frowned upon. Not every community discourages double posts.
- People who don’t realize they are doing it. For example: the page is loading slowly and they hit the submit button twice because they think it didn’t go through.
That is who you should focus on because that is who can be helped. I would love to see new posts checked against other posts that the particular member has made (not all posts ever made, for the sake of not overtaxing your database or server). It should only apply to posts of a certain length – perhaps longer than 5-10 words. If they have made the same post before, a dialog should be displayed, letting them know that they have already made this post and that our community guidelines do not allow for content to be posted more than once. Finally, a link to their existing post should be included, so that they can head right to it.
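Here is a minimal sketch of what that check could look like. The MIN_WORDS threshold, the normalization, and the idea of passing in only the member’s own post bodies are all assumptions on my part:

```python
MIN_WORDS = 8  # only check posts longer than roughly 5-10 words

def normalize(text: str) -> str:
    """Collapse whitespace and lowercase so trivial differences still match."""
    return " ".join(text.split()).lower()

def find_duplicate(new_body: str, member_post_bodies):
    """Return the index of an identical earlier post by this member, or None.

    member_post_bodies holds only this member's posts, which keeps the
    comparison cheap, as suggested above.
    """
    if len(new_body.split()) < MIN_WORDS:
        return None  # too short to meaningfully compare
    target = normalize(new_body)
    for i, body in enumerate(member_post_bodies):
        if normalize(body) == target:
            return i  # caller shows the dialog and links to the existing post
    return None
```

Exact matching after whitespace normalization keeps this cheap; fuzzier matching would catch more rewording but risks false positives.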
Inflammatory and Disrespectful Comments
This is a tricky one because it is about reading behaviors and, if done badly, it can wreak havoc with false positives. How do you stop people from being mean to each other? You can prevent them from using certain terms that are obviously disrespectful, and that will have a small impact.
There are service providers that work in this space, such as Crisp Thinking. Understandably, they exist to serve communities with substantial budgets. Their arrangement with Moot seems to indicate that they are interested in working with communities that have less money, though pricing for the Moot integration has yet to be announced.
I think there are many communities that could justify paying something for a service like this, but is it enough to sustain a business model? That seems doubtful. For example, many could pay $10 a month. Some could pay $20. A small percentage could get above a $100-a-month tier. Few could do $500 or more. Is it reasonable to expect a vendor like Crisp Thinking to support $10-$20-a-month customers and have a good business? Probably not, because their services cost more to create and maintain. They are worth what they charge, I’m sure, so it doesn’t make sense for them to cater to these communities.
But it would be interesting to see someone come out with a similar service aimed at smaller communities or those without the backing of a major corporation.
Inappropriate Content
As with inflammatory and disrespectful comments, this is in large part a behavioral issue, and it requires smart algorithms and machine learning. With image and video content (an easy example: pornography), specific filtering technology is needed. Crisp Thinking also offers this, but I don’t know of anyone who is offering it at a level that most online communities can afford. The smallest ones wouldn’t need it, but there is a sweet spot of maybe between 100,000 posts and a few million posts where a community manager could make use of a solution like this, if they could afford it.
Profanity and Vulgarity
It seems like most software vendors are content with a word censor feature that changes a vulgar term to something else. For years, I have been promoting an idea I came up with, called Censor Block. As far as I know, my communities were the first to have this functionality, targeted specifically at profanity.
Here’s how it used to work: you add a term to your censor list. Let’s say you replace the term with an asterisk (*). The member makes a post, and the word is replaced by an asterisk. Maybe that is good enough for you, maybe it isn’t. For me, it was simply a pointer to the fact that a vulgar term was included in the post. The post still had to be removed and documented, with the member being contacted.
Here’s how it works now: you add a term to your censor list. The member makes a post using that term. Instead of the post going through, they are informed that the term used is a violation of your guidelines. The specific term is highlighted and their post is included below, so they can edit it and re-submit. Once the term is adjusted, the post goes through.
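A rough sketch of that flow, with an illustrative censor list and bracket markers standing in for whatever highlighting the software would actually render (none of this reflects a specific platform’s implementation):

```python
import re

CENSOR_LIST = ["badword", "worseword"]  # illustrative placeholder terms

def censor_block(body: str):
    """Return (allowed, highlighted_body).

    If allowed is False, the post is not published; the member sees their
    own text back with each violating term highlighted so they can edit
    and re-submit.
    """
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, CENSOR_LIST)) + r")\b",
        re.IGNORECASE,
    )
    if not pattern.search(body):
        return True, body  # no censored terms; the post goes through
    highlighted = pattern.sub(lambda m: f"[{m.group(0)}]", body)
    return False, highlighted
```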
This is an example of automation that works. I would estimate that it dropped guideline violations for profanity by roughly 90% overnight. I like it because it means that the vulgarity is not used and is never posted. That means we never have to remove it, document it, and contact the member. Members like it because their post is never removed and they can simply adjust it and go. It saves everyone time.
Some people might say that it simply tips people off to profane terms or encourages them to circumvent it. But that is what any word censor does. People will do that anyway. Censor Block allows well-meaning members to adjust their posts on their own. In my experience, having used it for 8 years or so, I would never want to run a forum without it.
Signatures and Profile Fields
The main thing I want to say is that whatever filtering you have in place for posts should also apply to signatures, profile fields, avatars, etc. Don’t exclude profiles from whatever technological enhancements you have. For example, Censor Block applies to profile fields on my communities. Most software already does a good job with signature restrictions, such as length and whether images are allowed. Limitations on BBCode or formatting options are also helpful (no bright red signatures).
Hotlinking and Direct Linking
Do any community software platforms care about this? On my communities, we don’t allow people to embed images from sites if it seems like they do not have permission to hotlink to the server in question. We extend this to direct links, where a member posts a link directly to an image or file.
With most community software, you can disable embedding or disable the BBCode that makes it possible. You can turn on your own file uploads so that you host any embedded files yourself. That will work for some. But for many others, it is not ideal, so here is what I would like to see (a rough sketch follows the list):
- Allow community managers to define a list of URLs from which embedding of files is permitted – for example, free file hosts. If individual members want their sites whitelisted, they can request that the URL be added.
- Furthermore, you could allow members to hotlink to the domain they have listed on their profile, which is presumably their own website. This is sketchy because some people use free services where hotlinking is discouraged. But it is an option.
- If someone tries to embed an image from a URL that is not allowed, a notice should be displayed, politely informing them and suggesting a couple of file hosting services to use.
- Community managers should have the option to extend this functionality to cover posted URLs, not just embedded images. So if a URL ends in a certain way – for example, in a graphic file format like .jpg (example: http://www.ifroggy.com/photo.jpg) – then this whitelist could apply to it, as well. This should include an on/off switch for the functionality, as well as the ability to define the file types that would trigger it.
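Pulling those ideas together, the check might look something like this sketch. The whitelist entries, the switch, and the allow_url function are all illustrative assumptions, not features of any existing platform:

```python
from urllib.parse import urlparse

EMBED_WHITELIST = {"imagehost.example", "filehost.example"}  # manager-defined hosts
EXTEND_TO_DIRECT_LINKS = True                    # the on/off switch
TRIGGER_EXTENSIONS = (".jpg", ".jpeg", ".png", ".gif")  # manager-defined types

def allow_url(url: str, is_embed: bool, member_domain: str = "") -> bool:
    """Decide whether an embed (or, optionally, a direct file link) is allowed."""
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    path = parsed.path.lower()

    # Direct links are only checked when the switch is on and the URL
    # looks like one of the configured file types.
    if not is_embed and not (
        EXTEND_TO_DIRECT_LINKS and path.endswith(TRIGGER_EXTENSIONS)
    ):
        return True

    # Optionally let members hotlink to the domain listed on their profile.
    if member_domain and host == member_domain.lower():
        return True

    # Otherwise the host (or a subdomain of it) must be whitelisted.
    return any(host == d or host.endswith("." + d) for d in EMBED_WHITELIST)
```

When allow_url returns False, the software would show the polite notice and suggested file hosts described above.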
I think this would be wonderful and it would also help forums and online communities, as a whole, to be even greater citizens of the internet.
What’s Out There and What Else?
I would love to see software vendors make these issues a priority.
Have you seen any community or forum software options that offer the feature ideas mentioned in this article? Also, what creative solutions have you thought of for the challenges I wrote about? Please let me know in the comments.