Filtering forum posts differs from filtering real-time chat and other user-generated content because forums are focused on specific topics. Implementing a profanity filter not only keeps content free of profanity, hate speech, and the like; it can also help ensure that conversations stay on topic. This post covers how to best use a profanity filter to support your moderation processes, limit user frustration, and keep forum content productive and appropriate.
Consider the five potential immediate results when a post is submitted from a user’s perspective:
The post is displayed immediately
The filtered post is displayed immediately
The post is rejected
The user is alerted that the post contains inappropriate content, then is allowed to modify the post and resubmit
The user is notified that the post must be reviewed by a moderator prior to being published
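The five outcomes above amount to a small dispatch problem: every submission ends in exactly one reaction. A minimal sketch in Python, where the reaction names and user-facing messages are illustrative assumptions rather than any particular product's API:

```python
from enum import Enum, auto

class Reaction(Enum):
    """Hypothetical reaction types mirroring the five outcomes above."""
    PUBLISH = auto()            # post is displayed immediately
    PUBLISH_FILTERED = auto()   # censored version is displayed immediately
    REJECT = auto()             # post is rejected outright
    PROMPT_EDIT = auto()        # user may modify the post and resubmit
    HOLD_FOR_REVIEW = auto()    # a moderator must approve before publishing

def handle_submission(reaction: Reaction) -> str:
    """Return an example user-facing message for each possible outcome."""
    if reaction is Reaction.PUBLISH:
        return "Your post is live."
    if reaction is Reaction.PUBLISH_FILTERED:
        return "Your post is live; some words were masked."
    if reaction is Reaction.REJECT:
        return "Your post was rejected for violating community guidelines."
    if reaction is Reaction.PROMPT_EDIT:
        return "Your post contains flagged words. Please edit and resubmit."
    return "Your post is awaiting moderator review."
```

In a real integration, the reaction would be chosen by the filter (or by your code, based on the filter's match details) rather than passed in directly.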
Ideally, your profanity filtering solution will let you adjust the above reactions based on the type of match that was found. For the most egregious words and phrases, you may choose to reject the post outright, while less offensive statements may be published and flagged for review by a moderator.
The latter reaction can also be applied when the profanity filter finds keywords or triggers that indicate the post is off topic. For example, if you prohibit political discussions, then when the filter finds the word “republican” or “democrat” in a post, the user can be alerted that a moderator must review the post before it is published.
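Assuming the filter reports each match with a category label (the `Match` record and the "severe"/"mild"/"topic" labels here are assumptions, not a specific vendor's response format), picking the strictest applicable reaction might look like this:

```python
from dataclasses import dataclass

@dataclass
class Match:
    """Hypothetical match record returned by a profanity filter."""
    term: str
    severity: str  # "severe", "mild", or "topic" (off-topic trigger)

def choose_reaction(matches: list[Match]) -> str:
    """Pick the strictest applicable reaction for a set of matches."""
    severities = {m.severity for m in matches}
    if "severe" in severities:
        return "reject"            # most egregious terms: refuse the post
    if "topic" in severities:
        return "hold_for_review"   # off-topic triggers: moderator approval
    if "mild" in severities:
        return "publish_and_flag"  # publish, but flag for moderator review
    return "publish"               # no matches: display immediately
```

For instance, a post matching only a prohibited political keyword would come back as `hold_for_review`, while a clean post would be published immediately.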
Rather than having moderators pre-approve posts, allowing community members to modify their submissions when the profanity filter finds a match is extremely effective for adult communities. Mature users will greatly appreciate not having to retype their entire post. For younger audiences, however, adding human moderation is recommended; that will be covered in a follow-up blog post.
New vs Existing Members
Posts from new users carry a significantly higher risk of inappropriate content than posts from established community members. Again, if your profanity filter provides details on the types of matches found, you can adjust your filter reactions and moderation effort based on both the match and the user's history. For example, ambiguous triggers such as “hard” can be sent to moderation for new users but published immediately for trusted, established users.
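One simple way to factor in user history is to treat post count as a trust signal. This sketch assumes a post-count threshold (the cutoff of 20 is an arbitrary illustration; tune it for your community) and the same illustrative match labels used above:

```python
def reaction_for(match_type: str, user_post_count: int,
                 trusted_threshold: int = 20) -> str:
    """Relax handling of ambiguous matches for established users.

    `trusted_threshold` is an assumed cutoff, not a recommended value.
    """
    if match_type == "severe":
        # Egregious terms are rejected regardless of user history.
        return "reject"
    if match_type == "ambiguous":
        # Words like "hard" that are only sometimes inappropriate:
        # hold new users' posts, publish established users' immediately.
        if user_post_count < trusted_threshold:
            return "hold_for_review"
        return "publish"
    return "publish"
```

A new member tripping an ambiguous trigger waits for a moderator; a long-standing member posting the same word is published immediately.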
Adjusting Filter Behavior per Forum Section
A common forum practice is to offer a rant section where users can “vent their frustrations” in a mature, constructive manner. Limits should still apply and be clearly communicated in the community guidelines. In most cases, the rant section may allow profanity while continuing to prohibit members from threatening one another. Forums can also offer sections for “off-topic” conversations where the filter is relaxed accordingly. Here again, if your profanity filter provides details on the types of matches found, you can tune the integration per forum section and configure the filter reaction and moderation effort appropriately.
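Per-section behavior can be expressed as a small rules table. A minimal sketch, assuming hypothetical section names and match-type labels (neither comes from any particular forum platform or filter product):

```python
# Hypothetical per-section filter settings; keys are illustrative.
SECTION_RULES = {
    "general":   {"allow_profanity": False, "allow_off_topic": False},
    "rant":      {"allow_profanity": True,  "allow_off_topic": False},
    "off-topic": {"allow_profanity": False, "allow_off_topic": True},
}

def is_allowed(section: str, match_type: str) -> bool:
    """Decide whether a match type should be tolerated in a given section."""
    # Unknown sections fall back to the strictest ("general") rules.
    rules = SECTION_RULES.get(section, SECTION_RULES["general"])
    if match_type == "profanity":
        return rules["allow_profanity"]
    if match_type == "off_topic":
        return rules["allow_off_topic"]
    if match_type == "threat":
        return False  # threats are never tolerated, in any section
    return True
```

Note that threats are rejected everywhere, including the rant section, which matches the guideline above: relaxed language rules never mean relaxed safety rules.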