It is difficult to talk about profanity filters without a common vocabulary among industry participants. People looking for technology solutions will benefit from standardized filter names, as well as some clarity about what distinguishes each type of filter. We’re going to name those profanity filters, explain each one, and give some examples using screenshots. Here are the filter types: Black List Filtering, Free Form White List Filtering, Restricted Entry White List Filtering, Menu Messaging, and Bozo Filtering.
Black List Filtering
This filter allows the user to type any message they want, except for words and phrases that are on the black list. Some communities also disallow certain characters, such as numbers and some forms of punctuation like !,.@$?\ and |. Once the user types and submits their message, it is sent to the server to be processed by the filter. Acceptable content is posted in the community, while restricted content triggers a predetermined response: replacing the offending letters with stars, blocking the message altogether, or blocking the message from the community while still displaying it in the author’s own message stream (bozo filtering).
Example: The message is blocked and a warning screen is displayed
Free Form White List Filtering
The user can type any text they choose. Upon submission, the filter will prevent use of any words not on the whitelist. It will also disallow any phrases on the blacklist, even when those phrases are composed entirely of whitelisted (acceptable) words.
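A sketch of the two-pass check this implies: every word must be whitelisted, and the full message must also clear a phrase blacklist. The word lists and the blocked phrase below are hypothetical examples only:

```python
# Hypothetical lists for illustration.
WHITELIST = {"want", "to", "play", "ball", "where", "do", "you", "live"}
# A phrase made entirely of whitelisted words can still be unsafe.
BLACKLIST_PHRASES = {"where do you live"}

def whitelist_filter(message: str) -> bool:
    """Return True if the message may be posted."""
    text = message.lower()
    if any(word not in WHITELIST for word in text.split()):
        return False  # contains a non-whitelisted word
    if any(phrase in text for phrase in BLACKLIST_PHRASES):
        return False  # blacklisted phrase of otherwise-acceptable words
    return True
```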
Restricted Entry White List Filtering
This filter is similar to the Free Form White List option; however, users are presented with predictive text drawn from the whitelist as they type. This way, they can see which words are acceptable while they compose their message. The restricted entry whitelist filter will also block blacklisted phrases. Some deployments of this filter type will also notify a user when they’ve typed a word that is on the blacklist.
Example: Unacceptable words are highlighted in red
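The predictive-text part reduces to prefix matching against the whitelist; a minimal sketch, with a hypothetical word list:

```python
# Hypothetical whitelist for illustration; kept sorted so
# suggestions appear in a stable alphabetical order.
WHITELIST = sorted({"great", "game", "good", "play", "player"})

def suggest(prefix: str, limit: int = 5) -> list[str]:
    """Suggest whitelisted words matching what the user has typed so far."""
    prefix = prefix.lower()
    return [w for w in WHITELIST if w.startswith(prefix)][:limit]
```

Real deployments would use a trie or an index for large whitelists, but the user-facing behavior is the same: only acceptable completions are ever offered.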
Menu Messaging
This technique for managing user-generated content is much more restrictive. It does not actually have a filter mechanism; rather, it offers a collection of predetermined (canned) words and phrases from which users select to communicate with others. This is the safest and simplest form of chat messaging.
Example: Users can select a number of predetermined chat messages
Bozo Filtering
Bozo filtering (sometimes called author-only filtering) isn’t actually a filter at all. It’s a technique whereby the system presents the entered text only on the screen of the author, while blocking it from view for all other users. This tricks the author into thinking the comment was posted when it was actually blocked.
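The mechanism is a per-viewer visibility check at read time; a sketch of that logic, with hypothetical message records (the `bozo` flag and field names are illustrative):

```python
def visible_messages(all_messages: list[dict], viewer_id: str) -> list[dict]:
    """Everyone sees normal messages; a shadow-blocked ('bozo'd')
    message appears only in its own author's stream."""
    return [
        m for m in all_messages
        if not m["bozo"] or m["author"] == viewer_id
    ]

messages = [
    {"author": "alice", "text": "hi all", "bozo": False},
    {"author": "bob", "text": "blocked text", "bozo": True},
]
```

Here bob would see both messages, including his own blocked one, while every other user would see only alice's.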
So there you have it, a comprehensive list of the profanity filter types complete with descriptions and visual examples. Let’s standardize around these names throughout the industry to minimize confusion and simplify the conversation around online safety.