Image & Video Moderation
Image moderation, simple
Community tolerance for vulgar (sexual, violent) images and videos varies widely. CleanSpeak provides tools that can support every tolerance level.
CleanSpeak takes an ensemble approach to image moderation, combining three tools to best protect a community while keeping the cost of doing so as low as possible. CleanSpeak provides the following tools:
Image Approval Queue
Every action that a moderator takes is recorded within CleanSpeak. You can run a report summarizing those actions (e.g., approvals, dismissals, edits, user actions). With this information you can verify that your moderation team is enforcing your policies appropriately and make any necessary changes.
Google Vision API
CleanSpeak utilizes Google's industry-leading Vision API to allow or reject images automatically based on machine-learning scoring. If an image falls into a grey area, CleanSpeak lets you dictate what happens to it: it can be rejected, queued for review, or allowed.
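The allow/review/reject split can be pictured as a pair of thresholds over the likelihood levels that Vision's SafeSearch annotations use. The sketch below is illustrative only; the `decide` helper and threshold names are assumptions, not CleanSpeak's actual API.

```python
# Hypothetical sketch of the allow / reject / queue-for-review decision
# made on top of Google Vision SafeSearch-style likelihood levels.
# decide(), reject_at, and review_at are illustrative names.

# SafeSearch annotations use ordered likelihood levels.
LIKELIHOODS = ["VERY_UNLIKELY", "UNLIKELY", "POSSIBLE", "LIKELY", "VERY_LIKELY"]

def decide(adult: str, violence: str,
           reject_at: str = "LIKELY", review_at: str = "POSSIBLE") -> str:
    """Map the worst likelihood across categories to a moderation action."""
    worst = max(LIKELIHOODS.index(adult), LIKELIHOODS.index(violence))
    if worst >= LIKELIHOODS.index(reject_at):
        return "reject"            # clearly vulgar: block automatically
    if worst >= LIKELIHOODS.index(review_at):
        return "queue_for_review"  # grey area: send to human moderators
    return "allow"                 # clearly safe: publish automatically

print(decide("VERY_UNLIKELY", "UNLIKELY"))  # allow
print(decide("POSSIBLE", "UNLIKELY"))       # queue_for_review
print(decide("VERY_LIKELY", "UNLIKELY"))    # reject
```

Raising `review_at` shrinks the grey area (fewer images reach human moderators); lowering `reject_at` rejects more aggressively.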
Flagging
A mechanism for users to report images they deem inappropriate. Flagging can result in image removal (usually after a certain number of users flag the image) or human moderation (the image is placed in a queue for a community moderator to review).
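The behavior described above amounts to counting distinct flaggers per image against two thresholds. The following is a minimal sketch under that assumption; `FlagTracker` and the threshold names are hypothetical, not part of CleanSpeak.

```python
# Hypothetical sketch of user flagging: once enough distinct users flag an
# image, it is queued for review and eventually removed. FlagTracker and
# its thresholds are illustrative names, not CleanSpeak's API.

class FlagTracker:
    def __init__(self, removal_threshold: int = 5, review_threshold: int = 2):
        self.removal_threshold = removal_threshold
        self.review_threshold = review_threshold
        self.flags: dict[str, set[str]] = {}  # image_id -> flagging user_ids

    def flag(self, image_id: str, user_id: str) -> str:
        """Record a flag and return the image's resulting state."""
        users = self.flags.setdefault(image_id, set())
        users.add(user_id)  # duplicate flags from the same user are ignored
        if len(users) >= self.removal_threshold:
            return "removed"
        if len(users) >= self.review_threshold:
            return "queued_for_review"
        return "visible"

tracker = FlagTracker()
print(tracker.flag("img-1", "alice"))  # visible
print(tracker.flag("img-1", "bob"))    # queued_for_review
```

Tracking user IDs in a set (rather than a raw counter) prevents one user from removing an image by flagging it repeatedly.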
Each image moderation technique can be leveraged to eliminate or vastly reduce the workload of human moderators.
Which solution is right for you?
If you have a manageable number of images and human moderation resources on hand, manual image moderation is the best option. When the volume of images grows beyond your human moderation budget, you can turn on CleanSpeak's automated image analysis (which leverages Google's Vision API) to reject vulgar, sexual, violent, or otherwise harmful images based on a risk level you set.
Lastly, we always recommend enabling flagging for users to report images that have slipped through the cracks.
Video moderation works in a similar fashion: your team uses CleanSpeak to display videos for review and approval. Because there is currently no reliable automated method of detecting harmful content in video, we recommend one of the following:
Enabling direct human moderation of videos before they are displayed in the community. This method works best if your community is targeted toward children, or if displaying a harmful video before it is flagged would damage the community.
Allowing videos immediately and enabling the flagging mechanism to empower your users to remove harmful videos. This method works best if your community is geared toward an adult audience.
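The first recommendation, pre-moderation, can be pictured as a hold-then-publish queue: uploads are invisible until a moderator approves them. This is a minimal sketch of that flow; `VideoReviewQueue` and its methods are hypothetical names, not CleanSpeak's API.

```python
# Minimal sketch of pre-moderation for videos: uploads are held in a
# review queue and become visible only after a moderator approves them.
# VideoReviewQueue is an illustrative name, not CleanSpeak's actual API.
from collections import deque

class VideoReviewQueue:
    def __init__(self):
        self.pending = deque()  # videos awaiting human review
        self.visible = set()    # videos approved for the community

    def submit(self, video_id: str) -> None:
        self.pending.append(video_id)  # held back until reviewed

    def review_next(self, approve: bool) -> str:
        video_id = self.pending.popleft()
        if approve:
            self.visible.add(video_id)  # publish to the community
        return video_id                 # rejected videos are simply dropped

q = VideoReviewQueue()
q.submit("v1")
q.submit("v2")
q.review_next(approve=True)   # v1 approved
q.review_next(approve=False)  # v2 rejected
print(sorted(q.visible))      # ['v1']
```

The second recommendation inverts this: videos are added to `visible` immediately, and a flagging mechanism like the one sketched earlier removes them after the fact.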