Posted On: Aug 9, 2019
Amazon Rekognition is a deep learning-based image and video analysis service that can identify objects, people, text, and scenes, and can also support content moderation by detecting unsafe content. Starting today, you can detect content related to 'Violence' and 'Visually Disturbing' themes, such as blood, wounds, weapons, self-injury, corpses, and more. Further, Amazon Rekognition's ability to identify 'Explicit Nudity' and 'Suggestive' content has been improved, with a 68% lower false positive rate and a 36% lower false negative rate on average. Additionally, Amazon Rekognition now supports detection of new categories of adult content, such as unsafe anime or illustrated content, adult toys, and sheer clothing.
By using Amazon Rekognition for image and video moderation, human moderators can review a much smaller set of content flagged by AI. This allows them to focus on more valuable activities while still achieving full moderation coverage at a fraction of their existing cost. Moreover, Amazon Rekognition returns a hierarchical set of top-level and second-level moderation categories that can be used to create business rules for different geographic and demographic requirements. For a full list of all supported unsafe categories and their hierarchy, please see this page.
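As an illustration, the minimal sketch below uses the AWS SDK for Python (boto3) to call the DetectModerationLabels API on an image and apply a simple business rule over the returned label hierarchy. The bucket name, object key, confidence threshold, and the choice of top-level categories to flag are placeholder assumptions, not part of this announcement.

    import boto3

    # Assumes AWS credentials and a default region are already configured.
    rekognition = boto3.client("rekognition")

    # Hypothetical S3 bucket and object key, for illustration only.
    response = rekognition.detect_moderation_labels(
        Image={"S3Object": {"Bucket": "my-bucket", "Name": "uploads/photo.jpg"}},
        MinConfidence=60,
    )

    # Each returned label has a Name, a Confidence score, and a ParentName that
    # points to its top-level category (ParentName is empty for top-level labels).
    for label in response["ModerationLabels"]:
        parent = label["ParentName"] or "(top-level)"
        print(f"{parent} > {label['Name']}: {label['Confidence']:.1f}%")

    # Example business rule (an assumption for this sketch): route the image to a
    # human moderation queue if any second-level label falls under the 'Violence'
    # or 'Explicit Nudity' top-level categories.
    flagged = [
        label for label in response["ModerationLabels"]
        if label["ParentName"] in ("Violence", "Explicit Nudity")
    ]
    if flagged:
        print("Route to human moderation queue")

Because the hierarchy is returned with every label, different applications can enforce different rules (for example, flagging only top-level 'Violence' in one market but all second-level labels in another) without changing how the API is called.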
Updated image and video moderation is now available in all AWS Regions supported by Amazon Rekognition at no additional cost. To get started, you can try the feature with your own content using the Amazon Rekognition Console.