Koo launches new safety features for proactive content moderation


India’s microblogging platform Koo has announced the launch of new proactive content moderation features designed to give users a safe and secure social media experience. The features, developed in-house, can proactively detect and block any form of nudity or child sexual abuse content in less than 5 seconds, label misinformation, and hide toxic comments and hate speech on the platform.

According to the company, it has identified certain areas that have a high impact on user safety, such as child sexual abuse material, toxic comments and hate speech, impersonation, and misinformation, and Koo is working to proactively curb their occurrence on the platform. The new content moderation features are an important step towards achieving this goal.

Safety features:

Nudity: Koo’s in-house ‘No Nudity Algorithm’ actively and immediately detects and blocks any attempt by a user to upload child sexual abuse material or a photo or video containing nudity or sexual content, taking less than 5 seconds to detect and block an upload.

Toxic comments and hate speech: Koo proactively detects and hides or removes toxic comments and hate speech in less than 10 seconds, so that they are not available for public viewing.
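Koo has not published implementation details, but a moderation gate of this kind can be sketched as a simple classify-then-hide pipeline. All function names, the blocklist, and the threshold below are hypothetical stand-ins; a production system would use a trained toxicity classifier rather than keyword matching:

```python
# Hypothetical sketch of a proactive comment-moderation gate.
# Koo has not disclosed its algorithm; this only illustrates the
# general classify-then-hide flow described in the article.

def toxicity_score(text: str) -> float:
    """Stand-in classifier: a real system would use a trained model."""
    blocklist = {"hateword1", "hateword2"}  # placeholder terms
    words = text.lower().split()
    hits = sum(1 for w in words if w in blocklist)
    return hits / max(len(words), 1)

def moderate_comment(text: str, threshold: float = 0.2) -> str:
    """Hide a comment whose score meets the threshold, else keep it visible."""
    return "hidden" if toxicity_score(text) >= threshold else "visible"
```

In production, such a check would run asynchronously within the stated 10-second window, hiding flagged comments from public view pending review.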


Violence and gore: Content containing excessive blood, gore, or violence is overlaid with warnings for users.


Impersonation: Koo's in-house 'MisRep Algorithm' constantly scans the platform for profiles that use the content, photos, videos, or descriptions of celebrities, in order to detect and block impersonating profiles. The company claims that photos and videos of well-known personalities are removed from such profiles immediately upon detection, and that the accounts are flagged so bad behavior can be monitored in the future.

Mayank Bidawatka, Co-founder, Koo, said, “At Koo, our mission is to unite the world and create a friendly social media space for healthy discussions. We are committed to providing the safest public social platform for our users. While moderation is an ongoing journey, it will always remain an area of focus for us. It is our endeavor to proactively detect and remove harmful content from the platform and to continue to develop new systems and procedures to restrict the spread of viral misinformation. Our active content moderation procedures are probably the best in the world!”

The post Koo Launches New Safety Features for Proactive Content Moderation appeared first on Techlusive.
