Meta Unveils New AI Content Enforcement Systems, Reduces Dependence on Third-Party Vendors


Meta announced the rollout of advanced AI systems for content enforcement, aiming to reduce its reliance on third-party vendors. The systems will handle tasks such as identifying terrorism content, child exploitation, drug sales, and scams, and will be deployed across Meta's apps once they demonstrate consistently superior performance. Despite this shift, human reviewers will remain responsible for critical tasks and complex decisions.

Meta's AI systems have shown promising results in trials, including doubling the detection rate of violating adult content and reducing errors by over 60%. They are also improving impersonation detection, thwarting account takeovers, and blocking about 5,000 scam attempts daily. Human experts will continue to design, train, and supervise the AI, ensuring that high-impact decisions are handled appropriately.

The move follows recent shifts in Meta's content moderation policies and comes amid ongoing legal scrutiny over social media's impact on young users. Additionally, Meta introduced a 24/7 AI support assistant available globally on Facebook and Instagram.