This increase in user-generated content raises the question of whether NSFW AI is the future of moderation for platforms big and small. The world now creates roughly 2.5 quintillion bytes of data daily, and a great deal of it requires some form of moderation. That volume has severely strained traditional moderation methods, which are labor-intensive and rely on humans poring over uploaded data for problems like inappropriate or offensive material, often resulting in delays and inconsistent enforcement. Manual moderation is also increasingly costly, which makes it less attractive than AI-based solutions: Meta alone devoted a massive $13 billion to combined AI and human content moderation from 2016 through 2023.
NSFW AI models are trained on years of image, video, and text data, and their main job is to identify explicit content. Trained on such datasets, NSFW AI can classify explicit material with over 90% accuracy in some cases, with more precision and at a far greater rate than manual review. Critics point out that for all its efficiency, today's AI still falls short of its potential: false positives, where non-explicit content gets flagged by mistake, remain a persistent problem. In 2022, Instagram's automated moderation tools were reported to have improperly flagged 11% more posts than the year before, hiding legitimate content from view.
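To make the classification step concrete, here is a minimal sketch of how a platform might wrap a classifier's confidence score in a flagging threshold. Everything here is illustrative: the `score_nsfw` stub stands in for a real trained model, and the 0.90 threshold is an assumed value, not any platform's actual setting.

```python
from dataclasses import dataclass

@dataclass
class ModerationResult:
    decision: str      # "flag" or "allow"
    confidence: float  # model's P(explicit) for the item

def score_nsfw(image_bytes: bytes) -> float:
    """Stand-in for a trained NSFW classifier (e.g., a CNN or
    vision transformer) that returns P(explicit) in [0, 1]."""
    return 0.5  # dummy score; a real model call goes here

FLAG_THRESHOLD = 0.90  # assumed value; tuned per platform

def moderate(image_bytes: bytes) -> ModerationResult:
    p = score_nsfw(image_bytes)
    # Raising the threshold cuts false positives (legitimate posts
    # hidden by mistake, the failure mode behind the Instagram
    # figure above) but lets more explicit content slip through.
    decision = "flag" if p >= FLAG_THRESHOLD else "allow"
    return ModerationResult(decision, p)
```

The single threshold is the knob that governs the accuracy-versus-over-flagging tradeoff the paragraph above describes.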
Still, progress is being made: Google and OpenAI are each spending billions of dollars to improve these models, and they are advancing quickly enough that within a year or so AI moderation may match or exceed human standards in both accuracy and scale. For context, an AI moderation tool can process up to 10,000 pieces of content per second, a throughput no human team could hope to match. The economics point the same way: firms that adopted automated systems last year reported moderation expenses roughly 40% lower.
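A figure like 10,000 items per second is only plausible because high-volume systems batch content so the model amortizes per-call overhead across many items. A minimal sketch of that pattern, where `classify_batch` is a hypothetical stand-in for a GPU-backed model call:

```python
from typing import Iterable, List

def classify_batch(items: List[bytes]) -> List[float]:
    """Stand-in for a model call that scores a whole batch in
    one forward pass; returns P(explicit) per item."""
    return [0.5 for _ in items]  # dummy scores

def moderate_stream(stream: Iterable[bytes], batch_size: int = 256) -> int:
    """Drain a content stream in fixed-size batches and return the
    count of items scored. Batching, rather than one model call
    per item, is what makes throughput on the order of thousands
    of items per second plausible on modern accelerators."""
    batch: List[bytes] = []
    processed = 0
    for item in stream:
        batch.append(item)
        if len(batch) == batch_size:
            classify_batch(batch)
            processed += len(batch)
            batch.clear()
    if batch:  # flush the final partial batch
        classify_batch(batch)
        processed += len(batch)
    return processed

print(moderate_stream(b"x" for _ in range(1000)))  # 1000
```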
The harder issue is whether it is ethical to base content moderation on NSFW AI at all. These systems are often called out for bias: because of imbalances in their training data, they may flag content as a violation at different rates depending on race or gender. Cases like the controversy surrounding Facebook's moderation policies in 2020 have brought these issues into sharp relief. Yet as content volume grows and guideline enforcement falls increasingly on the shoulders of AI, the consensus, like it or not, is that this is where things are headed. With companies working to improve these algorithms, the question may no longer be whether NSFW AI becomes the go-to technology for content moderation, but when.
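One standard way to put numbers on the bias complaint is to compare false-positive rates across demographic groups on a labeled evaluation set. The sketch below is illustrative only; the group labels and records are made up for the example.

```python
from collections import defaultdict

# Each record: (group, model_flagged, truly_explicit).
# Hypothetical evaluation data, for illustration only.
records = [
    ("group_a", True,  False),
    ("group_a", False, False),
    ("group_b", True,  False),
    ("group_b", True,  True),
]

def false_positive_rates(rows):
    """False-positive rate per group: the share of benign items
    the model wrongly flagged. Large gaps between groups are the
    statistical signature of the bias critics describe."""
    fp, benign = defaultdict(int), defaultdict(int)
    for group, flagged, explicit in rows:
        if not explicit:
            benign[group] += 1
            if flagged:
                fp[group] += 1
    return {g: fp[g] / benign[g] for g in benign if benign[g]}

print(false_positive_rates(records))  # {'group_a': 0.5, 'group_b': 1.0}
```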
Seen in this context, nsfw ai is taking on an ever more central role in content moderation across platforms, and its importance should not be understated. Even with real drawbacks, the price-to-performance case for this use is strong. The pace at which these tools advance points to a future where AI systems do the heavy lifting and human moderators handle only the edge cases, as the sketch below illustrates.
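A common design for that division of labor is confidence-band routing: the model acts on its own at the extremes of confidence and escalates the ambiguous middle band to humans. A minimal sketch, with band boundaries that are assumptions rather than production values:

```python
def route(confidence: float,
          allow_below: float = 0.30,
          remove_above: float = 0.95) -> str:
    """Route one item based on the classifier's NSFW confidence.
    The middle band is the set of 'edge cases' left to humans."""
    if confidence >= remove_above:
        return "auto_remove"   # model is near-certain it's explicit
    if confidence <= allow_below:
        return "auto_allow"    # model is near-certain it's clean
    return "human_review"      # ambiguous: escalate to a person

assert route(0.99) == "auto_remove"
assert route(0.05) == "auto_allow"
assert route(0.60) == "human_review"
```

Widening or narrowing the middle band is how a platform dials human workload up or down as the model improves.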
When you look at the numbers, it is clear that AI-based content moderation will be front and center in this industry shift; few platforms can afford to keep scaling human moderation year after year as global labor rates climb.