NSFW Nudity and Sexual Content Classification

A mobile-friendly visual content moderation model based on the work of F. C. Akyon. It is obtained by fine-tuning EfficientNet-b4 on NSFW images and detects nudity and sexual content in images or video frames with high accuracy.

For a demo, see: [viddexa/moderators]

Performance Comparison

NSFW image detection performance of nsfw-detector-mini compared with Azure Content Safety AI and the Falconsai NSFW image detection model.
F_safe and F_nsfw below are the class-wise F1 scores for the safe and NSFW classes, respectively.
The results show that nsfw-detector-mini outperforms both Falconsai and Azure AI while using fewer parameters.

Model               F_safe    F_nsfw    Params
nsfw-detector-nano  96.91%    96.87%    4M
nsfw-detector-mini  97.90%    97.89%    17M
Azure AI            96.79%    96.57%    N/A
Falconsai           89.52%    89.32%    85M
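
For reference, the class-wise scores above are standard F1 scores, i.e. the harmonic mean of per-class precision and recall:

F_c = \frac{2 \cdot \mathrm{precision}_c \cdot \mathrm{recall}_c}{\mathrm{precision}_c + \mathrm{recall}_c}, \quad c \in \{\text{safe}, \text{nsfw}\}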

Usage with moderators library

Install with pip install moderators, then run:

from moderators import AutoModerator

# Load the pretrained NSFW detector
model = AutoModerator.from_pretrained("viddexa/nsfw-detection-mini")
results = model("<path-to-image-file>")

# Collect the class probabilities from the results and pick the most likely label
probs = {k: v for r in results for k, v in r.classifications.items()}
predicted_label = max(probs, key=probs.get)
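
Since the model also targets video frames, the same interface can be applied frame by frame. The following is a minimal sketch, assuming OpenCV for frame extraction and reusing the image-file call shown above; the sampling rate and temporary-file handling are illustrative and not part of the moderators API.

import cv2
from moderators import AutoModerator

model = AutoModerator.from_pretrained("viddexa/nsfw-detection-mini")

cap = cv2.VideoCapture("<path-to-video-file>")
frame_labels = []
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % 30 == 0:  # sample roughly one frame per second for 30 fps video (assumed rate)
        cv2.imwrite("frame.jpg", frame)  # reuse the image-file interface from the snippet above
        results = model("frame.jpg")
        probs = {k: v for r in results for k, v in r.classifications.items()}
        frame_labels.append(max(probs, key=probs.get))
    idx += 1
cap.release()
print(frame_labels)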

Usage with transformers library

Install with pip install transformers torch pillow, then run:

from transformers import (
    AutoImageProcessor,
    AutoModelForImageClassification)
from PIL import Image
import torch

img = Image.open("<path-to-image-file>")

processor = AutoImageProcessor.from_pretrained("viddexa/nsfw-detection-mini", use_fast=False)
model = AutoModelForImageClassification.from_pretrained("viddexa/nsfw-detection-mini")

with torch.no_grad():
    # Preprocess the image and run a forward pass
    inputs = processor(images=img, return_tensors="pt")
    outputs = model(**inputs)
    logits = outputs.logits
    # Convert logits to class probabilities and report the top label
    probs = torch.softmax(logits, dim=-1)
    pred_id = int(probs.argmax())
    print(model.config.id2label[pred_id])
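
To inspect the full probability distribution rather than only the top label, the id2label mapping from the model config can be iterated; a minimal sketch using the variables defined above:

for idx, p in enumerate(probs.squeeze().tolist()):
    # id2label maps class indices to label names (e.g. safe / nsfw)
    print(f"{model.config.id2label[idx]}: {p:.4f}")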

Model Details

A binary nudity and sexual content classification model that classifies images as either Safe or NSFW. The model is obtained by fine-tuning Google's EfficientNet-b4.

  • Developed by: [Kerem Bozgan, Abdullah Kırman, Fatih Çağatay Akyön]
  • License: [apache-2.0]
  • Finetuned from model: [google/efficientnet-b4]

Intended Use and Limitations

This model is designed to detect explicit nudity only. For instance, an image containing suggestive nudity or risqué clothing is labeled as safe if there is no explicit nudity in the image.

Appropriate Use Cases:

  • Filtering explicit images in user-generated content platforms
  • Assisting moderation workflows for social apps and community forums
  • Parental control or restricted content filtering
  • Content review pipelines for games and streaming services

In these settings, the model acts as an initial screening tool to help reduce moderator workload. A human review step is recommended for final decisions.
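
As one way to implement such a screening step, the sketch below routes low-confidence predictions to human review; the 0.85 threshold, the label names, and the helper function are illustrative assumptions rather than part of the model.

def route_prediction(probs, threshold=0.85):
    # probs: dict mapping class labels to probabilities, as built in the snippets above
    label = max(probs, key=probs.get)
    if probs[label] < threshold:
        return "human_review"  # low confidence: defer the decision to a moderator
    return label               # confident prediction: apply automatically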

This model is not intended for:

  • Moral or ethical judgment of individuals or their bodies
  • Surveillance or monitoring of people in real-world environments
  • Determining sexual intent
  • Legal classification of pornography vs. art
  • Use in contexts affecting employment, policing, social credit, or identity

BibTeX entry and citation info

@inproceedings{akyon2023nudity,
  title={State-of-the-art in nudity classification: A comparative analysis},
  author={Akyon, Fatih Cagatay and Temizel, Alptekin},
  booktitle={2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)},
  pages={1--5},
  year={2023},
  organization={IEEE}
}

Model Card Authors

Kerem Bozgan & Fatih Akyon
