RyanGao/mental-roberta-depression-onnx

This is an ONNX conversion of karangupta224/mental_roberta_depression for use with Transformers.js and browser/serverless deployments.

Model Description

Task: Text classification for depression detection

Base Model: mental/mental-roberta-base, a RoBERTa model pretrained on mental health-related Reddit posts.

Labels: depression / non-depression

Intended Use

This model is designed for:

  • Mental health content analysis
  • Crisis detection in text
  • Support systems for identifying individuals who may need help
  • Research purposes in mental health NLP

⚠️ Important: This model is NOT a replacement for professional mental health diagnosis. It should be used as a screening tool only, and anyone showing signs of mental distress should be referred to qualified mental health professionals.

Performance

Metrics for the original PyTorch model are not reproduced here; see the original model card (karangupta224/mental_roberta_depression) for evaluation results.

Usage

With Transformers.js

import { pipeline } from '@xenova/transformers';

// Load the text-classification pipeline backed by this ONNX model
const classifier = await pipeline('text-classification', 'RyanGao/mental-roberta-depression-onnx');

// Run the classifier on a piece of text
const result = await classifier('I feel very sad and hopeless');
console.log(result);
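
The call resolves to an array of { label, score } objects, one per input string; the returned labels should correspond to the depression / non-depression labels listed above.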

With Python (ONNX Runtime)

from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer

# Load the ONNX model and tokenizer from the Hub
model = ORTModelForSequenceClassification.from_pretrained("RyanGao/mental-roberta-depression-onnx")
tokenizer = AutoTokenizer.from_pretrained("RyanGao/mental-roberta-depression-onnx")

# Tokenize the input and run inference; outputs.logits holds the raw class scores
inputs = tokenizer("I feel very sad and hopeless", return_tensors="pt")
outputs = model(**inputs)
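
To turn the raw logits into class probabilities, apply a softmax and look up the label names from the model config. The snippet below is a minimal sketch, assuming the exported config preserves the original id2label mapping:

import torch

# Convert logits to probabilities over the two classes
probs = torch.softmax(outputs.logits, dim=-1)[0]

# Map each probability to its label name from the model config
# (assumes id2label was carried over from the original model)
for idx, score in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(score, 4))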

Training Data

The original model was fine-tuned on mental health-related datasets. See the original model card for details.

Limitations and Bias

  • Domain-specific: Trained on Reddit mental health posts and may not generalize to other platforms
  • Language: English only
  • Bias: May reflect biases present in the training data
  • Not diagnostic: Cannot and should not be used for clinical diagnosis

Ethical Considerations

  • Privacy: Be cautious with personal mental health data
  • Harm prevention: Use as part of a larger system that includes human oversight
  • False negatives: The model may miss some cases of distress
  • False positives: May flag content that doesn't indicate actual distress

Citation

If you use this model, please cite the original MentalRoBERTa paper:

@inproceedings{ji2022mentalbert,
  title     = {{MentalBERT: Publicly Available Pretrained Language Models for Mental Healthcare}},
  author    = {Shaoxiong Ji and Tianlin Zhang and Luna Ansari and Jie Fu and Prayag Tiwari and Erik Cambria},
  year      = {2022},
  booktitle = {Proceedings of LREC}
}

Contact

License

This model is licensed under CC-BY-NC-4.0, same as the original model.
