NeuroCLR

NeuroCLR is a self-supervised learning (SSL) framework for learning robust, disorder-agnostic neural representations from raw, unlabeled resting-state fMRI (rs-fMRI) regional time series. NeuroCLR is designed for multi-site generalization and transfer to downstream disorder classification with limited labeled data.

[GitHub Repo] | [Cite]


Abstract

Self-supervised learning (SSL) has become a powerful technique in computer vision for drastically reducing the dependency on large amounts of labeled training data. The availability of large-scale, unannotated rs-fMRI data provides an opportunity to develop superior machine-learning models for disorder classification across heterogeneous sites and diverse subjects. In this paper, we propose NeuroCLR, a novel SSL framework. NeuroCLR extracts robust, rich, and invariant neural representations, consistent across diverse experimental subjects and disorders, using contrastive principles, spatially constrained learning, and augmented views of unlabeled raw fMRI time series. We pre-trained NeuroCLR on data from heterogeneous disorders, drawn from more than 3,600 participants across 44 different sites and comprising 720,000 region-specific fMRI time series. The resulting disorder-agnostic pre-trained model is fine-tuned for downstream disorder-specific classification tasks using limited labeled data. We evaluate NeuroCLR on diverse disorder classification tasks and find that it outperforms both deep-learning and SSL models trained on a single disorder. Experiments also confirm robust generalizability, with NeuroCLR consistently outperforming baselines across neuroimaging sites. This study is the first to present a robust and reproducible self-supervised methodology with an anatomically consistent contrastive objective that operates on raw, unlabeled fMRI data and transfers reliably across diagnostic categories. We expect this to cultivate stronger participation by computational and clinical researchers, setting the stage for the development of sophisticated diagnostic models for various neurodegenerative and neurodevelopmental disorders built on NeuroCLR.


Model Structure

This repository provides two loadable model artifacts:

  • Root model (default)
    Self-supervised pretraining encoder + projector (contrastive SSL)

  • classification/ subfolder
    Encoder + ResNet1D classification head for downstream tasks

All models rely on custom architectures, so trust_remote_code=True is required.


Model Details

1) Pretraining Model (Default, Loaded from Repo Root)

  • Input: region-wise rs-fMRI time series
    Shape: [B, 1, L], where L = 128 time points
  • Output:
    • h: pooled representation, shape [B, 128]
    • z: projected representation, shape [B, projector_out_dim]

This model is intended for:

  • representation learning
  • feature extraction
  • transfer learning
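
As a quick illustration of what these representations capture, the sketch below compares the projected outputs z for two augmented views of the same region-wise time series. The Gaussian-noise augmentation and the random input are illustrative stand-ins only, not the augmentations used during NeuroCLR pretraining.

import torch
import torch.nn.functional as F
from transformers import AutoModel

model = AutoModel.from_pretrained("SaeedLab/NeuroCLR", trust_remote_code=True)
model.eval()

x = torch.randn(4, 1, 128)                 # [batch, 1, time_points]
view_a = x + 0.05 * torch.randn_like(x)    # illustrative noise augmentation
view_b = x + 0.05 * torch.randn_like(x)

with torch.no_grad():
    z_a = model(view_a)["z"]
    z_b = model(view_b)["z"]

# Cosine similarity between the projected views (one score per sample)
sim = F.cosine_similarity(z_a, z_b, dim=-1)
print(sim)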

2) Classification Model (classification/)

  • Input: ROI-by-time representation
    Shape: [B, N_ROIs, 128] (e.g., N_ROIs = 200)
  • Output:
    • logits: shape [B, num_labels]
    • loss: returned when labels are provided

Note
The encoder is bundled with the classification model and may be frozen by default (recommended).
See the GitHub repository for training and fine-tuning scripts.
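
For reference, here is a minimal fine-tuning sketch, not the training recipe from the paper or the repository scripts: it optimizes only parameters with requires_grad=True, so a frozen encoder stays frozen, and it uses a synthetic batch with assumed shapes and an assumed learning rate and step count.

import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "SaeedLab/NeuroCLR",
    subfolder="classification",
    trust_remote_code=True,
)
model.train()

# Only optimize trainable parameters; a frozen encoder is left untouched.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

# Synthetic stand-in batch: [batch, n_rois, 128] inputs with binary labels.
x = torch.randn(8, 200, 128)
labels = torch.randint(0, 2, (8,))

for step in range(10):  # illustrative number of steps
    optimizer.zero_grad()
    outputs = model(x, labels=labels)
    outputs["loss"].backward()
    optimizer.step()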


Usage (PyTorch)

import torch
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "SaeedLab/NeuroCLR",
    trust_remote_code=True
)

model.eval()

x = torch.randn(4, 1, 128)  # [batch, 1, time_points]

with torch.no_grad():
    outputs = model(x)

print(outputs["h"].shape)  # [4, 128]
print(outputs["z"].shape)
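
Extract Per-ROI Features

The root model encodes one region at a time ([B, 1, L]). One simple way to obtain features for a full parcellation is to fold the ROI axis into the batch dimension, encode, and reshape back. The sketch below assumes a 200-ROI parcellation and uses random data in place of real rs-fMRI time series.

import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("SaeedLab/NeuroCLR", trust_remote_code=True)
model.eval()

B, N_ROIS, L = 4, 200, 128
ts = torch.randn(B, N_ROIS, L)  # region-wise time series per subject

with torch.no_grad():
    # Fold ROIs into the batch, encode, then restore the ROI axis
    h = model(ts.reshape(B * N_ROIS, 1, L))["h"]   # [B * N_ROIS, 128]

features = h.reshape(B, N_ROIS, -1)                # [B, N_ROIS, 128]
print(features.shape)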

Load the Downstream Classification Model

import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "SaeedLab/NeuroCLR",
    subfolder="classification",
    trust_remote_code=True
)

model.eval()

x = torch.randn(4, 200, 128)  # [batch, n_rois, embedding_dim]
labels = torch.tensor([0, 1, 0, 1])

with torch.no_grad():
    outputs = model(x, labels=labels)

print(outputs["logits"].shape)  # [4, 2]
print(outputs["loss"])
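
Continuing from the example above, class probabilities and predicted labels can be obtained from the logits:

probs = torch.softmax(outputs["logits"], dim=-1)  # class probabilities
preds = probs.argmax(dim=-1)                      # predicted class indices, shape [4]
print(preds)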

Citation

The paper is under review. As soon as it is accepted, we will update this section.

Contact

For any additional questions or comments, contact Fahad Saeed ([email protected]).
