modularStarEncoder/ModularStarEncoder-finetuned-9 Feature Extraction • 0.3B • Updated May 21 • 12.5k • 1
modularStarEncoder/ModularStarEncoder-finetuned-18 Feature Extraction • 0.6B • Updated May 21 • 3.86k
modularStarEncoder/ModularStarEncoder-finetuned-27 Feature Extraction • 0.8B • Updated May 21 • 2.62k
One Model to Train them All: Hierarchical Self-Distillation for Enhanced Early Layer Embeddings Paper • 2503.03008 • Published Mar 4 • 1
RepLiQA: A Question-Answering Dataset for Benchmarking LLMs on Unseen Reference Content Paper • 2406.11811 • Published Jun 17, 2024 • 16
XC-Cache: Cross-Attending to Cached Context for Efficient LLM Inference Paper • 2404.15420 • Published Apr 23, 2024 • 11