CoPE Collection CoPE is a drop-in enhancement of RoPE that delivers consistent gains within the training context and during long-context extrapolation. • 9 items • Updated 3 days ago • 2
CoPE: Clipped RoPE as A Scalable Free Lunch for Long Context LLMs Paper • 2602.05258 • Published 4 days ago • 6
Accurate Failure Prediction in Agents Does Not Imply Effective Failure Prevention Paper • 2602.03338 • Published 6 days ago • 25
Llama-3.1-FoundationAI-SecurityLLM-Reasoning-8B Technical Report Paper • 2601.21051 • Published 12 days ago • 12
Quartet II: Accurate LLM Pre-Training in NVFP4 by Improved Unbiased Gradient Estimation Paper • 2601.22813 • Published 10 days ago • 55
EEG Foundation Models: Progresses, Benchmarking, and Open Problems Paper • 2601.17883 • Published 15 days ago • 20
OCRVerse: Towards Holistic OCR in End-to-End Vision-Language Models Paper • 2601.21639 • Published 11 days ago • 49
Self-Improving Pretraining: using post-trained models to pretrain better models Paper • 2601.21343 • Published 11 days ago • 16
CGPT: Cluster-Guided Partial Tables with LLM-Generated Supervision for Table Retrieval Paper • 2601.15849 • Published 18 days ago • 14
AVMeme Exam: A Multimodal Multilingual Multicultural Benchmark for LLMs' Contextual and Cultural Knowledge and Thinking Paper • 2601.17645 • Published 16 days ago • 23
Linear representations in language models can change dramatically over a conversation Paper • 2601.20834 • Published 12 days ago • 21
Article Introducing Waypoint-1: Real-time interactive video diffusion from Overworld • 21 days ago • 37
The Assistant Axis: Situating and Stabilizing the Default Persona of Language Models Paper • 2601.10387 • Published 25 days ago • 12
LucaOne Collection Generalized biological foundation model with unified nucleic acid and protein language (Nature Machine Intelligence), https://github.com/LucaOne/LucaOne • 6 items • Updated Dec 31, 2025 • 2