- DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence
  Paper • 2401.14196 • Published • 68
- DeepSeek LLM: Scaling Open-Source Language Models with Longtermism
  Paper • 2401.02954 • Published • 50
- DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models
  Paper • 2402.03300 • Published • 137
- DeepSeek-VL: Towards Real-World Vision-Language Understanding
  Paper • 2403.05525 • Published • 48
Collections including paper arxiv:2401.02954
- Qwen2.5 Technical Report
  Paper • 2412.15115 • Published • 376
- Qwen2.5-Coder Technical Report
  Paper • 2409.12186 • Published • 151
- Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement
  Paper • 2409.12122 • Published • 4
- Qwen2.5-VL Technical Report
  Paper • 2502.13923 • Published • 211
- DeepSeek-Prover: Advancing Theorem Proving in LLMs through Large-Scale Synthetic Data
  Paper • 2405.14333 • Published • 43
- DeepSeek-Prover-V1.5: Harnessing Proof Assistant Feedback for Reinforcement Learning and Monte-Carlo Tree Search
  Paper • 2408.08152 • Published • 60
- DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
  Paper • 2501.12948 • Published • 429
- DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models
  Paper • 2402.03300 • Published • 137
- deepseek-ai/DeepSeek-V3-Base
  685B • Updated • 10.9k • 1.68k
- TransMLA: Multi-head Latent Attention Is All You Need
  Paper • 2502.07864 • Published • 58
- Qwen2.5 Bakeneko 32b Instruct Awq
  ⚡ 2 • Generate detailed responses to text prompts
- Deepseek R1 Distill Qwen2.5 Bakeneko 32b Awq
  ⚡ 3 • Generate text responses to user messages in a chat interface
- Scaling Laws for Neural Language Models
  Paper • 2001.08361 • Published • 9
- Scaling Laws for Autoregressive Generative Modeling
  Paper • 2010.14701 • Published • 1
- Training Compute-Optimal Large Language Models
  Paper • 2203.15556 • Published • 11
- A Survey on Data Selection for Language Models
  Paper • 2402.16827 • Published • 4
- Attention Is All You Need
  Paper • 1706.03762 • Published • 104
- LoRA Learns Less and Forgets Less
  Paper • 2405.09673 • Published • 89
- DeepSeek LLM: Scaling Open-Source Language Models with Longtermism
  Paper • 2401.02954 • Published • 50
- RAFT: Adapting Language Model to Domain Specific RAG
  Paper • 2403.10131 • Published • 72