TypeScript-SLM-7B-Reasoning-Full
TypeScript-SLM-7B-Reasoning is a 7B-parameter, DeepSeek-based model fine-tuned for step-by-step TypeScript reasoning. The LoRA adapters have been merged into the base model, and a GGUF quantization is included for local/Ollama workflows.
This repository hosts the fully merged model plus the GGUF (q4_k_m) build for lightweight inference.
Model Description
- Base Model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
- Model Type: Causal LM (code reasoning)
- Parameters: 7B
- Context Length: Inherits the base DeepSeek-R1-Distill-Qwen-7B context window
- Fine-tuning: LoRA on TypeScript reasoning/debugging tasks
- License: MIT
- Language: English, TypeScript/JavaScript code
- System Prompt: Focus on step-by-step debugging, refactoring, and design-level explanations before giving the final typed solution.
What it is good at
- ✅ Explaining TypeScript bugs and fixes
- ✅ Refactoring and API design discussions
- ✅ Generating strongly-typed code for React/Next.js/Angular/Node.js
- ✅ Producing clear reasoning traces before final answers
Intended Uses
Primary: TypeScript reasoning, debugging, refactoring, and guided code generation.
Out-of-scope: Arbitrary natural-language chat unrelated to code; safety-sensitive or factual tasks outside TypeScript.
Prompt Examples
"Debug this TypeScript function and explain the bug step by step:\n\nfunction add(a?: number, b?: number) { return a + b; }"
"Design a typed API surface for a Next.js todo service. Explain design choices, then show the final code."
How to Use
Ollama (recommended for local use)
ollama create typescript-slm-7b-reasoning -f gguf/Modelfile-q4_k_m
ollama run typescript-slm-7b-reasoning "Explain why this React hook re-renders too often..."
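Once imported, the model can also be queried programmatically. A minimal sketch using the requests package, assuming Ollama is running on its default local endpoint (http://localhost:11434) and the model was created under the name above:

import requests

# Ask the local Ollama server for a non-streaming completion.
# Endpoint and model name are assumptions based on the create command above.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "typescript-slm-7b-reasoning",
        "prompt": "Debug this TypeScript function step by step:\n"
                  "function add(a?: number, b?: number) { return a + b; }",
        "stream": False,
    },
    timeout=300,
)
print(resp.json()["response"])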
Transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the merged model in half precision and shard it across available devices.
model = AutoModelForCausalLM.from_pretrained(
    "sylvester-francis/typescript-slm-7b-reasoning-full",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("sylvester-francis/typescript-slm-7b-reasoning-full")

prompt = "Refactor this TypeScript service for better typing and error handling..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Low temperature keeps the reasoning focused while still allowing sampling.
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.3,
    top_p=0.95,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
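For system-prompted or multi-turn use, you can route prompts through the tokenizer's chat template instead of a raw string. A sketch, assuming the tokenizer inherits the base DeepSeek-R1-Distill chat template and that it accepts a system message (the system text below mirrors the system prompt described in this card):

# Build a chat-formatted prompt from the model card's system prompt and a user turn.
messages = [
    {"role": "system", "content": "Focus on step-by-step debugging, refactoring, "
                                  "and design-level explanations before giving the final typed solution."},
    {"role": "user", "content": "Explain and fix: function add(a?: number, b?: number) { return a + b; }"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    temperature=0.3,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))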
GGUF (llama.cpp)
huggingface-cli download sylvester-francis/typescript-slm-7b-reasoning-full \
gguf/typescript-slm-7b-reasoning-q4_k_m.gguf --local-dir ./models
./llama-cli -m ./models/gguf/typescript-slm-7b-reasoning-q4_k_m.gguf \
-p "Explain and fix this TypeScript type error..."
Model Files
- gguf/typescript-slm-7b-reasoning-q4_k_m.gguf (≈4.7 GB)
- gguf/Modelfile-q4_k_m (Ollama import)
Training Data (summary)
- Curated TypeScript code from popular GitHub repos (React, Next.js, Angular, Node.js)
- StackOverflow Q&A focused on debugging and reasoning
- Filters for strong typing, framework best practices, and reasoning-rich examples
Training Configuration (LoRA)
- Base Model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
- Method: LoRA fine-tuning
- Target Domains: TypeScript reasoning, debugging, refactoring
- LoRA Rank / Alpha: tuned for stability and reasoning depth
- Optimizer: AdamW
- Max Sequence Length: inherits the base model's context window
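The exact rank/alpha values are not published here. For orientation only, a typical PEFT setup for this kind of fine-tune looks like the sketch below; all hyperparameter values are illustrative assumptions, not the actual training configuration:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-7B")

# Hypothetical hyperparameters for illustration; the values actually used
# for this model were tuned for stability and reasoning depth.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()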
Evaluation
Qualitative checks on TypeScript debugging/refactoring prompts show:
- Clear reasoning steps before final code
- Strong type usage and framework-aware patterns
- Concise, actionable fixes
Safety & Limitations
- May generate incorrect code or hallucinate APIs; review output before production use.
- Not a security scanner; do not rely on it for vulnerability assessments.
- Avoid non-code or high-stakes factual tasks.
License
MIT for the fine-tuned model; base model license and dataset terms also apply.
Contact
- Maintainer: Sylvester Francis (@sylvester-francis on Hugging Face)
- Issues/feedback: open a discussion on the model repo