---
license: llama3.1
language:
- en
base_model:
- dphn/Dolphin3.0-Llama3.1-8B
tags:
- code
- text-generation-inference
- medical
- uncensored
---




# Dolphin 3.0 – Llama 3.1 8B

This repository hosts quantized versions of the **Dolphin 3.0 – Llama 3.1 8B** uncensored model, an instruction-tuned 8-billion-parameter model designed for versatile local use: coding, general reasoning, conversational assistance, agentic workflows, and more.

---

## Model Overview

- **Model Name**: Dolphin 3.0 – Llama 3.1 8B  
- **Base Architecture**: Meta Llama 3.1, 8 billion parameters (8B)  
- **Developer / Curator**: Cognitive Computations (curated and trained by Eric Hartford, Ben Gitter, and BlouseJury)
- **License**: Llama 3.1 License (inherited from the base model)
- **Intended Use**: General-purpose local deployment for instruction following, conversation, coding, function calling, and agentic behaviour.

---

## What is Dolphin?

Dolphin is a series of instruction-tuned large language models built for local use and full user control. Dolphin 3.0 aims to be the “ultimate general-purpose local model,” supporting:

- Coding tasks (multiple programming languages)  
- Mathematical reasoning  
- Function calling and agentic workflows  
- Conversational and chat-assistant scenarios  
- Custom alignment and system-prompt steering  


---

## Chat Template & System Prompt

This model uses the **ChatML** format for interactions:  
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.
<|im_end|>
<|im_start|>user
{your prompt here}
<|im_end|>
<|im_start|>assistant
```
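
Below is a minimal usage sketch with the Hugging Face `transformers` library, assuming the base repository's tokenizer ships this ChatML template via `apply_chat_template`; swap in the quantized variant you actually download and adjust device and dtype settings for your hardware.

```python
# Minimal sketch: ChatML-formatted chat with transformers (names/settings are illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dphn/Dolphin3.0-Llama3.1-8B"  # or a local path to your quantized variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]

# apply_chat_template renders the messages into the ChatML layout shown above.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```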

## Key Features & Capabilities

- Strong instruction following across coding, mathematics, reasoning, and conversation
- Function calling and agentic workflow support via the chat template and a compatible runtime (see the sketch after this list)
- Designed for local deployment, keeping alignment, system prompts, and data under the user's control
- Supports long context windows (depending on runtime and variant)
- Tuned for adaptability: deploy it as a coding tool, tutor, assistant, or domain-specific agent
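
The function-calling item above is sketched below under stated assumptions: recent `transformers` releases accept a `tools` argument in `apply_chat_template`, but whether this model's ChatML template renders tool schemas, and the exact tool-call output format, should be verified against the upstream model card. The `get_weather` tool is purely illustrative.

```python
# Hedged sketch: rendering a prompt with tool definitions (illustrative only).
# Assumes a transformers version whose apply_chat_template supports `tools`.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dphn/Dolphin3.0-Llama3.1-8B")

# Hypothetical tool definition in JSON-schema style; not part of the model card.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Return the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "What is the weather in Berlin right now?"},
]

# Render (without tokenizing) to inspect how, or whether, tools are injected.
prompt = tokenizer.apply_chat_template(
    messages, tools=tools, tokenize=False, add_generation_prompt=True
)
print(prompt)
```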


## Intended Use Cases

- **Coding assistant** — multi-language code generation, debugging, refactoring
- **Mathematical & scientific reasoning** — step-by-step problem solving
- **Chat/assistant prototype** — customizable assistant for your domain
- **Agentic workflows** — integrate function calling, tool use, and chain-of-thought reasoning
- **Local/private deployment** — on-premises or edge use, minimizing external data exposure (a quantized-deployment sketch follows this list)
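
Because this repository hosts quantized builds, a common local-deployment path is `llama-cpp-python` with a GGUF file, as sketched below. The filename is hypothetical; point `model_path` at the quantized file you actually downloaded from this repository.

```python
# Local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="./Dolphin3.0-Llama3.1-8B-Q4_K_M.gguf",  # hypothetical filename
    n_ctx=8192,       # context window; adjust to your runtime and memory budget
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
        {"role": "user", "content": "Summarize the key ideas behind binary search."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```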


## Acknowledgements

**Special thanks to**:
- Crusoe Cloud, Akash, Lazarus, and Cerebras for hardware and training support
- The open-source datasets and foundational work by Meta, Qwen, OpenCoder, and others
- The broader open-source community enabling deployment, quantization, and local inference ecosystems.


## Contact & Support
- For issues, questions, or community discussion, see the model's Hugging Face discussions page: https://huggingface.co/dphn/Dolphin3.0-Llama3.1-8B/discussions
- You may also join the Discord community: https://discord.gg/cognitivecomputations