---
base_model: hitonet/hito-1.7b
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- qwen3
- fine-tuned
- hito
- hitonet
- reasoning
- conversational
- thinking
- adaptive-reasoning
- tree-of-thought
- hierarchical-reasoning
- cognitive-framework
- self-aware-ai
- anti-hallucination
- synthetic-data
- gguf
- llama-cpp
- ollama
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/hitonet/hito-1.7b

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#hito-1.7b-GGUF).***

weighted/imatrix quants are available at https://huggingface.co/mradermacher/hito-1.7b-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
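
For a quick programmatic check, here is a minimal sketch using the
llama-cpp-python bindings (the bindings, file name, and parameters are
assumptions of this example, not something this README prescribes; the
file name matches the Q4_K_M row below):

```python
# Minimal sketch: run a downloaded GGUF quant with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and that hito-1.7b.Q4_K_M.gguf
# (see the table below) sits in the current directory.
from llama_cpp import Llama

llm = Llama(
    model_path="hito-1.7b.Q4_K_M.gguf",
    n_ctx=4096,  # context window; adjust to your memory budget
)

# One-shot completion; for chat-style prompting, the model's template
# via llm.create_chat_completion(...) may be the better fit.
out = llm("Explain GGUF quantization in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```

The same files also work directly with the llama.cpp CLI and with ollama.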

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/hito-1.7b-GGUF/resolve/main/hito-1.7b.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/hito-1.7b-GGUF/resolve/main/hito-1.7b.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/hito-1.7b-GGUF/resolve/main/hito-1.7b.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/hito-1.7b-GGUF/resolve/main/hito-1.7b.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/hito-1.7b-GGUF/resolve/main/hito-1.7b.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/hito-1.7b-GGUF/resolve/main/hito-1.7b.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/hito-1.7b-GGUF/resolve/main/hito-1.7b.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/hito-1.7b-GGUF/resolve/main/hito-1.7b.Q5_K_S.gguf) | Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/hito-1.7b-GGUF/resolve/main/hito-1.7b.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/hito-1.7b-GGUF/resolve/main/hito-1.7b.Q6_K.gguf) | Q6_K | 1.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/hito-1.7b-GGUF/resolve/main/hito-1.7b.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/hito-1.7b-GGUF/resolve/main/hito-1.7b.f16.gguf) | f16 | 3.5 | 16 bpw, overkill |
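
If you prefer scripting the download rather than clicking a link above, a
small sketch with the huggingface_hub client (the file name is taken from
the table; any row works the same way):

```python
# Minimal sketch: fetch one quant from this repo via huggingface_hub.
# Assumes `pip install huggingface_hub`.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/hito-1.7b-GGUF",
    filename="hito-1.7b.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF file
```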

Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.

<!-- end -->