anezatra committed on
Commit 61ba441 · verified · 1 Parent(s): 21ecfe7

Update README.md

Files changed (1): README.md (+1 -57)
README.md CHANGED
@@ -23,43 +23,6 @@ This model is a fine-tuned version of LLaMA 3 8B using Alpaca-style instruction-
 - **Paper:** https://arxiv.org/abs/2302.13971 (LLaMA 3)
 - **Demo:** [Example usage with llama-cli]
 
-## Uses
-
-### Direct Use
-
-- Chatbots for general conversation
-- Instruction-following NLP tasks
-- Generating structured or unstructured text
-- Educational assistants or question answering
-
-### Downstream Use
-
-- Fine-tuning for task-specific NLP tasks
-- Integration into larger applications such as AI assistants or virtual agents
-
-### Out-of-Scope Use
-
-- Should not be used for high-stakes decision-making (medical, legal, or financial advice)
-- May generate biased, unsafe, or incorrect outputs if unchecked
-
-## Bias, Risks, and Limitations
-
-- Model reflects biases present in its training dataset.
-- May generate unsafe or offensive outputs if prompted maliciously.
-- Performance may vary with non-English inputs unless pre-translated.
-
-### Recommendations
-
-- Always monitor outputs in deployment.
-- Use prompt filtering or moderation for sensitive domains.
-- Combine with human-in-the-loop evaluation for critical tasks.
-
-## How to Get Started with the Model
-
-```bash
-llama-cli --model meta-llama-3.1-8b-alpaca.Q4_K_M.gguf -p "Hello Eliza"
-```
-
 ## Training Details
 
 ### Training Data
@@ -136,23 +99,4 @@ llama-cli --model meta-llama-3.1-8b-alpaca.Q4_K_M.gguf -p "Hello Eliza"
 year={2025},
 howpublished={\url{https://huggingface.co/unsloth/meta-llama-3.1-8b-alpaca}}
 }
-```
-
-**APA:**
-Anezatra. (2025). *LLaMA 3 8B Alpaca Fine-Tuned Model.* HuggingFace. https://huggingface.co/unsloth/meta-llama-3.1-8b-alpaca
-
-## Glossary
-
-- **LoRA:** Low-Rank Adaptation for efficient fine-tuning
-- **GGUF:** Quantized model format compatible with llama.cpp
-- **bf16:** Brain Floating Point 16-bit precision
-- **Q4_K_M:** Quantization method for reduced memory footprint
-
-## Model Card Authors
-
-- Anezatra
-
-## Model Card Contact
-
-- HuggingFace: https://huggingface.co/unsloth
-- Email: [email protected]
 
+```
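The removed glossary defines bf16 and Q4_K_M in one line each; a rough back-of-the-envelope sketch shows why the Q4_K_M GGUF matters for local inference. This is an illustration only: the 8B parameter count is taken from the model name, and ~4.85 bits per weight for Q4_K_M is an approximate community figure, not something stated in this card.

```python
# Approximate weight-storage footprint of the formats named in the
# removed glossary. 8B parameters is assumed from the model name;
# ~4.85 bits/weight for Q4_K_M is an approximate figure, not from
# this model card.
PARAMS = 8_000_000_000

def gib(num_bytes: float) -> float:
    """Convert a byte count to GiB, rounded to one decimal place."""
    return round(num_bytes / 2**30, 1)

bf16_bytes = PARAMS * 2            # bf16: 16 bits = 2 bytes per weight
q4_k_m_bytes = PARAMS * 4.85 / 8   # Q4_K_M: ~4.85 bits per weight

print(f"bf16:   ~{gib(bf16_bytes)} GiB")
print(f"Q4_K_M: ~{gib(q4_k_m_bytes)} GiB")
```

Roughly 15 GiB of weights shrinks to under 5 GiB, which is what makes the quantized file usable on consumer hardware with llama.cpp.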