---
license: apache-2.0
base_model: Qwen/Qwen3-1.7B
tags:
- qwen3
- fine-tuned
- hito
- hitonet
- reasoning
- conversational
- thinking
- adaptive-reasoning
- tree-of-thought
- hierarchical-reasoning
- cognitive-framework
- self-aware-ai
- anti-hallucination
- synthetic-data
- gguf
- llama-cpp
- ollama
pipeline_tag: text-generation
language:
- en
library_name: transformers
---
# Hito 1.7B

### Brain, Heart, and a Really Good Memory

[![GGUF Downloads](https://img.shields.io/badge/GGUF_for_Ollama/llama.cpp-ff6b35?style=for-the-badge&logo=meta&logoColor=white)](https://huggingface.co/hitonet/hito-1.7b-GGUF)
[![Website](https://img.shields.io/badge/hitonet.com-000000?style=for-the-badge&logo=globe&logoColor=white)](https://hitonet.com)
[![Chat](https://img.shields.io/badge/Try_Free_Chat-22c55e?style=for-the-badge&logo=chatbot&logoColor=white)](https://chat.hitonet.com)
[![API](https://img.shields.io/badge/API_Platform-3b82f6?style=for-the-badge&logo=swagger&logoColor=white)](https://platform.hitonet.com)
[![Pricing](https://img.shields.io/badge/Pricing-8b5cf6?style=for-the-badge&logo=stripe&logoColor=white)](https://platform.hitonet.com/pricing)
---

## 🧠 Cognitive Bias Resistance

Hito is specifically trained to resist cognitive biases that trip up most AI models and humans alike.

### The Bat and Ball Test

> *"A bat and a ball cost $1.10 together. The bat costs $1.00 more than the ball. How much does the ball cost?"*

Most people (and most AI models) instinctively say **10 cents**. That's wrong.

| Model | Parameters | Answer | Correct |
|-------|------------|--------|---------|
| **Hito 1.7B** | **1.7B** | **$0.05** | ✅ |
| llama3.1 | 8B | $0.10 | ❌ |
| deepseek-r1 | 7B | $0.10 | ❌ |
| deepseek-r1 | 32B | $0.10 | ❌ |
| mistral | 7B | $0.10 | ❌ |
| tinyllama | 1.1B | $0.10 | ❌ |
| llama3.2 | 1B | $0.10 | ❌ |

**Hito's reasoning:**

```xml
Ball + Bat = $1.10, Bat = Ball + $1.00
Intuition says 10 cents... but let me verify.
If ball = $0.10, bat = $1.10, total = $1.20. WRONG.
Let ball = x: x + (x + 1) = 1.10, 2x = 0.10, x = 0.05
Ball $0.05 + Bat $1.05 = $1.10 ✓
The ball costs five cents.
```

---

## 📊 Benchmark Results

Tested against public Ollama endpoints with identical prompts:

| Model | Params | Counting | Math | Reasoning | Cognitive Bias | Overall |
|-------|--------|----------|------|-----------|----------------|---------|
| **Hito 1.7B** | **1.7B** | **100%** | **100%** | **100%** | ✅ **Resistant** | **100%** |
| llama3.1 | 8B | 100% | 67% | 100% | ❌ Fails | 89% |
| deepseek-r1:7b | 7B | 100% | 67% | 100% | ❌ Fails | 89% |
| deepseek-r1:32b | 32B | 100% | 67% | 100% | ❌ Fails | 89% |
| mistral | 7B | 33% | 67% | 100% | ❌ Fails | 67% |
| llama3.2 | 1B | 0% | 67% | 67% | ❌ Fails | 44% |
| tinyllama | 1.1B | 0% | 33% | 33% | ❌ Fails | 33% |

> **Note:** The Cognitive Bias test uses the bat-and-ball problem. Models marked "Fails" gave the intuitive wrong answer ($0.10) instead of the correct answer ($0.05).
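The algebra in Hito's reasoning trace can be sanity-checked in a few lines of Python (a standalone check of the arithmetic, unrelated to the model itself):

```python
# Bat-and-ball check: ball + bat = 1.10 and bat = ball + 1.00.
# Substituting: ball + (ball + 1.00) = 1.10, so 2 * ball = 0.10.
total = 1.10
difference = 1.00

ball = (total - difference) / 2  # 0.05
bat = ball + difference          # 1.05

assert abs(ball + bat - total) < 1e-9       # sums to $1.10
assert abs(bat - ball - difference) < 1e-9  # bat is exactly $1.00 more

# The intuitive answer fails the same check: $0.10 + $1.10 = $1.20, not $1.10.
intuitive_ball, intuitive_bat = 0.10, 0.10 + difference
assert abs(intuitive_ball + intuitive_bat - total) > 0.05

print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05
```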
## 📊 Visual Benchmarks

*(Charts: Size vs Performance, Counting Comparison, Strawberry Example)*
---

## 🎯 What Makes Hito Different

### 1. Cognitive Bias Resistance

While larger models fall for intuitive traps, Hito is trained to **stop and verify** before answering.

### 2. Structured Thinking

Uses dedicated cognitive tags for transparent, traceable reasoning.

### 3. Self-Aware Identity

Hito knows who it is, who made it, and what its purpose is. No generic "I'm an AI assistant" responses.

### 4. Humble by Design

A built-in humility system with tags for doubt, honesty, and acknowledging limits.

---

## Cognitive Architecture
Hito uses a tree-structured reasoning system with four cognitive states:

| State | Focus | Tags Used |
|-------|-------|-----------|
| **Analytical** | Logic, accuracy | ``, ``, `` |
| **Creative** | Imagination, exploration | ``, ``, `` |
| **Empathetic** | Feelings, perspectives | ``, ``, `` |
| **Reflective** | Depth, meaning | ``, ``, `` |

### The Humble Tags

What makes Hito different is its built-in humility system:

| Tag | Purpose |
|-----|---------|
| `` | Question assumptions |
| `` | Admit errors |
| `` | Acknowledge knowledge gaps |
| `` | Rate certainty level |
| `` | Double-check work |

---

## Quick Start

### Python (Transformers)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "hitonet/hito-1.7b", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("hitonet/hito-1.7b")

messages = [
    {"role": "system", "content": "You are Hito by Hitonet.com."},
    {"role": "user", "content": "A bat and ball cost $1.10. The bat costs $1 more than the ball. How much is the ball?"},
]

inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```

### Ollama

```bash
# Download the GGUF from hitonet/hito-1.7b-GGUF, then:
ollama create hito -f Modelfile
ollama run hito
```

### API

```bash
curl https://api.hitonet.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "hito", "messages": [{"role": "user", "content": "Hello!"}]}'
```

Try the full API at [platform.hitonet.com](https://platform.hitonet.com) - $1 free credit included.
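The curl call above can also be made from Python with only the standard library. This is a minimal sketch that mirrors the curl example; it assumes the endpoint returns an OpenAI-compatible response body (a `choices` list with a `message`), which the request format suggests but the card does not state explicitly:

```python
import json
import urllib.request

# Endpoint taken from the curl example above.
API_URL = "https://api.hitonet.com/v1/chat/completions"

def build_request(api_key: str, user_message: str, model: str = "hito") -> urllib.request.Request:
    """Assemble the same chat-completions request the curl example sends."""
    payload = {"model": model, "messages": [{"role": "user", "content": user_message}]}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def chat(api_key: str, user_message: str) -> str:
    """Send the request and return the assistant's reply text.
    Assumes an OpenAI-style response shape (not confirmed by this card)."""
    with urllib.request.urlopen(build_request(api_key, user_message), timeout=60) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Usage would be `chat("YOUR_API_KEY", "Hello!")`, matching the curl example's payload exactly.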
---

## Model Variants

| Repository | Format | Use Case |
|------------|--------|----------|
| [hitonet/hito-1.7b](https://huggingface.co/hitonet/hito-1.7b) | Safetensors | Python/Transformers |
| [hitonet/hito-1.7b-GGUF](https://huggingface.co/hitonet/hito-1.7b-GGUF) | GGUF | Ollama/llama.cpp/LM Studio |

### Recommended GGUF Quantizations

| Quantization | Size | Quality | Use Case |
|--------------|------|---------|----------|
| Q4_K_M | 1.1 GB | ⭐ Best Balance | Most users |
| Q5_K_M | 1.2 GB | Excellent | Quality-focused |
| Q8_0 | 1.8 GB | Highest | Maximum quality |

---

## Research

For technical details on Nested Cognitive Reasoning, see our research paper:

**[Nested Cognitive Reasoning: A Tree-Structured Approach to Language Model Thinking](https://hitonet.com/research)**
*Hitonet Research, 2025*

---

## Licensing

| Component | License | Commercial Use |
|-----------|---------|----------------|
| **Model Weights** | Apache 2.0 | ✅ Free |
| **NCR Methodology** | CC BY-NC-ND | ⚠️ License Required |

The model weights are fully open source under Apache 2.0. The Nested Cognitive Reasoning methodology (cognitive tags, tree-structured thinking, humble tags system) is protected under CC BY-NC-ND. Commercial use of the NCR method requires a license.

**Contact:** legal@hitonet.com

---

## Links

- **Website:** [hitonet.com](https://hitonet.com)
- **Chat:** [chat.hitonet.com](https://chat.hitonet.com)
- **API:** [platform.hitonet.com](https://platform.hitonet.com)
- **Research:** [hitonet.com/research](https://hitonet.com/research)
- **Blog:** [hitonet.com/blog](https://hitonet.com/blog)

---
Made with genuine curiosity by Hitonet
Teaching AI to think, doubt, and learn.