InedxsAI - PEFT LoRA Adapter

Fine-tuned LoRA adapter for Qwen2-7B-Instruct specialized in conversational AI and instruction following.

πŸ“‹ Model Details

  • Base Model: Qwen/Qwen2-7B-Instruct
  • Training Method: PEFT (Parameter-Efficient Fine-Tuning)
  • Model Type: PEFT adapter (comprehensive fine-tune; see note below)
  • Languages: French (primary), English
  • Adapter Size: ~11.5 GB
  • License: Apache 2.0

Note: This adapter is a comprehensive fine-tune that modifies a large share of the model's parameters, so at ~11.5 GB it is far larger than a typical LoRA adapter (usually tens to a few hundred MB for a 7B model). The broader parameter coverage allows more substantial adaptation while remaining loadable through standard PEFT tooling.

πŸš€ Quick Start

Installation

pip install transformers peft torch accelerate

(accelerate is needed for the device_map="auto" option used below.)

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-7B-Instruct",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)

# Load adapter
model = PeftModel.from_pretrained(
    base_model,
    "InedxsAI/Inedxs.AI"
)

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(
    "Qwen/Qwen2-7B-Instruct",
    trust_remote_code=True
)

# Generate ("Bonjour, qui es-tu ?" = "Hello, who are you?")
prompt = "Bonjour, qui es-tu ?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
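
With an instruct-tuned base model, prompts generally work better when formatted through the tokenizer's chat template instead of being passed raw. A minimal sketch, reusing the model and tokenizer loaded above (the system message is an illustrative choice, not part of this card):

# Chat-formatted generation; the system message is illustrative
messages = [
    {"role": "system", "content": "Tu es un assistant utile."},  # "You are a helpful assistant."
    {"role": "user", "content": "Bonjour, qui es-tu ?"},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, not the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))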

πŸ“Š Training Details

Training Configuration

  • Framework: PEFT 0.17.1
  • LoRA Rank: not stated in this card; tuned for conversational tasks
  • Target Modules: Attention and feedforward layers (an illustrative configuration follows below)
  • Training Precision: Mixed precision (FP16/BF16)
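
For reference, a PEFT configuration consistent with the description above might look like the sketch below. The rank, alpha, and dropout values are assumptions for illustration; the actual training settings are not stated in this card.

from peft import LoraConfig

# Illustrative only: r, lora_alpha, and lora_dropout are assumed values,
# not the settings used to train this adapter
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=[
        # Qwen2 attention projections
        "q_proj", "k_proj", "v_proj", "o_proj",
        # Qwen2 feedforward projections
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)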

Training Data

Fine-tuned on conversational datasets focusing on:

  • Instruction following
  • Dialogue and conversation
  • Question answering
  • Task completion

🎯 Intended Uses

Primary Use Cases

  • Conversational AI assistants
  • Instruction-following chatbots
  • Interactive question-answering systems
  • General-purpose dialogue agents

Out-of-Scope Uses

  • Medical diagnosis or advice
  • Legal counseling
  • Financial advice without proper disclaimers
  • Generating harmful or malicious content

⚑ Performance

This adapter enhances Qwen2-7B-Instruct's capabilities in:

  • Conversational coherence
  • Instruction understanding
  • Task-specific responses
  • Multi-turn dialogue (see the sketch below)
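
As a sketch of the multi-turn case, each completed exchange is appended to the message history before the chat template is re-applied (model and tokenizer as loaded in Quick Start; the follow-up question is illustrative):

# Multi-turn dialogue: carry the full history into every generation call
messages = [{"role": "user", "content": "Bonjour, qui es-tu ?"}]  # "Hello, who are you?"
for _ in range(2):  # two assistant turns, for illustration
    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256)
    reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": "Peux-tu en dire plus ?"})  # "Can you say more?"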

πŸ”§ Integration Options

Option 1: Merge with Base Model

# Merge the adapter into the base weights for standalone inference
merged_model = model.merge_and_unload()
merged_model.save_pretrained("./merged_model")
tokenizer.save_pretrained("./merged_model")  # keep the tokenizer with the merged weights
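
Once saved, the merged checkpoint loads like any standalone transformers model, with no peft dependency:

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the merged model directly; PEFT is no longer required
reloaded = AutoModelForCausalLM.from_pretrained(
    "./merged_model",
    torch_dtype=torch.float16,
    device_map="auto"
)
reloaded_tokenizer = AutoTokenizer.from_pretrained("./merged_model")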

Option 2: Convert to GGUF

For efficient inference with llama.cpp or Ollama, see the quantized version: πŸ‘‰ InedxsAI/Inedxs.AI-GGUF
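
To produce a GGUF file yourself from the merged checkpoint, llama.cpp ships a conversion script. A sketch, assuming a recent llama.cpp checkout (the script name and flags have changed across versions, so check your copy; the output filename is an arbitrary choice):

# Run from a llama.cpp checkout
python convert_hf_to_gguf.py ./merged_model --outfile inedxs-ai-q8_0.gguf --outtype q8_0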

πŸ›‘οΈ Limitations & Biases

  • Inherits limitations from base Qwen2-7B-Instruct model
  • May exhibit biases present in training data
  • Performance varies with prompt quality and context
  • Not suitable for critical decision-making without human oversight

πŸ“œ Citation

@misc{inedxsai2025,
  title={InedxsAI: PEFT Adapter for Qwen2-7B-Instruct},
  author={InedxsAI},
  year={2025},
  howpublished={\url{https://huggingface.co/InedxsAI/Inedxs.AI}}
}

πŸ”— Related Resources

  • InedxsAI/Inedxs.AI-GGUF: quantized version for llama.cpp and Ollama
  • Qwen/Qwen2-7B-Instruct: base model

πŸ“§ Contact

For questions or issues, please open an issue on the model repository.


Made with ❀️ by InedxsAI
