# Function-Calling Fine-tuned Model: gemma-3-270m-it-function-tuned-202601231532

This model was fine-tuned using the Function-Calling Fine-Tuner. It is trained to emit structured JSON tool calls from a user instruction and a list of available tools. The repository includes a `system_prompt_format.txt` file, which defines the exact prompt structure the model was trained on.

## Prompt Format

The model was trained with the following system prompt structure. The `{tool_descriptions}` placeholder should be replaced with the list of tools available for a given task.

````
You are a function calling AI model. Given a user query, the following tools are available:
{tool_descriptions}

Use this format to respond:
```json
{{
  "thought": "Based on the query, I need to use [tool_name] because [your reasoning].",
  "tool": "tool_name",
  "arguments": {{
    "arg_name1": "value1"
  }}
}}
```

Respond with only the JSON object for the correct tool call.
````
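
For example, if a `search` tool is listed among the available tools and the user asks a question that requires a web lookup, a well-formed reply is a single JSON object like the one below (the tool name, reasoning, and query are illustrative):

```json
{
  "thought": "Based on the query, I need to use search because the answer requires up-to-date information from the web.",
  "tool": "search",
  "arguments": {
    "query": "current weather in New York"
  }
}
```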


## Model Details

*   **Base Model:** `google/gemma-3-270m-it`
*   **Curriculum Training (Replay of Base Model's Data):** `0%`
## Training Data

*   **Dataset:** `broadfield-dev/gemma-3-refined-tool-data-1769172431` (a loading sketch follows this list)
*   **Instruction Column:** `instruction`
*   **Output Column:** `output`
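
To inspect the training data, you can load the dataset with the `datasets` library. This is a minimal sketch; it assumes the dataset is publicly accessible and exposes a `train` split:

```python
from datasets import load_dataset

# Load the fine-tuning data (assumed to be public with a "train" split).
ds = load_dataset("broadfield-dev/gemma-3-refined-tool-data-1769172431", split="train")

# Each row pairs a user instruction with the expected JSON tool-call output.
print(ds.column_names)        # expected to include "instruction" and "output"
print(ds[0]["instruction"])
print(ds[0]["output"])
```
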
## How to Use

Load the `system_prompt_format.txt` file and inject your tool definitions into the `{tool_descriptions}` placeholder.

**Example:**
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from huggingface_hub import hf_hub_download
import json

repo_id = "broadfield-dev/gemma-3-270m-it-function-tuned-202601231532"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Load the prompt template from the Hub
prompt_template_path = hf_hub_download(repo_id=repo_id, filename="system_prompt_format.txt")
with open(prompt_template_path, 'r') as f:
    prompt_template = f.read()

# Define your tools and format them as a string
my_tools_string = '- Tool: `search`\n  - Description: Searches the web.\n  - Arguments: {"query": {"type": "string"}}'
system_prompt = prompt_template.format(tool_descriptions=my_tools_string)

instruction = "What is the weather in New York?"
chat = [
  {"role": "system", "content": system_prompt},
  {"role": "user", "content": instruction},
]

prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
# Decode only the newly generated tokens
response_text = tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True)
print(response_text)
```
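
Since the model is trained to respond with only a JSON object, the generated text can be parsed and dispatched to your own tool implementations. The snippet below is a minimal sketch, not part of the repository: the regex extraction and the `TOOLS` registry with its stand-in `search` function are assumptions you would replace with real code.

```python
import json
import re

def parse_tool_call(raw_text: str) -> dict:
    """Extract the first JSON object from the model's reply, tolerating stray text or code fences."""
    match = re.search(r"\{.*\}", raw_text, re.DOTALL)
    if match is None:
        raise ValueError("No JSON object found in model output")
    return json.loads(match.group(0))

# Hypothetical tool registry; swap in your real implementations.
TOOLS = {
    "search": lambda query: f"(search results for {query!r})",
}

call = parse_tool_call(response_text)  # response_text comes from the example above
result = TOOLS[call["tool"]](**call["arguments"])
print(call["thought"])
print(result)
```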