# Blueprint Extraction Large Learning Agent (BELLA) v1.0
BELLA is a LLaVA 1.6 Vicuna 13B model finetuned on the CADCODER/GenCAD-Code dataset to convert blueprint images into CadQuery code.
## Evaluation
The model was evaluated on the 100-example test split of the CADCODER/GenCAD-Code dataset.
| Metric | Score |
|---|---|
| Intersection over Union (IoU) | 0.7505 |
| Valid Generated Solid Rate | 0.98 |
| ROUGE-L | 0.8454 |
| BLEU | 0.7456 |
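
The IoU score measures volumetric overlap between the generated solid and the ground-truth solid. The card does not include the evaluation harness, but a minimal sketch of a volumetric IoU between two CadQuery solids could look like the following (the `solid_iou` helper is an illustrative assumption, not the actual evaluation code, and real evaluation would also need to normalize pose and scale):

```python
import cadquery as cq

def solid_iou(pred: cq.Workplane, gt: cq.Workplane) -> float:
    """Volumetric IoU between two CadQuery solids (illustrative sketch)."""
    # Volumes of the boolean intersection and union of the two solids.
    # Assumes both workplanes hold valid, overlapping solids.
    inter_vol = pred.intersect(gt).val().Volume()
    union_vol = pred.union(gt).val().Volume()
    return inter_vol / union_vol if union_vol > 0 else 0.0

# Example: two 2x2x2 boxes offset by 1 along x.
# Overlap volume = 4, union volume = 12, so IoU ~= 0.333.
a = cq.Workplane("XY").box(2, 2, 2)
b = cq.Workplane("XY").box(2, 2, 2).translate((1, 0, 0))
print(solid_iou(a, b))
```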
## How to Use
```python
import torch
from transformers import AutoProcessor, LlavaNextForConditionalGeneration
from datasets import load_dataset

model_path = "StoryGold/bella-v1.0-13b"
dataset_name = "CADCODER/GenCAD-Code"

# Stream the test split and grab a single example.
dataset = load_dataset(dataset_name, split="test", streaming=True)
example = next(iter(dataset))
image = example["image"]
prompt_text = example["prompt"]
ground_truth_code = example["cadquery"]
print("Dataset example loaded successfully.")
print(f"Prompt: '{prompt_text}'")

# Load the model in bfloat16 and shard it across available devices.
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_path)

# Vicuna-style chat template; the <image> token marks where the
# image features are spliced into the prompt.
chat_template = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    f"USER: <image>\n{prompt_text} ASSISTANT:"
)

inputs = processor(text=chat_template, images=image, return_tensors="pt").to(model.device)

# Greedy decoding; CadQuery programs can be long, so allow many new tokens.
with torch.inference_mode():
    output_ids = model.generate(**inputs, max_new_tokens=4096, do_sample=False)
output_text = processor.decode(output_ids[0], skip_special_tokens=True)
```
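
The decoded string echoes the full conversation, including the prompt. A small post-processing step (an illustrative sketch, not part of the snippet above) can isolate the generated CadQuery program, which can then be compared against `ground_truth_code` or executed in an environment with `cadquery` installed:

```python
# The decoded sequence includes the prompt; keep only the assistant's reply.
generated_code = output_text.split("ASSISTANT:")[-1].strip()
print(generated_code)
```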
## Base Model
[llava-hf/llava-v1.6-vicuna-13b-hf](https://huggingface.co/llava-hf/llava-v1.6-vicuna-13b-hf)