# gemma2-2b-sa-1k-4096

This model is a vocabulary-expanded version of gemma2-2b for Sanskrit.

## Training Details

| Parameter        | Value      |
|------------------|------------|
| Base Model       | gemma2-2b  |
| Target Language  | Sanskrit   |
| Training Samples | 1,000      |
| Added Tokens     | 4,096      |

## Method

  1. Stage 1: Initialize embeddings for the 4,096 newly added tokens
  2. Stage 2: Full-model fine-tuning using LoRA (see the sketch below)
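
The exact training script is not published in this card, but the two stages map onto a fairly standard vocabulary-expansion recipe with `transformers` and `peft`. The sketch below is illustrative only: the mean-initialization of new embeddings, the placeholder token list, and the LoRA hyperparameters (`r`, `lora_alpha`, target modules) are assumptions, not the values used to train this model.

```python
# Minimal sketch of the two-stage recipe; initialization scheme and LoRA
# hyperparameters are illustrative assumptions, not published values.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b")

# Stage 1: extend the vocabulary and initialize the new embedding rows.
new_tokens = ["धर्म", "संस्कृत"]  # placeholders; the real model adds 4,096 Sanskrit tokens
tokenizer.add_tokens(new_tokens)
old_size = model.get_input_embeddings().weight.shape[0]
model.resize_token_embeddings(len(tokenizer))
with torch.no_grad():
    emb = model.get_input_embeddings().weight
    # One common choice: start each new row at the mean of the old embeddings.
    emb[old_size:] = emb[:old_size].mean(dim=0)

# Stage 2: fine-tune with LoRA, keeping the resized embeddings trainable.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    modules_to_save=["embed_tokens"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
# ...then train on the 1,000 Sanskrit samples with a standard Trainer loop.
```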

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Intellexus/gemma2-2b-sa-1k-4096")
tokenizer = AutoTokenizer.from_pretrained("Intellexus/gemma2-2b-sa-1k-4096")

text = "Your text here"  # e.g., a Sanskrit prompt in Devanagari
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
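
As an optional sanity check (not part of the card), the vocabulary expansion can be verified by comparing against the base `google/gemma-2-2b` tokenizer: the size difference should reflect the 4,096 added tokens, and Sanskrit text should generally split into fewer pieces.

```python
# Optional sanity check of the expanded vocabulary.
from transformers import AutoTokenizer

base_tok = AutoTokenizer.from_pretrained("google/gemma-2-2b")
sa_tok = AutoTokenizer.from_pretrained("Intellexus/gemma2-2b-sa-1k-4096")

print(len(sa_tok) - len(base_tok))  # expected: 4096

# The expanded vocabulary should usually tokenize Sanskrit more compactly.
sample = "धर्मो रक्षति रक्षितः"
print(len(base_tok.tokenize(sample)), len(sa_tok.tokenize(sample)))
```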

## Citations

### Gemma 2 (Base Model)

```bibtex
@article{gemma2024,
    title = "Gemma 2: Improving Open Language Models at a Practical Size",
    author = "{Gemma Team, Google DeepMind}",
    journal = "arXiv preprint arXiv:2408.00118",
    year = "2024",
    url = "https://arxiv.org/abs/2408.00118",
}
```

### CC-100 (Training Data)

```bibtex
@inproceedings{conneau-etal-2020-unsupervised,
    title = "Unsupervised Cross-lingual Representation Learning at Scale",
    author = "Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzman, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    year = "2020",
    url = "https://aclanthology.org/2020.acl-main.747",
}

@inproceedings{wenzek-etal-2020-ccnet,
    title = "{CCN}et: Extracting High Quality Monolingual Datasets from Web Crawl Data",
    author = "Wenzek, Guillaume and Lachaux, Marie-Anne and Conneau, Alexis and Chaudhary, Vishrav and Guzman, Francisco and Joulin, Armand and Grave, Edouard",
    booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
    year = "2020",
    url = "https://aclanthology.org/2020.lrec-1.494",
}
```

### Saamayik (Sanskrit Parallel Data)

```bibtex
@inproceedings{maheshwari-etal-2024-samayik,
    title = "Sāmayik: A Benchmark and Dataset for {E}nglish-{S}anskrit Translation",
    author = "Maheshwari, Ayush and Gupta, Ashim and Krishna, Amrith and Singh, Atul Kumar and Ramakrishnan, Ganesh and Kumar, G. Anil and Singla, Jitin",
    booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
    month = may,
    year = "2024",
    address = "Torino, Italia",
    publisher = "ELRA and ICCL",
    url = "https://aclanthology.org/2024.lrec-main.1245",
    pages = "14244--14252",
}
```

### Model Citation

```bibtex
@misc{intellexus-gemma2-2b-sa-1k-4096,
    author = "Intellexus",
    title = "gemma2-2b-sa-1k-4096",
    year = "2025",
    publisher = "Hugging Face",
    url = "https://huggingface.co/Intellexus/gemma2-2b-sa-1k-4096",
}
```