---
language:
  - th
license: cc-by-sa-4.0
size_categories:
  - 10K<n<100K
task_categories:
  - text-generation
  - question-answering
  - summarization
  - text-classification
tags:
  - finance
  - legal
  - retail
  - medical
dataset_info:
  - config_name: default
    features:
      - name: ID
        dtype: string
      - name: Domain
        dtype: string
      - name: Instruction
        dtype: string
      - name: Input
        dtype: string
      - name: Output
        dtype: string
      - name: Tags
        dtype: string
      - name: Task_type
        dtype: string
      - name: License
        dtype: string
    splits:
      - name: train
        num_bytes: 310301464
        num_examples: 32207
      - name: test
        num_bytes: 77188084
        num_examples: 7793
    download_size: 127947319
    dataset_size: 387489548
  - config_name: full
    features:
      - name: ID
        dtype: string
      - name: Domain
        dtype: string
      - name: Instruction
        dtype: string
      - name: Input
        dtype: string
      - name: Output
        dtype: string
      - name: Tags
        dtype: string
      - name: Task_type
        dtype: string
      - name: License
        dtype: string
    splits:
      - name: train
        num_bytes: 310298412
        num_examples: 32207
      - name: test
        num_bytes: 77187609
        num_examples: 7793
    download_size: 127951575
    dataset_size: 387486021
  - config_name: paper
    features:
      - name: ID
        dtype: string
      - name: Domain
        dtype: string
      - name: Instruction
        dtype: string
      - name: Input
        dtype: string
      - name: Output
        dtype: string
      - name: Tags
        dtype: string
      - name: Task_type
        dtype: string
      - name: License
        dtype: string
      - name: Cultural
        dtype: string
    splits:
      - name: train
        num_bytes: 268816999
        num_examples: 28098
      - name: test
        num_bytes: 67733647
        num_examples: 6916
    download_size: 110689550
    dataset_size: 336550646
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
  - config_name: paper
    data_files:
      - split: train
        path: paper/train-*
      - split: test
        path: paper/test-*
---

WangchanThaiInstruct: An Instruction-Following Dataset for Culture-Aware, Multitask, and Multi-domain Evaluation in Thai (EMNLP'25)

WangchanThaiInstruct is a human-authored Thai dataset that improves instruction-following in low-resource settings, capturing cultural and domain-specific nuances across four domains and seven task types.

The evaluation code can be found at this GitHub link.

@inproceedings{limkonchotiwat2025thaiinstruct,
  title     = {WangchanThaiInstruct: An Instruction-Following Dataset for Culture-Aware, Multitask, and Multi-domain Evaluation in Thai},
  author    = {Limkonchotiwat, Peerat and Tuchinda, Pume and Lowphansirikul, Lalita and Nonesung, Surapon and Tasawong, Panuthep and Aji, Alham Fikri and Udomcharoenchaikit, Can and Nutanong, Sarana},
  booktitle = {Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing},
  year      = {2025},
  publisher = {Association for Computational Linguistics}
}

Overview

100% Human-Annotated Thai Instruction Dataset (Batch 1-5 Release, v0.4_beta)

The dataset contains rows under both NC (NonCommercial) and SA (ShareAlike) Creative Commons licenses; please adhere to the license terms. Each row carries the license of its source. We will continue to update licenses as we receive permission from the respective sources.
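Because licensing is set per row via the License column (see the schema above), you can filter the data down to the license terms you can comply with. A minimal sketch with pandas, using toy rows; the License values shown are illustrative, not an exhaustive list of licenses present in the dataset:

```python
import pandas as pd

# Toy rows mirroring part of the dataset schema. The License strings here
# are hypothetical examples for illustration only.
df = pd.DataFrame({
    "ID": ["a1", "a2", "a3"],
    "Domain": ["Medical", "Finance", "Legal"],
    "License": ["cc-by-sa-4.0", "cc-by-nc-4.0", "cc-by-sa-4.0"],
})

# Keep only rows whose source license is CC BY-SA.
sa_only = df[df["License"] == "cc-by-sa-4.0"]
```

The same boolean-mask pattern applies to the real splits after converting them with `to_pandas()`.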

4 Domains:

  • Medical
  • Finance
  • Retail
  • Legal

7 Tasks:

  • Summarization
  • Open QA
  • Close QA
  • Classification
  • Creative Writing
  • Brainstorming
  • Multiple Choice QA

Usage

We provide two subsets for our dataset: default and paper. The default subset is the full dataset; however, it lacks cultural tags for 5,000 samples. The paper subset is the version used in all experiments in the paper and is the one you should use to reproduce our results; it also contains the cultural tags that mark whether each sample is cultural or general.

from datasets import load_dataset

# Default subset (the full dataset; cultural tags missing for 5,000 samples)
dataset = load_dataset("airesearch/WangchanThaiInstruct")

# Paper subset (used in all experiments in the paper)
dataset = load_dataset("airesearch/WangchanThaiInstruct", "paper")