# LOFT RAG - Natural Questions (128k)

## Dataset Description
This dataset is part of the LOFT (Long-context Open Foundation Tasks) benchmark, specifically the RAG (Retrieval-Augmented Generation) task.
- Dataset: Natural Questions
- Context Length: 128k
- Task Type: RAG (Retrieval-Augmented Generation)
- Language: English
- Source: LOFT Benchmark (Google DeepMind)
## Dataset Structure

### Data Fields

- `context` (string): Full prompt context, including corpus documents and few-shot examples
- `question` (string): Query separator, query format, and query text
- `answer_prefix` (string): Prefix for answer generation (`"Final Answer: "`)
- `answers` (list[string]): Ground-truth answers
- `task` (string): Task identifier (e.g., `"nq_128k"`)
- `max_new_tokens` (int64): Maximum tokens for generation (256)
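The string fields are designed to be concatenated into a single model prompt. A minimal sketch, assuming the concatenation order implied by the corpus-in-context format (`build_prompt` is a hypothetical helper, and the sample values are illustrative stand-ins for real records):

```python
def build_prompt(sample: dict) -> str:
    # Concatenate the pre-built corpus context, the query block, and the
    # answer prefix. The order is an assumption based on the field
    # descriptions above, not an official LOFT API.
    return sample["context"] + sample["question"] + sample["answer_prefix"]


# Illustrative record with the schema described above (values abbreviated).
sample = {
    "context": "Document [1] ...\n",
    "question": "====== Now let's start! ======\n...query text...\n",
    "answer_prefix": "Final Answer: ",
    "answers": ["the following day"],
    "task": "nq_128k",
    "max_new_tokens": 256,
}

prompt = build_prompt(sample)
print(prompt.endswith("Final Answer: "))  # → True
```

The model's completion is then generated directly after the `"Final Answer: "` prefix, capped at `max_new_tokens` tokens.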
### Data Splits

- `dev`: Development set (10 examples)
- `test`: Test set (100 examples)
## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("loft-rag-nq-128k")

# Access splits
dev_data = dataset["dev"]
df_dev = dev_data.to_pandas()
test_data = dataset["test"]
df_test = test_data.to_pandas()

# Example usage
sample = dataset["dev"][0] if "dev" in dataset else dataset["test"][0]
context = sample["context"]
question = sample["question"]
answers = sample["answers"]
```
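Because `answers` is a list of acceptable ground-truth strings, a generated completion can be scored by matching it against any entry. A minimal sketch of such a scorer (`exact_match` is a hypothetical helper; LOFT's official evaluation may apply different normalization):

```python
def exact_match(prediction: str, answers: list[str]) -> bool:
    # Case-insensitive exact match against any acceptable answer.
    # Assumption: only whitespace and case are normalized here; the
    # official LOFT scorer may normalize punctuation or articles too.
    pred = prediction.strip().lower()
    return any(pred == a.strip().lower() for a in answers)


print(exact_match("The Following Day", ["the following day"]))  # → True
print(exact_match("next week", ["the following day"]))          # → False
```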
## Dataset Creation
This dataset was converted from LOFT's original format to HuggingFace format using exact LOFT prompt construction to ensure 100% fidelity.
- Prompt Construction: Uses LOFT's `PromptRegistry` and `concatenate_chunks()` for exact prompt matching
- Few-shot Examples: Preserved exactly as in LOFT (5 examples)
- Corpus Documents: Full corpus included in the context (corpus-in-context approach)
- Verification: All prompts verified to match the LOFT originals exactly
## Related Datasets

All LOFT RAG datasets are available under the `loft-rag-*` namespace:
- Main Index - Overview of all datasets
## Citation

```bibtex
@article{loft2024,
  title={LOFT: Long-context Open Foundation Tasks},
  author={Google DeepMind},
  year={2024},
  url={https://github.com/google-deepmind/loft}
}
```
## License
Apache 2.0