# FinePDFs-Edu v2 classifier (English)
## Model summary
This is a classifier for judging the educational value of web pages. It was developed to filter and curate educational content from web datasets and was trained on 1304547 annotations generated by Qwen3-235B-A22B-Instruct-2507 for web samples from the FinePDFs dataset.
Unlike the original FineWeb-Edu classifier, we do not filter for undergraduate-level content, which results in a high inclusion of academic papers!
## How to use in transformers
To load the FinePDFs-Edu-v2 classifier, use the following code:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import re

CHUNK_SIZE = 2048 - 2
MAX_CHARS = 10_000

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceFW/finepdfs_edu_classifier_v2_eng_Latn")
model = AutoModelForSequenceClassification.from_pretrained("HuggingFaceFW/finepdfs_edu_classifier_v2_eng_Latn")

regex_whitespace = re.compile(r"\s")


def create_text_chunks(text: str, tokenizer):
    def trim_to_whitespace(text: str, trim_start: bool = True, trim_end: bool = True):
        # Drop any partial word that token-level truncation may have left at the edges
        if trim_start:
            match = regex_whitespace.search(text)
            if match:
                text = text[match.start() + 1:]
            else:
                text = text[10:]
        if trim_end:
            match = regex_whitespace.search(text[::-1])
            if match:
                text = text[:len(text) - match.start() - 1]
            else:
                text = text[:-10]
        return text

    # Speed hack: we tokenize at most MAX_CHARS characters from each end of the text
    if len(text) <= 2 * MAX_CHARS:
        tokens = tokenizer.encode(text[:MAX_CHARS], return_tensors="np", add_special_tokens=False)[0]
        # Process the top chunk only
        chunks_from_top_sampled = [tokens[:CHUNK_SIZE]]
        chunks_top_text = tokenizer.batch_decode(chunks_from_top_sampled, skip_special_tokens=True)
        chunks_top_text = [trim_to_whitespace(chunks_top_text[0], trim_start=False, trim_end=True)]
        return chunks_top_text
    else:
        # We tokenize the top and the bottom of the text
        text_top = text[:MAX_CHARS]
        text_bottom = text[-MAX_CHARS:]
        tokens = tokenizer.batch_encode_plus([text_top, text_bottom], return_tensors="np", add_special_tokens=False)["input_ids"]
        # This ensures that the second chunk is always maxed out
        chunks = [tokens[0][:CHUNK_SIZE], tokens[1][-CHUNK_SIZE:]]
        chunks_text = tokenizer.batch_decode(chunks, skip_special_tokens=True)
        chunks_top_text = [trim_to_whitespace(chunks_text[0], trim_start=False, trim_end=True)]
        chunks_bottom_text = [trim_to_whitespace(chunks_text[1], trim_start=True, trim_end=False)]
        return chunks_top_text + chunks_bottom_text


text = "This is a test sentence." * 2000
chunks = create_text_chunks(text, tokenizer)
scores = []
for chunk in chunks:
    inputs = tokenizer(chunk, return_tensors="pt", padding="longest", truncation=True)
    outputs = model(**inputs)
    logits = outputs.logits.squeeze(-1).float().detach().numpy()
    score = logits.item()
    scores.append(score)
print(max(scores))
```
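Once each document has a score, corpus curation reduces to a threshold comparison. The sketch below is a hypothetical filtering step in pure Python; the threshold value of 3.0 is an assumption for illustration, not a recommendation from this card.

```python
# Hypothetical filtering step: keep documents whose predicted educational
# score clears a chosen threshold. The value 3.0 is an assumption; tune it
# to your own precision/recall needs.
THRESHOLD = 3.0


def filter_by_score(docs_with_scores, threshold=THRESHOLD):
    """Return only the documents whose classifier score is >= threshold."""
    return [doc for doc, score in docs_with_scores if score >= threshold]


docs = [("intro to calculus", 4.2), ("spam page", 0.3), ("news blurb", 2.1)]
print(filter_by_score(docs))  # -> ['intro to calculus']
```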
## Training
The classifier was trained on 13824480 pairs of web samples and their scores from 0 to 5, generated by Qwen3-235B-A22B-Instruct-2507. The samples were annotated based on their educational quality, with 0 being not educational and 5 being highly educational.
Below is the prompt used for Qwen3-235B-A22B-Instruct-2507 annotations:
Below is an extract from a PDF file. Evaluate whether the extract exhibits properties suitable for educational training data using the 6-point scoring system described below. Select the single score that best represents the extract's educational quality level:
**Score 0: No Educational Value**
- Award 0 points for content with zero educational merit including spam, promotional material, garbled text, random sequences, severely corrupted formatting, or content that provides no learning opportunities whatsoever.
**Score 1: Minimal Educational Content**
- Award 1 point for content with very limited educational value such as basic data listings, simple contact information, minimal factual statements without context, brief announcements, or content that presents isolated facts without meaningful educational framework.
**Score 2: Basic Informational Content**
- Award 2 points for content that provides basic information but lacks depth, context, or clear educational structure. This includes simple news items, basic product descriptions, brief summaries, casual observations, or informational content that states facts without explanation or educational development.
**Score 3: Moderate Educational Value**
- Award 3 points for content that offers solid educational information with some context and explanation. This includes informative articles with background information, basic explanatory content, introductory-level material, general knowledge content, or well-written informational pieces that provide context and some depth.
**Score 4: Strong Educational Content**
- Award 4 points for content with clear educational merit featuring detailed explanations, multiple perspectives, analytical depth, or comprehensive coverage of topics. This includes academic articles, detailed tutorials, in-depth analyses, research-based content, or material that demonstrates critical thinking and provides substantial learning value.
**Score 5: Exceptional Educational Value**
- Award 5 points for content with outstanding educational merit that demonstrates expert-level knowledge, sophisticated analysis, comprehensive understanding, and significant pedagogical value. This includes advanced academic research, expert commentary with deep insights, comprehensive educational material with multiple learning dimensions, or content that advances understanding through original thinking and thorough exploration.
## Evaluation Process
The extract: {example}
After examining the extract:
- Briefly justify your total score, focusing on the educational depth, context provided, and learning potential, up to 100 words.
- Conclude with the score using the format: "Educational value score: <total points>"
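Because the prompt forces the reply to end with the fixed string `Educational value score: <total points>`, the label can be recovered from the annotator's output with a simple regex. A minimal sketch (the example reply text below is invented):

```python
import re


def parse_edu_score(llm_reply: str):
    """Extract the integer score from an annotation reply, or None if absent."""
    match = re.search(r"Educational value score:\s*(\d)", llm_reply)
    return int(match.group(1)) if match else None


reply = "The extract provides context, depth, and clear explanations. Educational value score: 4"
print(parse_edu_score(reply))  # -> 4
```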
We added a classification head with a single regression output to answerdotai/ModernBERT-large, unfroze the last 4 layers, and trained the model for 5000 steps with a learning rate of 3e-4.
Training Details:
- Model: answerdotai/ModernBERT-large with a classification head
- Dataset: 13824480 samples from Qwen3-235B-A22B-Instruct-2507 annotations
- Steps: 5000
- Learning Rate: 3e-4
- Class distribution: {0: 2304080, 1: 2304080, 2: 2304080, 3: 2304080, 4: 2304080, 5: 2304080}
- Evaluation Metric: F1 score
## Classification report
We treat the regression model's predictions as discrete classes to calculate the metrics on a hold-out set of 10000 Qwen3-235B-A22B-Instruct-2507-annotated samples.
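Since the head is a regressor, its continuous outputs must be discretized before classification metrics can be computed. One plausible mapping is rounding and clipping to the 0-5 range; note this exact rule is an assumption, as the card does not spell it out:

```python
def to_class(score: float) -> int:
    """Map a continuous regression output to a discrete class in [0, 5].
    Round-then-clip is an assumption; the card does not state the exact rule."""
    return max(0, min(5, round(score)))


print([to_class(s) for s in [-0.4, 1.2, 3.6, 5.8]])  # -> [0, 1, 4, 5]
```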
Validation Report:
| class | precision | recall | f1-score | support |
|--------:|------------:|---------:|-----------:|----------:|
| 0 | 0.5 | 0.88 | 0.64 | 1122 |
| 1 | 0.89 | 0.61 | 0.72 | 7887 |
| 2 | 0.46 | 0.69 | 0.55 | 3412 |
| 3 | 0.62 | 0.58 | 0.6 | 3568 |
| 4 | 0.66 | 0.6 | 0.63 | 3134 |
| 5 | 0.47 | 0.7 | 0.56 | 877 |
## Confusion matrix
We verify that the predicted educational scores are indeed close to the ground truth, with most errors attributable to the noisy annotations.
Confusion Matrix:
| class | 0 | 1 | 2 | 3 | 4 | 5 |
|---------:|----:|-----:|-----:|-----:|-----:|----:|
| 0 | 983 | 133 | 5 | 1 | 0 | 0 |
| 1 | 972 | 4823 | 1952 | 128 | 11 | 1 |
| 2 | 8 | 437 | 2351 | 581 | 34 | 1 |
| 3 | 1 | 34 | 734 | 2072 | 697 | 30 |
| 4 | 2 | 2 | 63 | 534 | 1873 | 660 |
| 5 | 0 | 0 | 2 | 18 | 243 | 614 |
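The per-class precision and recall in the validation report above can be reproduced directly from this matrix (rows are true classes, columns are predictions):

```python
# Confusion matrix from the table above: rows = true class, cols = predicted class.
cm = [
    [983, 133, 5, 1, 0, 0],
    [972, 4823, 1952, 128, 11, 1],
    [8, 437, 2351, 581, 34, 1],
    [1, 34, 734, 2072, 697, 30],
    [2, 2, 63, 534, 1873, 660],
    [0, 0, 2, 18, 243, 614],
]

for c in range(6):
    tp = cm[c][c]
    recall = tp / sum(cm[c])                    # row sum = class support
    precision = tp / sum(row[c] for row in cm)  # column sum = predicted count
    print(f"class {c}: precision={precision:.2f} recall={recall:.2f}")
```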
## Limitations
While the FinePDFs-Edu-v2 classifier performs well in distinguishing high-quality educational content within the FinePDFs dataset, there are some limitations:
- Scope: The model's performance might change on other datasets, in particular on out-of-distribution samples. Unlike the original FineWeb-Edu classifier, it is NOT focused on educational content relevant to primary and grade school levels, and it may not perform as well on content intended for undergraduate levels or specialized domains.
- Bias: The model's performance depends on the quality and representativeness of the training data and of the LLM used for annotation. Biases in both can affect the classifier's judgments. It might overfit to academic-looking content at the higher scores, although we have not found this to inflate its scores.
- Context: The classifier evaluates individual web pages or extracts without considering broader context, which might impact its effectiveness in certain scenarios.
The training and inference code is available on GitHub: https://github.com/huggingface/finepdfs/tree/main/classification