Questions about TxT360 coverage and historical web data
Hi TxT360 team and community,
Thanks for releasing TxT360 — it’s an exciting resource for open, web-scale text research!
I’m working on a project involving historical web content, and I’d love some guidance:
Does TxT360 include any older web data (e.g., pre-2008), or is it mainly focused on more recent sources?
Are there recommended datasets or strategies for covering 2008–2013 web pages that could complement TxT360?
Have community members tried combining TxT360 with other large-scale archives (e.g., Common Crawl, Internet Archive) to reconstruct earlier web snapshots? Any best practices or tools for deduplication and integration would be greatly appreciated.
Thanks in advance for any tips or links you can share!😀
Best,
Haride
Thanks for your interest in our work.
TxT360 is built from Common Crawl, but we parsed and filtered it to make it more suitable for training. So in principle it dates back as far as the Common Crawl dumps do. The earliest snapshots are from around 2008, so pre-2008 web data will only be included if those pages were still online when they were crawled.
Our team is actually working on expanding TxT360, including combining it with other sources such as HPLT. Hopefully those have earlier snapshots, since their sources include the Internet Archive. Stay tuned for our next release!
Deduplication is a pretty heavy workload; we hope to create a deduped version of the combined datasets and provide it soon. In the meantime, you can take a look at tools like Datatrove [1] or the SlimPajama code [2] if you really need to do it yourself. Our own dedup code is also available [3], but its setup targets quite a large scale and is specific to our environment.
By the way, "reconstructing earlier web snapshots" is not one of our goals; we aim to curate high-quality, unique natural tokens for model training, regardless of their timestamps. I assume the two goals have something in common, though.
[1] https://github.com/huggingface/datatrove
[2] https://github.com/Cerebras/modelzoo/tree/main/src/cerebras/modelzoo/data_preparation/data_preprocessing/data_dedup
[3] https://github.com/LLM360/TxT360/tree/main/deduplication
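To give a quick sense of what document-level deduplication involves before reaching for the heavier pipelines above, here is a minimal sketch of exact-match dedup via content hashing in plain Python. The function name and the normalization step (strip + lowercase) are illustrative choices, not taken from any of the tools cited; real pipelines like Datatrove typically add near-duplicate detection (e.g., MinHash) on top of exact matching.

```python
import hashlib

def dedup_exact(docs):
    """Drop exact-duplicate documents by hashing lightly normalized text.

    Keeps the first occurrence of each distinct document; later copies
    that hash to the same digest are discarded.
    """
    seen = set()
    unique = []
    for doc in docs:
        # Normalize whitespace and case so trivial variants collide.
        key = hashlib.sha256(doc.strip().lower().encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique

docs = ["Hello world", "hello world ", "Different text"]
print(dedup_exact(docs))  # -> ['Hello world', 'Different text']
```

At web scale the `seen` set would not fit in one process's memory, which is why the production tools shard documents by hash prefix across many workers first; the logic per shard stays essentially this simple.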
Thank you very much for your detailed response and for sharing these valuable resources. Your clarification about the temporal coverage of TxT360 and the pointers to deduplication tools are extremely helpful for my work.
I will look into the resources you mentioned, and I’m also very much looking forward to your team’s upcoming releases, especially the integration with HPLT and other sources.
Thanks again for your guidance and for the important work your team is doing with TxT360.
Best regards,
Haride