---
license: cc-by-4.0
task_categories:
  - text-retrieval
  - text-classification
  - feature-extraction
language:
  - ja
  - en
pretty_name: TakaraSpider Japanese Web Crawl Dataset
size_categories:
  - 100K<n<1M
tags:
  - web-crawl
  - japanese
  - multilingual
  - html
  - text-extraction
  - nlp
  - cross-cultural
dataset_info:
  features:
    - name: crawl_id
      dtype: string
    - name: timestamp
      dtype: timestamp[ns, tz=UTC]
    - name: url
      dtype: string
    - name: source_url
      dtype: string
    - name: html
      dtype: string
  config_name: default
  data_files:
    - split: train
      path: data/train-*
  default: true
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# TakaraSpider Japanese Web Crawl Dataset

*Figure: Domain Distribution*

## Dataset Summary

TakaraSpider is a large-scale web crawl dataset designed to capture Japanese web content alongside international sources. It contains 257,900 web pages collected through systematic crawling, with a primary focus on Japanese-language content (78.5%) and substantial international representation (21.5%). This makes it well suited to Japanese-English comparative studies, cross-cultural web analysis, and multilingual NLP research.

The dataset was generated by the TakaraSpider crawler, which was engineered to capture high-quality Japanese web content while maintaining broad international coverage.

*Figure: Geographic Distribution*

## Supported Tasks and Leaderboards

- **Text Retrieval**: Large-scale web document retrieval and indexing
- **Language Detection**: Japanese-English-multilingual classification (see the sketch after this list)
- **Content Classification**: Web page categorization (blogs, e-commerce, news, etc.)
- **Cross-Cultural Analysis**: Comparative studies between Japanese and international web content
- **HTML Processing**: Benchmarking for web scraping and content extraction tools
- **Japanese NLP**: Training and evaluation for Japanese language models
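
A minimal sketch for the language-detection task, using the `lang` attribute declared in each page's `<html>` tag. This heuristic is an assumption for illustration; the card does not state how the language percentages below were computed.

```python
import re

from datasets import load_dataset

# Stream the dataset so this sketch stays memory-friendly.
dataset = load_dataset("takarajordan/takaraspider", streaming=True)

LANG_ATTR = re.compile(r'<html[^>]*\blang=["\']?([A-Za-z-]+)', re.IGNORECASE)

def detect_lang(html: str) -> str:
    """Return the declared lang attribute of the <html> tag, or 'unknown'."""
    match = LANG_ATTR.search(html[:2000])  # the <html> tag appears near the top
    return match.group(1).lower() if match else "unknown"

for record in dataset["train"].take(5):
    print(detect_lang(record["html"]), record["url"])
```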

## Languages

- **Japanese (ja)**: 78.5% of content - primary focus with rich representation
- **English (en)**: 5.3% of content - international perspective
- **Other/Unknown**: 16.2% of content - diverse multilingual representation

*Figure: Language Distribution*

## Dataset Structure

### Data Instances

```json
{
  "crawl_id": "a0dde408-769a-44e8-ba44-5b16cdc93ccc",
  "timestamp": "2025-06-13T10:36:59.338661+00:00",
  "url": "https://www.example.co.jp/page",
  "source_url": "https://www.example.co.jp/",
  "html": "<!DOCTYPE html><html lang=\"ja\">..."
}
```

### Data Fields

- `crawl_id` (string): Unique identifier for each crawl session
- `timestamp` (timestamp[ns, tz=UTC]): ISO 8601 formatted crawl timestamp with timezone
- `url` (string): Target URL that was crawled
- `source_url` (string): Referring/source URL (when available)
- `html` (string): Complete raw HTML content of the page
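
A quick way to inspect these fields is to peek at a single streamed record; a minimal sketch, assuming only the 🤗 Datasets library:

```python
from datasets import load_dataset

# Peek at one record without downloading the full dataset.
stream = load_dataset("takarajordan/takaraspider", streaming=True)["train"]
record = next(iter(stream))

for field in ("crawl_id", "timestamp", "url", "source_url"):
    print(f"{field}: {record[field]}")
print(f"html: {len(record['html'])} characters")
```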

### Data Splits

| Split | Examples |
|-------|---------:|
| train | 257,900  |

## Dataset Creation

### Curation Rationale

TakaraSpider was created to address the lack of high-quality, large-scale Japanese web crawl datasets for research purposes. Key objectives:

1. **Japanese Language Focus**: Capture substantial Japanese web content for NLP research
2. **Cultural Representation**: Include diverse Japanese web content types (blogs, news, e-commerce)
3. **International Balance**: Maintain a global perspective with international content
4. **Research Quality**: Ensure clean, structured data suitable for academic and commercial research
5. **Temporal Consistency**: Collect all pages in a single crawl session

*Figure: Content Types*

### Source Data

#### Initial Data Collection and Normalization

The data was collected through systematic web crawling using the TakaraSpider crawler during a concentrated crawling session on June 13, 2025. The crawler was configured to:

- Prioritize Japanese (.jp) domains while maintaining international diversity
- Capture complete HTML content with metadata
- Ensure broad domain coverage (10,590+ unique domains)
- Maintain crawl provenance through unique session IDs
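
The .jp-domain emphasis is easy to spot-check. A minimal sketch, assuming the non-streaming dataset (the column access below loads all URLs into memory):

```python
from collections import Counter
from urllib.parse import urlparse

from datasets import load_dataset

dataset = load_dataset("takarajordan/takaraspider")

# Tally top-level domains to confirm the .jp emphasis.
tlds = Counter(
    urlparse(url).netloc.rsplit(".", 1)[-1] for url in dataset["train"]["url"]
)
print(tlds.most_common(10))
```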

#### Who are the source language producers?

The source content represents natural web usage across:

- **Japanese web users**: Content creators, bloggers, businesses, news organizations
- **International web users**: Global content accessible to Japanese audiences
- **Mixed demographics**: Spanning individual users to large organizations

## Considerations for Using the Data

### Social Impact of Dataset

**Positive Impacts:**

- Enables Japanese NLP research and development
- Supports cross-cultural digital humanities research
- Facilitates web technology development and benchmarking
- Promotes understanding of Japanese digital culture

**Potential Concerns:**

- May contain biased content reflecting web demographics
- Temporal snapshot may not represent evolving web trends
- Domain concentration could skew research findings

### Discussion of Biases

*Figure: Content Size Distribution*

**Identified Biases:**

1. **Geographic Bias**: 50.9% of pages come from Japanese domains, which may not represent global web diversity
2. **Temporal Bias**: The single-day crawl (June 13, 2025) captures a specific moment in time
3. **Domain Concentration**: The top 10 domains account for 13.4% of the dataset, a comparatively low concentration
4. **Language Detection**: 15.9% of content requires language identification
5. **Content Type Skew**: Structured webpages (64.1%) are over-represented

**Mitigation Strategies:**

- Clearly document dataset composition and limitations
- Encourage diverse evaluation across content types
- Recommend supplementary datasets for global research
- Provide detailed analytics for informed usage decisions

### Other Known Limitations

- **Temporal Scope**: A single-session crawl may miss temporal variations
- **Robots.txt Compliance**: Limited to publicly accessible content
- **Dynamic Content**: JavaScript-rendered content may be incomplete
- **Scale vs. Depth**: Broad coverage may sacrifice deep domain-specific content

*Figure: URL Depth Distribution*

## Additional Information

### Dataset Curators

- **Primary Curator**: [Dataset Author Name]
- **Organization**: [Organization Name]
- **Technical Contact**: [Contact Email]

### Licensing Information

This dataset is released under the Creative Commons Attribution 4.0 International License (CC-BY-4.0). Users are free to:

- Share and redistribute the material
- Adapt, remix, transform, and build upon the material
- Use for any purpose, including commercial applications

**Attribution Required**: Please cite this dataset when using it in research or applications.

### Citation Information

```bibtex
@dataset{takaraspider2025,
  title={TakaraSpider: Large-Scale Japanese Web Crawl Dataset},
  author={[Author Names]},
  year={2025},
  publisher={Hugging Face},
  doi={[DOI if available]},
  url={https://huggingface.co/datasets/takarajordan/takaraspider}
}
```

### Contributions

Thanks to @takarajordan for creating and sharing this dataset with the research community.

## Technical Specifications

### Computational Requirements

- **Storage**: ~2.5 GB compressed, ~8 GB uncompressed
- **Memory**: 4 GB+ RAM recommended for full dataset loading
- **Processing**: Optimized for streaming with the 🤗 Datasets library

### Data Quality Metrics

| Metric                | Value  | Description                                       |
|-----------------------|--------|---------------------------------------------------|
| Duplicate URLs        | 0.0%   | No duplicate URLs detected in the sample          |
| Content Completeness  | 99%+   | HTML content available for virtually all records  |
| Metadata Completeness | 100%   | All required fields populated                     |
| Average Content Size  | 198 KB | Substantial content per page                      |
| Domain Diversity      | 0.205  | Ratio of unique domains to pages in the sample    |
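
These metrics can be reproduced on a sample. A minimal sketch; the sample size and seed below are illustrative assumptions, not the ones used for this card:

```python
from urllib.parse import urlparse

from datasets import load_dataset

dataset = load_dataset("takarajordan/takaraspider")

# Work on a fixed random sample, mirroring the sampled analytics in this card.
sample = dataset["train"].shuffle(seed=42).select(range(50_000))

urls = sample["url"]
domains = {urlparse(url).netloc for url in urls}

duplicate_rate = 1 - len(set(urls)) / len(urls)
domain_diversity = len(domains) / len(urls)

print(f"duplicate URLs: {duplicate_rate:.1%}")
print(f"domain diversity: {domain_diversity:.3f}")
```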

## Getting Started

### Quick Start

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("takarajordan/takaraspider")

# Or stream it for memory efficiency (returns an IterableDataset)
streamed = load_dataset("takarajordan/takaraspider", streaming=True)

# Sample for testing; select() is only available on the non-streaming
# dataset, so use streamed["train"].take(1000) when streaming.
sample = dataset["train"].select(range(1000))
```

### Example Usage

```python
from urllib.parse import urlparse

# Filter pages that declare Japanese in the <html> lang attribute
japanese_pages = dataset["train"].filter(
    lambda x: 'lang="ja"' in x["html"][:500].lower()
)

# Keep only pages with large HTML payloads (> 100 KB)
rich_content = dataset["train"].filter(
    lambda x: len(x["html"]) > 100_000
)

# Domain analysis: extract the host from every crawled URL
domains = [urlparse(url).netloc for url in dataset["train"]["url"]]
```
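
For the HTML-processing and text-extraction use cases, a minimal text extractor is sketched below. It assumes BeautifulSoup, which is not a dependency of this dataset and must be installed separately:

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def extract_text(html: str) -> str:
    """Strip scripts, styles, and tags; return the visible text."""
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()
    return " ".join(soup.get_text(separator=" ").split())

record = dataset["train"][0]
print(extract_text(record["html"])[:200])
```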

## Analytics and Visualizations

Complete analytics and visualizations are available in the `analytics_output/` directory:

- **Domain Distribution**: Top domains by page count
- **Geographic Analysis**: TLD-based geographic distribution
- **Content Analysis**: Size distribution and content types
- **Language Breakdown**: Detailed language detection results
- **URL Structure**: Path depth and navigation patterns

*This dataset card was generated using comprehensive analytics based on a 51,580-sample representative subset (20% of the full dataset). Last updated: June 18, 2025.*