---
license: cc-by-4.0
task_categories:
  - tabular-classification
  - tabular-regression
  - question-answering
  - text-generation
language:
  - en
tags:
  - rag
  - retrieval-augmented-generation
  - evaluation
  - hallucination
  - meta-modeling
  - risk-scoring
  - logs
  - telemetry
  - tabular-data
  - multi-table
  - machine-learning
  - open-dataset
  - synthetic
  - simulated
  - agent
  - code
pretty_name: RAG QA Logs & Corpus
size_categories:
  - 100K<n<1M
---

# 🧠 RAG QA Evaluation Logs & Corpus

**Multi-Table RAG Telemetry for Quality, Hallucinations, Latency, and Cost**

A multi-table dataset modeling a production-style RAG (Retrieval-Augmented Generation) system end-to-end:

- a document corpus
- a chunk-level index
- retrieval events per query
- QA evaluation runs with correctness, hallucination, latency, and cost signals
- scenario templates for QA use cases
- a data dictionary documenting every column

All records are fully synthetic but realistic system logs and corpus data, designed to look and behave like real RAG telemetry while remaining safe to share and experiment with.

The dataset is intended for RAG quality analysis, meta-modeling, hallucination risk scoring, and dashboard-style telemetry.


πŸ” Privacy & Synthetic Data

This dataset is fully synthetic.

- No real users, customers, patients, or organisations are represented.
- No personally identifiable information (PII) is included.
- All IDs, queries, documents, and logs were programmatically generated to mimic realistic RAG system behaviour while preserving privacy.

The design aims to balance realism (for meaningful analysis and modeling) with strong privacy guarantees, making it suitable for open research, teaching, demos, and public dashboards.


## 📘 Dataset Overview

| Field | Description |
|---|---|
| Files | rag_corpus_documents.csv, rag_corpus_chunks.csv, rag_qa_eval_runs.csv, rag_retrieval_events.csv, rag_qa_scenarios.csv, rag_qa_data_dictionary.csv |
| Tables | 6 (documents, chunks, QA runs, retrieval events, scenarios, data dictionary) |
| Total rows (approx.) | ~103K across all tables (103,273 rows in total) |
| Main targets | is_correct, hallucination_flag, faithfulness_label |
| Type | Multi-table tabular logs + short text fields |

Tables are linked by stable identifiers such as doc_id, chunk_id, run_id, example_id, and scenario_id, making joins explicit and reliable.


## 📂 Files

- rag_corpus_documents.csv – document-level corpus metadata
- rag_corpus_chunks.csv – chunk-level index and text content
- rag_qa_eval_runs.csv – QA runs with labels, metrics, and configurations
- rag_retrieval_events.csv – per-chunk retrieval telemetry
- rag_qa_scenarios.csv – scenario-level QA templates and use cases
- rag_qa_data_dictionary.csv – column-level documentation for all tables

All files are in CSV format, use snake_case column names, and are designed to be ML- and analytics-ready.

## 📊 Table Summary

Approximate size per table:

| Table | Rows | Columns | Granularity |
|---|---|---|---|
| rag_corpus_documents | 658 | 19 | One row per document in the RAG corpus |
| rag_corpus_chunks | 5,237 | 6 | One row per text chunk derived from a document |
| rag_qa_eval_runs | 3,824 | 46 | One row per QA evaluation example |
| rag_retrieval_events | 93,375 | 9 | One row per retrieved chunk for a given QA example |
| rag_qa_scenarios | 88 | 11 | One row per scenario-level QA template / use case |
| rag_qa_data_dictionary | 91 | 5 | One row per column definition across all tables |

## 🧱 Table Structure

### 1️⃣ Document Corpus – rag_corpus_documents.csv

High-level view of the RAG knowledge base.

Granularity: 1 row = 1 document

Key fields:

- doc_id – unique document identifier
- domain – e.g. support_faq, hr_policies, product_docs, developer_docs, policies, financial_reports, medical_guides, research_papers, customer_success, data_platform_docs, mlops_docs, marketing_analytics
- title – document title
- source_type – e.g. pdf_manual, spreadsheet, wiki_page
- language – currently en
- n_sections, n_tokens, n_chunks, avg_chunk_tokens – structural and size indicators
- created_at_utc, last_updated_at_utc – lifecycle timestamps
- is_active, contains_tables – operational flags
- pii_risk_level, security_tier – risk and access level
- owner_team, embedding_model, search_index, top_keywords – ownership and indexing metadata

Use this table to understand what kind of corpus the RAG system is built on and how corpus properties relate to downstream performance, for example:
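A minimal pandas sketch (assuming the CSV sits in the working directory and that is_active is stored as a boolean or 0/1 flag) that profiles corpus composition per domain using the columns listed above:

```python
import pandas as pd

docs = pd.read_csv("rag_corpus_documents.csv")

# Corpus composition per domain: document count, size, and share of active documents
corpus_profile = (
    docs.groupby("domain")
    .agg(
        n_docs=("doc_id", "count"),
        total_tokens=("n_tokens", "sum"),
        avg_chunk_tokens=("avg_chunk_tokens", "mean"),
        active_share=("is_active", "mean"),
    )
    .sort_values("n_docs", ascending=False)
)
print(corpus_profile)
```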

### 2️⃣ Chunk Corpus – rag_corpus_chunks.csv

What the retriever actually "sees".

Granularity: 1 row = 1 chunk

Key fields:

- chunk_id – unique chunk identifier
- doc_id – foreign key to rag_corpus_documents.doc_id
- domain – propagated from the parent document
- chunk_index – 0-based position of the chunk within the document
- estimated_tokens – approximate token length
- chunk_text – the text content used for retrieval and ranking

Use this table to rebuild retrieval candidates, inspect chunking strategies, and study how content structure affects retrieval.
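For example, a short sketch (assuming the CSV is in the working directory) that inspects chunking behaviour per document using the columns above:

```python
import pandas as pd

chunks = pd.read_csv("rag_corpus_chunks.csv")

# Per-document chunking profile: chunk count and token-length statistics
chunk_profile = chunks.groupby("doc_id").agg(
    n_chunks=("chunk_id", "count"),
    median_tokens=("estimated_tokens", "median"),
    max_tokens=("estimated_tokens", "max"),
)

# Sanity check: chunk_index should run 0..n_chunks-1 within each document
max_index = chunks.groupby("doc_id")["chunk_index"].max()
gaps = (max_index + 1) != chunk_profile["n_chunks"]
print("documents with non-contiguous chunk_index:", int(gaps.sum()))
```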


### 3️⃣ QA Evaluation Runs – rag_qa_eval_runs.csv

End-to-end evaluation records for question–answer runs, including labels and metrics.

Granularity: 1 row = 1 QA example (one query, one answer, one configuration)

Key fields:

Context & content

- example_id, run_id – unique identifiers
- domain, task_type, difficulty – scenario and question type
- scenario_id – link to rag_qa_scenarios
- query – user-style question text
- gold_answer – reference answer
- has_answer_in_corpus – whether the corpus actually contains sufficient evidence

Quality & hallucination signals

- is_correct – main binary correctness flag
- correctness_label – descriptive view of correctness (e.g. correct / partial / incorrect)
- faithfulness_label – e.g. faithful / unfaithful / unknown
- hallucination_flag – binary hallucination indicator
- user_feedback_label – simplified user-style feedback
- supervising_judge_label – synthetic "expert" judgement
- is_noanswer_probe – marks deliberately unanswerable queries

Retrieval metrics

- retrieval_strategy, chunking_strategy
- n_retrieved_chunks
- top1_score, mean_retrieved_score
- recall_at_5, recall_at_10, mrr_at_10
- has_relevant_in_top5, has_relevant_in_top10

Latency & resource usage

- latency_ms_retrieval, latency_ms_generation, total_latency_ms
- used_long_context_window, context_window_tokens

Configuration & cost

- embedding_model, reranker_model, generator_model
- temperature, top_p, max_new_tokens, stop_reason
- prompt_tokens, answer_tokens, total_cost_usd

Supervision & usage

- doc_ids_used, chunk_ids_used
- eval_mode, created_at_utc

This table is the main entry point for meta-modeling, risk scoring, and latency–cost–quality tradeoff analysis.
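As a minimal sketch of the latency–cost–quality view (assuming is_correct and hallucination_flag are stored as booleans or 0/1 flags), a per-strategy summary built only from columns listed above:

```python
import pandas as pd

qa_runs = pd.read_csv("rag_qa_eval_runs.csv")

# Quality vs. latency vs. cost per retrieval strategy
tradeoff = (
    qa_runs.groupby("retrieval_strategy")
    .agg(
        accuracy=("is_correct", "mean"),
        hallucination_rate=("hallucination_flag", "mean"),
        p50_latency_ms=("total_latency_ms", "median"),
        mean_cost_usd=("total_cost_usd", "mean"),
    )
    .round(4)
)
print(tradeoff)
```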


### 4️⃣ Retrieval Events – rag_retrieval_events.csv

Per-chunk retrieval telemetry for each QA example.

Granularity: 1 row = 1 retrieved chunk for an example

Key fields:

- run_id, example_id – link back to rag_qa_eval_runs
- chunk_id – link to rag_corpus_chunks
- rank – rank position (1 = top)
- retrieval_score – retriever score
- is_relevant – relevance label for this chunk
- domain, difficulty, retrieval_strategy – redundant context fields for easier analysis

Use this table to reconstruct retrieval lists, compute custom ranking metrics, and explore how ranking quality influences final answers.
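A minimal sketch (assuming is_relevant is a boolean or 0/1 flag) that recomputes one simple ranking metric, hit rate at k, directly from the raw events; the helper function is illustrative:

```python
import pandas as pd

events = pd.read_csv("rag_retrieval_events.csv")

def hit_rate_at_k(events: pd.DataFrame, k: int) -> float:
    """Share of examples with at least one relevant chunk in the top-k ranks."""
    top_k = events[events["rank"] <= k]
    hits = top_k.groupby("example_id")["is_relevant"].max()
    # Examples with no rows in the top-k simply count as misses
    n_examples = events["example_id"].nunique()
    return hits.sum() / n_examples

print("hit@5 :", round(hit_rate_at_k(events, 5), 3))
print("hit@10:", round(hit_rate_at_k(events, 10), 3))
```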


### 5️⃣ Scenarios – rag_qa_scenarios.csv

Scenario-level templates and use cases for QA runs.

Granularity: 1 row = 1 scenario

Key fields:

- scenario_id – links to rag_qa_eval_runs.scenario_id
- domain – scenario domain (support, HR, finance, medical, developer docs, etc.)
- primary_doc_id – anchor document for the scenario
- query, gold_answer – canonical scenario-level QA pair
- difficulty_level – e.g. easy / medium / hard
- scenario_type – e.g. factual QA, policy lookup, multi-hop reasoning
- use_case – short description of the business or product scenario
- has_answer_in_corpus – whether the scenario is designed to be answerable from the corpus
- n_eval_examples, is_used_in_eval – how many QA examples were generated per scenario and whether it appears in the evaluation runs

This table adds a narrative layer on top of the logs, making it easier to build dashboards, teaching materials, or explainability reports.
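For example, a small sketch (joining on scenario_id, per the field lists above; accuracy assumes is_correct is a boolean or 0/1 flag) that rolls evaluation results up to the scenario level for a dashboard or report:

```python
import pandas as pd

scenarios = pd.read_csv("rag_qa_scenarios.csv")
qa_runs = pd.read_csv("rag_qa_eval_runs.csv")

# Scenario-level quality summary
per_scenario = (
    qa_runs.groupby("scenario_id")
    .agg(n_examples=("example_id", "count"), accuracy=("is_correct", "mean"))
    .reset_index()
)

scenario_report = scenarios.merge(per_scenario, on="scenario_id", how="left")
print(scenario_report[["scenario_id", "use_case", "difficulty_level", "accuracy"]].head())
```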


### 6️⃣ Data Dictionary – rag_qa_data_dictionary.csv

Column-level documentation across all tables.

Granularity: 1 row = 1 column definition

Key fields:

- table_name – name of the table the column belongs to
- column_name – column name in snake_case
- dtype – high-level type (int, float, bool, category, datetime, text)
- description – human-readable explanation
- allowed_values – expected values or ranges where applicable

Use this file as a single source of truth when exploring or building models on top of the dataset.
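A small sketch that treats the dictionary as a schema check, verifying that every column of one table is documented (table and column names taken from the lists above):

```python
import pandas as pd

dictionary = pd.read_csv("rag_qa_data_dictionary.csv")
qa_runs = pd.read_csv("rag_qa_eval_runs.csv")

# Columns documented for rag_qa_eval_runs according to the dictionary
documented = set(
    dictionary.loc[dictionary["table_name"] == "rag_qa_eval_runs", "column_name"]
)
actual = set(qa_runs.columns)

print("undocumented columns:", sorted(actual - documented))
print("documented but missing:", sorted(documented - actual))
```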


## 🎯 Targets & Tasks

Typical learning targets:

- is_correct – classification: did the system answer correctly?
- hallucination_flag – classification: is the answer hallucinated?
- faithfulness_label – multi-class view of answer faithfulness

Paired with rich system signals (retrieval metrics, latency, cost, configuration), these enable:

- Meta-models that estimate answer quality before showing it to users
- Risk scores driving block / escalate / rerun decisions (see the sketch after this list)
- Policy design for when to switch retrieval strategy or model configuration
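A minimal sketch of such a decision policy, assuming a per-example hallucination-risk probability produced by a meta-model (as sketched under Example Usage); the function name and thresholds are illustrative, not part of the dataset:

```python
def route_answer(risk: float, block_at: float = 0.8, escalate_at: float = 0.5) -> str:
    """Map a predicted hallucination-risk probability to an action.

    Thresholds are illustrative and would normally be tuned on held-out eval runs.
    """
    if risk >= block_at:
        return "block"      # withhold the answer entirely
    if risk >= escalate_at:
        return "escalate"   # route to human review or rerun with a stronger configuration
    return "serve"          # show the answer to the user


print([route_answer(r) for r in (0.15, 0.60, 0.92)])
```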

## 🚀 Example Usage

Using plain pandas with the raw files:

```python
import pandas as pd

base_path = "path/to/data"  # or the local dataset path

docs = pd.read_csv(f"{base_path}/rag_corpus_documents.csv")
chunks = pd.read_csv(f"{base_path}/rag_corpus_chunks.csv")
qa_runs = pd.read_csv(f"{base_path}/rag_qa_eval_runs.csv")
retrieval_events = pd.read_csv(f"{base_path}/rag_retrieval_events.csv")
scenarios = pd.read_csv(f"{base_path}/rag_qa_scenarios.csv")
dictionary = pd.read_csv(f"{base_path}/rag_qa_data_dictionary.csv")
```

Example join: attach the top-ranked retrieved chunk to each QA example:

```python
top = (
    retrieval_events.query("rank == 1")
    .merge(chunks[["chunk_id", "chunk_text"]], on="chunk_id", how="left")
)

qa_with_top_chunk = qa_runs.merge(
    top[["run_id", "chunk_text"]],
    on="run_id",
    how="left",
    suffixes=("", "_top_chunk"),
)
```

You can then train a simple meta-model on qa_with_top_chunk to predict is_correct or hallucination_flag from retrieval and configuration features, for example:
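A minimal scikit-learn sketch building on qa_with_top_chunk from the snippet above (assuming scikit-learn is installed and that is_correct is a boolean or 0/1 flag; the feature list is illustrative, not prescriptive):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# A few numeric retrieval and latency signals from rag_qa_eval_runs
features = [
    "top1_score", "mean_retrieved_score", "recall_at_5",
    "mrr_at_10", "n_retrieved_chunks", "total_latency_ms",
]
X = qa_with_top_chunk[features].fillna(0.0)
y = qa_with_top_chunk["is_correct"].astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
print("ROC AUC:", round(roc_auc_score(y_test, probs), 3))
```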

## 🔬 Research & Applications

- RAG meta-modeling
  - Predict correctness or hallucination risk from retrieval and latency metrics
  - Build guardrails that decide when to block, escalate, or rerun answers
- Retrieval & ranking analysis
  - Compare retrieval strategies across domains and difficulty levels
  - Explore how rank, score, and recall relate to final correctness
- Latency & cost trade-offs
  - Study how total_latency_ms, context_window_tokens, and total_cost_usd interact with answer quality
  - Prototype "fast vs. careful" modes for RAG systems
- Teaching & dashboards
  - Demonstrate a realistic RAG pipeline without exposing real logs
  - Build dashboards that visualise quality, latency, cost, and configuration over time

## 🧭 Ethical Considerations

- All records are fully synthetic system logs and corpus content, not collected from real users or organisations.
- No personally identifiable information (PII) is included.
- The dataset is intended for research, teaching, benchmarking, and prototyping, not for validating real-world systems in high-stakes domains (e.g. clinical, legal, financial decisions).

## 📚 Citation

When using this dataset in research, demos, or teaching material, please cite the dataset URL on Hugging Face and reference it as:

"RAG QA Evaluation Logs & Corpus – Synthetic Multi-Table Benchmark by Tarek Masryo"


## 📜 License

CC BY 4.0 (Attribution Required)
You are free to use, share, and modify this dataset, provided that appropriate credit is given.