---
license: cc-by-4.0
task_categories:
- tabular-classification
- tabular-regression
- question-answering
- text-generation
language:
- en
tags:
- rag
- retrieval-augmented-generation
- evaluation
- hallucination
- meta-modeling
- risk-scoring
- logs
- telemetry
- tabular-data
- multi-table
- machine-learning
- open-dataset
- synthetic
- simulated
- agent
- code
pretty_name: RAG QA Logs & Corpus
size_categories:
- 100K<n<1M
---
# RAG QA Evaluation Logs & Corpus

**Multi-Table RAG Telemetry for Quality, Hallucinations, Latency, and Cost**
A multi-table dataset modeling a production-style RAG (Retrieval-Augmented Generation) system end-to-end:
- a document corpus
- a chunk-level index
- retrieval events per query
- QA evaluation runs with correctness, hallucination, latency, and cost signals
- scenario templates for QA use cases
- a data dictionary documenting every column
All records are fully synthetic yet realistic system logs and corpus data, designed to look and behave like real RAG telemetry while remaining safe to share and experiment with.
The dataset is intended for RAG quality analysis, meta-modeling, hallucination risk scoring, and dashboard-style telemetry.
## Privacy & Synthetic Data
This dataset is fully synthetic.
- No real users, customers, patients, or organisations are represented.
- No personally identifiable information (PII) is included.
- All IDs, queries, documents, and logs were programmatically generated to mimic realistic RAG system behaviour while preserving privacy.
The design aims to balance realism (for meaningful analysis and modeling) with strong privacy guarantees, making it suitable for open research, teaching, demos, and public dashboards.
## Dataset Overview
| Field | Description |
|---|---|
| Files | `rag_corpus_documents.csv`, `rag_corpus_chunks.csv`, `rag_qa_eval_runs.csv`, `rag_retrieval_events.csv`, `rag_qa_scenarios.csv`, `rag_qa_data_dictionary.csv` |
| Tables | 6 (documents, chunks, QA runs, retrieval events, scenarios, data dictionary) |
| Total rows | 103,273 across all tables (~103K) |
| Main targets | `is_correct`, `hallucination_flag`, `faithfulness_label` |
| Type | Multi-table tabular logs + short text fields |
Tables are linked by stable identifiers such as `doc_id`, `chunk_id`, `run_id`, `example_id`, and `scenario_id`, making joins explicit and reliable.
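A minimal sanity-check sketch of those joins, assuming the six CSV files sit in a local `data/` directory (the path is an assumption, not part of the dataset):

```python
import pandas as pd

# Assumed local path; adjust to wherever the CSV files live.
base = "data"

docs = pd.read_csv(f"{base}/rag_corpus_documents.csv")
chunks = pd.read_csv(f"{base}/rag_corpus_chunks.csv")
qa_runs = pd.read_csv(f"{base}/rag_qa_eval_runs.csv")
events = pd.read_csv(f"{base}/rag_retrieval_events.csv")

# Every chunk should point at an existing document ...
assert chunks["doc_id"].isin(docs["doc_id"]).all()

# ... and every retrieval event at an existing chunk and QA example.
assert events["chunk_id"].isin(chunks["chunk_id"]).all()
assert events["example_id"].isin(qa_runs["example_id"]).all()
```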
## Files
- `rag_corpus_documents.csv` – document-level corpus metadata
- `rag_corpus_chunks.csv` – chunk-level index and text content
- `rag_qa_eval_runs.csv` – QA runs with labels, metrics, and configurations
- `rag_retrieval_events.csv` – per-chunk retrieval telemetry
- `rag_qa_scenarios.csv` – scenario-level QA templates and use cases
- `rag_qa_data_dictionary.csv` – column-level documentation for all tables
All files are in CSV format, use snake_case column names, and are designed to be ML- and analytics-ready.
## Table Summary
Approximate size per table:
| Table | Rows | Columns | Granularity |
|--------------------------|--------|---------|-------------------------------------------------------|
| rag_corpus_documents | 658 | 19 | One row per document in the RAG corpus |
| rag_corpus_chunks | 5,237 | 6 | One row per text chunk derived from a document |
| rag_qa_eval_runs | 3,824 | 46 | One row per QA evaluation example |
| rag_retrieval_events | 93,375 | 9 | One row per retrieved chunk for a given QA example |
| rag_qa_scenarios | 88 | 11 | One row per scenario-level QA template / use case |
| rag_qa_data_dictionary | 91 | 5 | One row per column definition across all tables |
## Table Structure
### 1. Document Corpus – `rag_corpus_documents.csv`
High-level view of the RAG knowledge base.

Granularity: 1 row = 1 document

Key fields:
- `doc_id` – unique document identifier
- `domain` – e.g. `support_faq`, `hr_policies`, `product_docs`, `developer_docs`, `policies`, `financial_reports`, `medical_guides`, `research_papers`, `customer_success`, `data_platform_docs`, `mlops_docs`, `marketing_analytics`
- `title` – document title
- `source_type` – e.g. `pdf_manual`, `spreadsheet`, `wiki_page`
- `language` – currently `en`
- `n_sections`, `n_tokens`, `n_chunks`, `avg_chunk_tokens` – structural and size indicators
- `created_at_utc`, `last_updated_at_utc` – lifecycle timestamps
- `is_active`, `contains_tables` – operational flags
- `pii_risk_level`, `security_tier` – risk and access levels
- `owner_team`, `embedding_model`, `search_index`, `top_keywords` – ownership and indexing metadata
Use this table to understand what kind of corpus the RAG system is built on and how corpus properties relate to downstream performance.
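For example, a minimal sketch (file paths assumed, and `is_correct` / `contains_tables` assumed to be 0/1 or boolean flags) that profiles the corpus by domain and sets per-domain accuracy from the QA runs next to it:

```python
import pandas as pd

docs = pd.read_csv("rag_corpus_documents.csv")   # path assumed
qa_runs = pd.read_csv("rag_qa_eval_runs.csv")    # path assumed

# Corpus profile: how many documents per domain, and how large they are.
corpus_profile = docs.groupby("domain").agg(
    n_docs=("doc_id", "count"),
    avg_tokens=("n_tokens", "mean"),
    share_with_tables=("contains_tables", "mean"),
)

# Downstream quality: per-domain accuracy from the QA evaluation runs.
domain_accuracy = qa_runs.groupby("domain")["is_correct"].mean().rename("accuracy")

print(corpus_profile.join(domain_accuracy))
```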
### 2. Chunk Corpus – `rag_corpus_chunks.csv`
What the retriever actually "sees".
Granularity: 1 row = 1 chunk
Key fields:
- `chunk_id` – unique chunk identifier
- `doc_id` – foreign key to `rag_corpus_documents.doc_id`
- `domain` – propagated from the parent document
- `chunk_index` – 0-based position of the chunk within the document
- `estimated_tokens` – approximate token length
- `chunk_text` – the text content used for retrieval and ranking
Use this table to rebuild retrieval candidates, inspect chunking strategies, and study how content structure affects retrieval.
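For instance, a short sketch (file paths assumed) that recomputes per-document chunk counts and token sizes and compares them with the parent document's `n_chunks` and `avg_chunk_tokens`:

```python
import pandas as pd

docs = pd.read_csv("rag_corpus_documents.csv")   # path assumed
chunks = pd.read_csv("rag_corpus_chunks.csv")    # path assumed

# Per-document chunking stats recomputed from the chunk table.
per_doc = chunks.groupby("doc_id").agg(
    observed_chunks=("chunk_id", "count"),
    observed_avg_tokens=("estimated_tokens", "mean"),
)

# Compare against the document-level metadata.
check = per_doc.join(docs.set_index("doc_id")[["n_chunks", "avg_chunk_tokens"]])
print(check.head())
```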
### 3. QA Evaluation Runs – `rag_qa_eval_runs.csv`
End-to-end evaluation records for question–answer runs, including labels and metrics.

Granularity: 1 row = 1 QA example (one query, one answer, one configuration)

Key fields:

**Context & content**

- `example_id`, `run_id` – unique identifiers
- `domain`, `task_type`, `difficulty` – scenario and question type
- `scenario_id` – link to `rag_qa_scenarios`
- `query` – user-style question text
- `gold_answer` – reference answer
- `has_answer_in_corpus` – whether the corpus actually contains sufficient evidence

**Quality & hallucination signals**

- `is_correct` – main binary correctness flag
- `correctness_label` – descriptive view of correctness (e.g. correct / partial / incorrect)
- `faithfulness_label` – e.g. faithful / unfaithful / unknown
- `hallucination_flag` – binary hallucination indicator
- `user_feedback_label` – simplified user-style feedback
- `supervising_judge_label` – synthetic "expert" judgement
- `is_noanswer_probe` – marks deliberately unanswerable queries

**Retrieval metrics**

- `retrieval_strategy`, `chunking_strategy`
- `n_retrieved_chunks`
- `top1_score`, `mean_retrieved_score`
- `recall_at_5`, `recall_at_10`, `mrr_at_10`
- `has_relevant_in_top5`, `has_relevant_in_top10`

**Latency & resource usage**

- `latency_ms_retrieval`, `latency_ms_generation`, `total_latency_ms`
- `used_long_context_window`, `context_window_tokens`

**Configuration & cost**

- `embedding_model`, `reranker_model`, `generator_model`
- `temperature`, `top_p`, `max_new_tokens`, `stop_reason`
- `prompt_tokens`, `answer_tokens`, `total_cost_usd`

**Supervision & usage**

- `doc_ids_used`, `chunk_ids_used`
- `eval_mode`, `created_at_utc`
This table is the main entry point for meta-modeling, risk scoring, and latency–cost–quality trade-off analysis.
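As a starting point, a hedged sketch of such a trade-off comparison across retrieval strategies (file path assumed; `is_correct` and `hallucination_flag` assumed to be 0/1 flags):

```python
import pandas as pd

qa_runs = pd.read_csv("rag_qa_eval_runs.csv")  # path assumed

# Quality vs. latency vs. cost, broken down by retrieval strategy.
tradeoff = qa_runs.groupby("retrieval_strategy").agg(
    accuracy=("is_correct", "mean"),
    hallucination_rate=("hallucination_flag", "mean"),
    median_latency_ms=("total_latency_ms", "median"),
    mean_cost_usd=("total_cost_usd", "mean"),
    n_examples=("example_id", "count"),
)
print(tradeoff.sort_values("accuracy", ascending=False))
```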
### 4. Retrieval Events – `rag_retrieval_events.csv`
Per-chunk retrieval telemetry for each QA example.
Granularity: 1 row = 1 retrieved chunk for an example
Key fields:
- `run_id`, `example_id` – link back to `rag_qa_eval_runs`
- `chunk_id` – link to `rag_corpus_chunks`
- `rank` – rank position (1 = top)
- `retrieval_score` – retriever score
- `is_relevant` – relevance label for this chunk
- `domain`, `difficulty`, `retrieval_strategy` – redundant context fields for easier analysis
Use this table to reconstruct retrieval lists, compute custom ranking metrics, and explore how ranking quality influences final answers.
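For example, a minimal sketch (file path assumed) that recomputes a reciprocal-rank metric and a binary hit@5 per QA example; these simple definitions may differ from the dataset's own `recall_at_5` / `mrr_at_10` columns:

```python
import pandas as pd

events = pd.read_csv("rag_retrieval_events.csv")  # path assumed

def ranking_metrics(group: pd.DataFrame) -> pd.Series:
    """Compute simple ranking metrics for one QA example."""
    relevant = group[group["is_relevant"] == 1]
    first_rank = relevant["rank"].min()  # NaN if nothing relevant was retrieved
    return pd.Series({
        # Reciprocal rank of the first relevant chunk, cut off at rank 10.
        "mrr_at_10": 1.0 / first_rank if pd.notna(first_rank) and first_rank <= 10 else 0.0,
        # Binary hit@5: did any relevant chunk land in the top 5?
        "hit_at_5": float((relevant["rank"] <= 5).any()),
    })

per_example = events.groupby("example_id").apply(ranking_metrics)
print(per_example.describe())
```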
### 5. Scenarios – `rag_qa_scenarios.csv`
Scenario-level templates and use cases for QA runs.
Granularity: 1 row = 1 scenario
Key fields:
- `scenario_id` – links to `rag_qa_eval_runs.scenario_id`
- `domain` – scenario domain (support, HR, finance, medical, developer docs, etc.)
- `primary_doc_id` – anchor document for the scenario
- `query`, `gold_answer` – canonical scenario-level QA pair
- `difficulty_level` – e.g. easy / medium / hard
- `scenario_type` – e.g. factual QA, policy lookup, multi-hop reasoning
- `use_case` – short description of the business or product scenario
- `has_answer_in_corpus` – whether the scenario is designed to be answerable from the corpus
- `n_eval_examples`, `is_used_in_eval` – how many QA examples were generated per scenario and whether it appears in the evaluation runs
This table adds a narrative layer on top of the logs, making it easier to build dashboards, teaching materials, or explainability reports.
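For example, a small sketch (file paths assumed) that joins scenarios to the evaluation runs and summarises accuracy per scenario type and difficulty level:

```python
import pandas as pd

scenarios = pd.read_csv("rag_qa_scenarios.csv")  # path assumed
qa_runs = pd.read_csv("rag_qa_eval_runs.csv")    # path assumed

joined = qa_runs.merge(
    scenarios[["scenario_id", "scenario_type", "difficulty_level", "use_case"]],
    on="scenario_id",
    how="left",
)

# Accuracy and example counts per scenario type and difficulty.
summary = joined.groupby(["scenario_type", "difficulty_level"])["is_correct"].agg(["mean", "count"])
print(summary)
```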
### 6. Data Dictionary – `rag_qa_data_dictionary.csv`
Column-level documentation across all tables.

Granularity: 1 row = 1 column definition

Key fields:
- `table_name` – name of the table the column belongs to
- `column_name` – column name in `snake_case`
- `dtype` – high-level type (int, float, bool, category, datetime, text)
- `description` – human-readable explanation
- `allowed_values` – expected values or ranges where applicable
Use this file as a single source of truth when exploring or building models on top of the dataset.
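For instance, a quick lookup sketch (file path assumed; the exact `table_name` value used in the filter is an assumption) that pulls the documented columns for the QA runs table:

```python
import pandas as pd

dictionary = pd.read_csv("rag_qa_data_dictionary.csv")  # path assumed

# All documented columns of the QA runs table, with types and allowed values.
# The filter value below assumes table names are stored without the .csv suffix.
qa_columns = dictionary[dictionary["table_name"] == "rag_qa_eval_runs"]
print(qa_columns[["column_name", "dtype", "description", "allowed_values"]])
```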
## Targets & Tasks
Typical learning targets:
- `is_correct` – classification: did the system answer correctly?
- `hallucination_flag` – classification: is the answer hallucinated?
- `faithfulness_label` – multi-class view of answer faithfulness
Paired with rich system signals (retrieval metrics, latency, cost, configuration), these enable:
- Meta-models that estimate answer quality before showing it to users
- Risk scores driving block / escalate / rerun decisions (a minimal routing sketch follows this list)
- Policy design for when to switch retrieval strategy or model configuration
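A minimal sketch of such a routing policy, assuming a meta-model already outputs a risk probability per answer; the function name and thresholds are illustrative, not part of the dataset:

```python
def route_answer(risk: float, block_threshold: float = 0.8, escalate_threshold: float = 0.5) -> str:
    """Map a predicted hallucination/incorrectness risk to an action.

    Thresholds are illustrative placeholders, not values shipped with the dataset.
    """
    if risk >= block_threshold:
        return "block"      # withhold the answer entirely
    if risk >= escalate_threshold:
        return "escalate"   # send to human review or rerun with a stronger config
    return "serve"          # show the answer to the user

print(route_answer(0.9), route_answer(0.6), route_answer(0.1))
```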
## Example Usage
Using plain pandas with the raw files:
```python
import pandas as pd

base_path = "path/to/data"  # or the local dataset path

docs = pd.read_csv(f"{base_path}/rag_corpus_documents.csv")
chunks = pd.read_csv(f"{base_path}/rag_corpus_chunks.csv")
qa_runs = pd.read_csv(f"{base_path}/rag_qa_eval_runs.csv")
retrieval_events = pd.read_csv(f"{base_path}/rag_retrieval_events.csv")
scenarios = pd.read_csv(f"{base_path}/rag_qa_scenarios.csv")
dictionary = pd.read_csv(f"{base_path}/rag_qa_data_dictionary.csv")
```
Example join: attach the top-ranked retrieved chunk to each QA example:
```python
top = (
    retrieval_events.query("rank == 1")
    .merge(chunks[["chunk_id", "chunk_text"]], on="chunk_id", how="left")
)

qa_with_top_chunk = qa_runs.merge(
    top[["run_id", "chunk_text"]],
    on="run_id",
    how="left",
    suffixes=("", "_top_chunk"),
)
```
You can then train a simple meta-model on `qa_with_top_chunk` to predict `is_correct` or `hallucination_flag` from retrieval and configuration features.
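One possible sketch of such a meta-model, assuming scikit-learn is installed (an assumption, not a dataset requirement), reusing the `qa_with_top_chunk` frame built above, and treating the feature subset and model choice as illustrative:

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Numeric retrieval / latency / cost signals available alongside each answer.
feature_cols = [
    "top1_score", "mean_retrieved_score", "recall_at_5", "recall_at_10",
    "mrr_at_10", "n_retrieved_chunks", "total_latency_ms", "total_cost_usd",
]

X = qa_with_top_chunk[feature_cols].fillna(0.0)
y = qa_with_top_chunk["is_correct"].astype(int)  # assumes labels are 0/1 or boolean

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)

print("ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```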
## Research & Applications

### RAG meta-modeling
- Predict correctness or hallucination risk from retrieval and latency metrics
- Build guardrails that decide when to block, escalate, or rerun answers
### Retrieval & ranking analysis
- Compare retrieval strategies across domains and difficulty levels
- Explore how rank, score, and recall relate to final correctness
### Latency & cost trade-offs
- Study how `total_latency_ms`, `context_window_tokens`, and `total_cost_usd` interact with answer quality
- Prototype "fast vs careful" modes for RAG systems
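As one possible first pass, a sketch (file path assumed) that buckets runs into latency quartiles and compares quality and cost per bucket:

```python
import pandas as pd

qa_runs = pd.read_csv("rag_qa_eval_runs.csv")  # path assumed

# Bucket runs into latency quartiles, from fastest to slowest.
qa_runs["latency_bucket"] = pd.qcut(
    qa_runs["total_latency_ms"], q=4, labels=["fastest", "fast", "slow", "slowest"]
)

# Compare accuracy, cost, and context size across the buckets.
print(
    qa_runs.groupby("latency_bucket", observed=True).agg(
        accuracy=("is_correct", "mean"),
        mean_cost_usd=("total_cost_usd", "mean"),
        mean_context_tokens=("context_window_tokens", "mean"),
    )
)
```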
### Teaching & dashboards
- Demonstrate a realistic RAG pipeline without exposing real logs
- Build dashboards that visualise quality, latency, cost, and configuration over time
## Ethical Considerations
- All records are fully synthetic system logs and corpus content, not collected from real users or organisations.
- No personally identifiable information (PII) is included.
- The dataset is intended for research, teaching, benchmarking, and prototyping, not for validating real-world systems in high-stakes domains (e.g. clinical, legal, financial decisions).
## Citation
When using this dataset in research, demos, or teaching material, please cite the dataset URL on Hugging Face and:
"RAG QA Evaluation Logs & Corpus – Synthetic Multi-Table Benchmark by Tarek Masryo"
## License
CC BY 4.0 (Attribution Required)
You are free to use, share, and modify this dataset, provided that appropriate credit is given.