---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: labels
    dtype:
      class_label:
        names:
          '0': charts
          '1': diagram
          '2': geometry
          '3': medical
          '4': ocr
          '5': random
          '6': table
  splits:
  - name: train
    num_bytes: 160813723527.0
    num_examples: 700768
  - name: test
    num_bytes: 8506367769.25
    num_examples: 36886
  download_size: 169224452489
  dataset_size: 169320091296.25
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

# Vision Filtering Dataset

A high-quality, labeled image dataset for benchmarking computer vision models that filter noisy image data, which is especially relevant when curating pretraining datasets for vision-language models (VLMs).

---

## Overview

This dataset contains **7 image categories** curated from online and public datasets:

- `charts`: Graphs, bar charts, line charts, pie charts
- `diagram`: Schematics, flowcharts, technical illustrations
- `geometry`: Geometric shapes, figures, and math visuals
- `medical`: Annotated scans, X-rays, and medical diagrams
- `ocr`: Images containing printed text or handwriting
- `random`: Miscellaneous, non-relevant/noisy images
- `table`: Images of tables and tabular data

The dataset is intended for training and evaluating classification models that **automatically filter relevant images** from large-scale scraped datasets.
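
The filtering step itself can be sketched with a hypothetical trained classifier (assumed here to be a `predict(image) -> class_name` callable; the names are illustrative, not part of this dataset's API):

```python
# Sketch of dataset filtering with a hypothetical classifier.
# `predict` is assumed to map an image to one of the class names above.

DROP = {"random"}  # classes treated as noise and discarded

def filter_relevant(images, predict, drop=frozenset(DROP)):
    """Keep only images whose predicted class is not in `drop`."""
    return [img for img in images if predict(img) not in drop]
```

In practice `predict` would wrap a model forward pass; extending `DROP` lets you discard additional classes.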

---

## Datasets by Category (with Hugging Face Links)

| Category | Dataset |
|----------|---------|
| **Charts** | [nimapourjafar/mm_chart2text](https://huggingface.co/datasets/nimapourjafar/mm_chart2text) <br> [nimapourjafar/mm_chartqa](https://huggingface.co/datasets/nimapourjafar/mm_chartqa) <br> [nimapourjafar/mm_dvqa](https://huggingface.co/datasets/nimapourjafar/mm_dvqa) <br> [nimapourjafar/mm_figureqa](https://huggingface.co/datasets/nimapourjafar/mm_figureqa) <br> [nimapourjafar/mm_plotqa](https://huggingface.co/datasets/nimapourjafar/mm_plotqa) <br> [nimapourjafar/mm_vistext](https://huggingface.co/datasets/nimapourjafar/mm_vistext) |
| **Diagram** | [lmms-lab/ai2d](https://huggingface.co/datasets/lmms-lab/ai2d) <br> [nimapourjafar/mm_tqa](https://huggingface.co/datasets/nimapourjafar/mm_tqa) <br> [shreyanshu09/Block_Diagram](https://huggingface.co/datasets/shreyanshu09/Block_Diagram) <br> [yyyyifan/TQA](https://huggingface.co/datasets/yyyyifan/TQA) |
| **Geometry** | [5CD-AI/Viet-Geometry-VQA](https://huggingface.co/datasets/5CD-AI/Viet-Geometry-VQA) <br> [AI4Math/MathVerse](https://huggingface.co/datasets/AI4Math/MathVerse) <br> [HuggingFaceM4/datikz](https://huggingface.co/datasets/HuggingFaceM4/datikz) <br> [MathLLMs/MathVision](https://huggingface.co/datasets/MathLLMs/MathVision) <br> [nimapourjafar/mm_geomverse](https://huggingface.co/datasets/nimapourjafar/mm_geomverse) <br> [nimapourjafar/mm_intergps](https://huggingface.co/datasets/nimapourjafar/mm_intergps) <br> [PeijieWang/MV-MATH](https://huggingface.co/datasets/PeijieWang/MV-MATH) <br> [THU-KEG/MM_Math](https://huggingface.co/datasets/THU-KEG/MM_Math) <br> [VIM-Bench/VIM-MathVista](https://huggingface.co/datasets/VIM-Bench/VIM-MathVista) <br> [We-Math/We-Math](https://huggingface.co/datasets/We-Math/We-Math) |
| **Medical** | [foreverbeliever/OmniMedVQA](https://huggingface.co/datasets/foreverbeliever/OmniMedVQA) <br> [rbojia/medical-vqa](https://huggingface.co/datasets/rbojia/medical-vqa) |
| **OCR** | [5CD-AI/Viet-Geometry-VQA](https://huggingface.co/datasets/5CD-AI/Viet-Geometry-VQA) <br> [mathieu1256/FATURA2-invoices](https://huggingface.co/datasets/mathieu1256/FATURA2-invoices) <br> [nimapourjafar/mm_docvqa](https://huggingface.co/datasets/nimapourjafar/mm_docvqa) <br> [nimapourjafar/mm_iam](https://huggingface.co/datasets/nimapourjafar/mm_iam) <br> [nimapourjafar/mm_ocrvqa](https://huggingface.co/datasets/nimapourjafar/mm_ocrvqa) <br> [nimapourjafar/mm_rendered_text](https://huggingface.co/datasets/nimapourjafar/mm_rendered_text) <br> [nimapourjafar/mm_visualmrc](https://huggingface.co/datasets/nimapourjafar/mm_visualmrc) <br> [nimapourjafar/mm_websight](https://huggingface.co/datasets/nimapourjafar/mm_websight) <br> [vikp/doclaynet_math](https://huggingface.co/datasets/vikp/doclaynet_math) <br> [JayRay5/Image_Infographvqa](https://huggingface.co/datasets/JayRay5/Image_Infographvqa) <br> [nimapourjafar/mm_infographic_vqa](https://huggingface.co/datasets/nimapourjafar/mm_infographic_vqa) |
| **Table** | [nimapourjafar/mm_finqa](https://huggingface.co/datasets/nimapourjafar/mm_finqa) <br> [nimapourjafar/mm_multihierrt](https://huggingface.co/datasets/nimapourjafar/mm_multihierrt) <br> [nimapourjafar/mm_robust_sqa](https://huggingface.co/datasets/nimapourjafar/mm_robust_sqa) <br> [nimapourjafar/mm_robust_wikisql](https://huggingface.co/datasets/nimapourjafar/mm_robust_wikisql) <br> [nimapourjafar/mm_robust_wtq](https://huggingface.co/datasets/nimapourjafar/mm_robust_wtq) <br> [nimapourjafar/mm_tabmwp](https://huggingface.co/datasets/nimapourjafar/mm_tabmwp) <br> [nimapourjafar/mm_tat_qa](https://huggingface.co/datasets/nimapourjafar/mm_tat_qa) |
| **Random** | [COCO Dataset](https://cocodataset.org/) |

---

## Dataset Distribution

The dataset is roughly balanced across the image categories, with slightly more samples in the `ocr` and `random` classes.

<div style="display: flex; justify-content: space-between; align-items: center; flex-wrap: wrap; gap: 20px;">

  <img src="./dataset_distribution_pie_chart.png" alt="Pie chart: percentage distribution" style="width: 35%; height: auto;" />
  <img src="./file_distribution_chart.png" alt="Bar chart: file count per class" style="width: 52%; height: auto;" />

</div>

## Dataset Structure

The dataset is organized in the standard image-classification folder layout:

```
vision-filtering-dataset/
├── train/
│   ├── charts/
│   ├── diagram/
│   ├── geometry/
│   ├── medical/
│   ├── ocr/
│   ├── random/
│   └── table/
└── test/
    ├── charts/
    ├── diagram/
    ├── geometry/
    ├── medical/
    ├── ocr/
    ├── random/
    └── table/
```

Each subfolder contains `.jpg` or `.png` image files.
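
If you are working from a local copy in this layout, per-class file counts can be checked with the standard library alone (a sketch; `root` is wherever the dataset was extracted, and the function name is illustrative):

```python
from collections import Counter
from pathlib import Path

def count_images(root, split="train", exts=(".jpg", ".png")):
    """Count image files per class subfolder under <root>/<split>/."""
    counts = Counter()
    for path in Path(root, split).rglob("*"):
        if path.suffix.lower() in exts:
            counts[path.parent.name] += 1  # class = parent folder name
    return counts
```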

---

## Use Cases

- Vision model training (CNNs, ViTs, and other Transformers)
- Image filtering for web-scraped datasets
- Preprocessing for multimodal or OCR-based tasks
- Benchmarking classification models on mixed visual domains

---

## Loading with 🤗 Datasets

```python
from datasets import load_dataset

dataset = load_dataset("AbdulazizAlshamsi/VLM_Dataset_classification")
train = dataset["train"]
test = dataset["test"]
```

Each sample contains:

- `image`: the image data (a PIL image object)
- `labels`: the integer class label (`charts`, `diagram`, `geometry`, `medical`, `ocr`, `random`, or `table`)
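
Decoding the integer `labels` back to class names follows the `dataset_info` schema in the header; once the dataset is loaded, the equivalent mapping is available as `dataset["train"].features["labels"].int2str`. A standalone version:

```python
# Class-id mapping copied from the dataset_info schema above.
ID2LABEL = {
    0: "charts", 1: "diagram", 2: "geometry",
    3: "medical", 4: "ocr", 5: "random", 6: "table",
}

def decode_label(label_id):
    """Map an integer from the `labels` feature to its class name."""
    return ID2LABEL[label_id]
```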

---

## Citation

If you use this dataset, please cite it as follows:

```bibtex
@misc{visionfiltering2025,
  title={Vision Filtering Dataset},
  author={Abdulaziz Alshamsi},
  year={2025},
  howpublished={\url{https://huggingface.co/datasets/AbdulazizAlshamsi/VLM_Dataset_classification}},
  note={Image classification dataset for visual filtering}
}
```

---

## Author

Abdulaziz Alshamsi  
AI Researcher, The University of Manchester  
abdulaziz.alshamsi@postgrad.manchester.ac.uk  
LinkedIn

---

## Contributions

Feel free to open issues or submit pull requests to improve the dataset!