Dataset Card for Kenyan Animal Behavior Recognition (KABR) Mini-Scene Raw Videos

Dataset Summary

This dataset is a collection of more than 10 hours of drone video of Kenyan wildlife, capturing the behaviors of giraffes, plains zebras, and Grevy's zebras. Animals can be located using the provided bounding box coordinates, and behavior annotations can be recovered by linking those bounding boxes back to the mini-scene annotations provided in our ML-ready, behavior-recognition-focused subset of this data: KABR.

Data collection was conducted at the Mpala Research Centre in Kenya by flying drones over the animals, providing high-quality video footage of the animals' natural behaviors. The footage was captured at 4K to 5.4K resolution and recorded at 29.97 frames per second using a DJI Air 2S drone.
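
As a quick check on these recording parameters, the frame rate and resolution of any downloaded clip can be read with OpenCV. This is a minimal sketch; the file path is a hypothetical example following the naming scheme described under Dataset Structure.

import cv2

# Hypothetical path following the DD_MM_YY-DJI_0NNN layout described below;
# trimmed clips use the -trimmed.mp4 suffix instead.
video_path = "data/11_01_23-DJI_0488/11_01_23-DJI_0488.mp4"

cap = cv2.VideoCapture(video_path)
if not cap.isOpened():
    raise FileNotFoundError(f"Could not open {video_path}")

fps = cap.get(cv2.CAP_PROP_FPS)                  # expected ~29.97
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))   # e.g. 3840 for the 4K clips
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
cap.release()

print(f"{width}x{height} @ {fps:.2f} fps, {n_frames} frames")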

KABR is the processed, ML-ready version of this dataset (with mini-scenes). It includes eight classes: seven types of animal behavior plus an additional category for occluded instances. The annotation process involved a team of 10 people, overseen by an expert zoologist. Each behavior was labeled based on its distinctive features, using a standardized set of criteria to ensure consistency and accuracy across the annotations.

Note that these behaviors are not explicitly labeled in the data provided in this dataset, but they can be recovered through the process described below in the Data Instances section. The raw footage gives researchers access to the complete, unedited visual context that was used to generate the behavioral annotations in the original KABR mini-scene dataset. This enables studies of animal detection, tracking, and environmental context analysis, as well as the development of preprocessing pipelines for wildlife video analysis.

Supported Tasks and Leaderboards

This dataset could be used for training or evaluating animal detection models or as input for behavior analysis on videos with a custom pipeline. It supports various computer vision tasks including:

  • Animal Detection: Detecting giraffes, plains zebras, and Grevy's zebras in natural settings
  • Animal Tracking: Following individual animals across video sequences
  • Behavior Analysis: Developing methods for temporal behavior recognition
  • Environmental Context Analysis: Studying animal behavior in relation to environmental factors
  • Video Preprocessing: Developing pipelines for extracting regions of interest from drone footage
  • Multi-object Tracking: Tracking multiple animals simultaneously in wide-field drone videos

No specific leaderboards are maintained for this raw footage, since it serves as source material for developing and testing preprocessing methods rather than as a benchmark dataset itself.

Languages

English

Dataset Structure

The KABR full video dataset is organized as follows:

/dataset/
    data/
        DD_MM_YY-DJI_0NNN/
            DD_MM_YY-DJI_0NNN.mp4  (or DD_MM_YY-DJI_0NNN-trimmed.mp4)
            actions/
                MS#.xml
                ...
            metadata/
                DJI_0NNN.jpg
                DJI_0NNN_metadata.json
                DJI_0NNN_tracks.xml
                DJI_0NNN.SRT
        DD_MM_YY-DJI_0NNN/
            DD_MM_YY-DJI_0NNN.mp4  (or DD_MM_YY-DJI_0NNN-trimmed.mp4)
            actions/
                MS#.xml
                ...
            metadata/
                DJI_0NNN.jpg
                DJI_0NNN_metadata.json
                DJI_0NNN_tracks.xml
                DJI_0NNN.SRT
        ...

Note: Directory names use the format DD_MM_YY-DJI_0NNN where DD_MM_YY represents the collection date (e.g., 11_01_23 for January 11, 2023) and DJI_0NNN is the video identifier. Some directories may include session information (e.g., 16_01_23_session_1-DJI_0001).
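
Under these naming conventions, a local copy can be indexed with the standard library. The sketch below is a minimal example, assuming the layout shown above; the root path is a placeholder, and either the original or the -trimmed video file may be present in a given directory.

from pathlib import Path

# Placeholder for the root of your local copy of the dataset (assumption).
root = Path("dataset/data")

index = {}
for clip_dir in sorted(p for p in root.iterdir() if p.is_dir()):
    videos = list(clip_dir.glob("*.mp4"))  # original or -trimmed
    index[clip_dir.name] = {
        "video": videos[0] if videos else None,
        "mini_scene_actions": sorted((clip_dir / "actions").glob("*.xml")),
        "tracks": next((clip_dir / "metadata").glob("*_tracks.xml"), None),
        "meta_json": next((clip_dir / "metadata").glob("*_metadata.json"), None),
        "telemetry": next((clip_dir / "metadata").glob("*.SRT"), None),
        "gantt": next((clip_dir / "metadata").glob("*.jpg"), None),
    }

for name, files in list(index.items())[:3]:
    print(name, files["video"], f"{len(files['mini_scene_actions'])} mini-scenes")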

Ecological Metadata

For Darwin Core compliant ecological details including session information, environmental conditions, and sampling event data, please see the session_events.csv file in the KABR Behavior Telemetry dataset.

Data Instances

Naming: Within the data folder, each DD_MM_YY-DJI_0NNN directory contains:

  • DD_MM_YY-DJI_0NNN.mp4 or DD_MM_YY-DJI_0NNN-trimmed.mp4: Video collected by the drone (original or trimmed to remove people/takeoff/landing).
  • actions - Folder containing:
    • MS#.xml: Contains behavior annotation information for the mini-scene identified by the number MS#.
  • metadata - Folder containing:
    • DJI_0NNN.jpg: Color-coded Gantt chart indicating the timeline for mini-scenes derived from the video.
    • DJI_0NNN_metadata.json: Contains binary data relating the main video to its derived mini-scenes.
    • DJI_0NNN_tracks.xml: Contains bounding box coordinates for each mini-scene within the main video, with references to the frame ID relative to the main video.
    • DJI_0NNN.SRT: Subtitle file with drone telemetry data.

Examples:

  • DJI_0022_metadata.json:
{
    "original": "../data/recording_NNN/DJI_0022.mp4",
...
  • DJI_0022_tracks.xml:
<?xml version='1.0' encoding='UTF-8'?>
<annotations>
  <version>1.1</version>
  <meta>
    <task>
      <size>8720</size>
      <original_size>
        <width>3840</width>
        <height>2160</height>
      </original_size>
      <source>DJI_0022</source>
    </task>
  </meta>
  <track id="1" label="Zebra" source="manual">
    <box frame="1" outside="0" occluded="0" keyframe="1" xtl="1651.00" ytl="1114.00" xbr="1681.00" ybr="1132.00" z_order="0"/>
    <box frame="2" outside="0" occluded="0" keyframe="1" xtl="1650.00" ytl="1114.00" xbr="1681.00" ybr="1133.00" z_order="0"/>
    <box frame="3" outside="0" occluded="0" keyframe="1" xtl="1650.00" ytl="1111.00" xbr="1681.00" ybr="1133.00" z_order="0"/>
    <box frame="4" outside="0" occluded="0" keyframe="1" xtl="1650.00" ytl="1114.00" xbr="1681.00" ybr="1135.00" z_order="0"/>
    <box frame="5" outside="0" occluded="0" keyframe="1" xtl="1650.00" ytl="1114.00" xbr="1680.00" ybr="1135.00" z_order="0"/>
    <box frame="6" outside="0" occluded="0" keyframe="1" xtl="1650.00" ytl="1114.00" xbr="1680.00" ybr="1136.00" z_order="0"/>
    <box frame="7" outside="0" occluded="0" keyframe="1" xtl="1650.00" ytl="1115.00" xbr="1680.00" ybr="1136.00" z_order="0"/>
...

Each bounding box track can then be linked to the corresponding actions/MS#.xml file (the track id is MS#, the mini-scene number), which provides the behavior annotation:

<points frame="64" keyframe="0" outside="0" occluded="0" points="161.15,145.68" z_order="0">
    <attribute name="Behavior">Walk</attribute>

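This linkage can be scripted with the standard library. The sketch below is a minimal example, assuming the XML layouts shown above (track/box elements in the tracks file, per-frame points elements with a Behavior attribute in each actions/MS#.xml) and that each actions file name matches its track id; paths are hypothetical. Note that frames in tracks.xml are indexed relative to the main video, while frames in the actions files are relative to the mini-scene, so an offset may be needed to align them exactly.

import xml.etree.ElementTree as ET
from pathlib import Path

# Hypothetical clip directory following the layout described above.
clip_dir = Path("data/11_01_23-DJI_0488")
tracks_file = next((clip_dir / "metadata").glob("*_tracks.xml"))

# Bounding boxes per mini-scene (track id), keyed by frame of the main video.
boxes = {}
for track in ET.parse(tracks_file).getroot().iter("track"):
    boxes[track.get("id")] = {
        int(b.get("frame")): tuple(float(b.get(k)) for k in ("xtl", "ytl", "xbr", "ybr"))
        for b in track.iter("box")
    }

# Behavior labels per mini-scene, keyed by frame of that mini-scene.
behaviors = {}
for action_file in (clip_dir / "actions").glob("*.xml"):
    labels = {}
    for pt in ET.parse(action_file).getroot().iter("points"):
        attr = pt.find("attribute[@name='Behavior']")
        if attr is not None:
            labels[int(pt.get("frame"))] = attr.text
    behaviors[action_file.stem] = labels  # file stem = mini-scene / track id (assumption)

# Example: first few frames of mini-scene "1" with box coordinates and behavior label.
for frame, box in sorted(boxes.get("1", {}).items())[:5]:
    print(frame, box, behaviors.get("1", {}).get(frame, "n/a"))
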
Note: The dataset consists of a total of 1,139,893 annotated frames captured from drone videos. There are 488,638 annotated frames of Grevy's zebras, 492,507 annotated frames of plains zebras, and 158,748 annotated frames of giraffes. Occasionally other animals and vehicles appear in the videos, but they are not identified.

Data Fields

There are 14,764 unique behavioral sequences in the dataset, labeled with eight distinct behaviors:

  • Walk
  • Trot
  • Run: animal is moving at a canter or gallop
  • Graze: animal is eating grass or other vegetation
  • Browse: animal is eating trees or bushes
  • Head Up: animal is looking around or observing its surroundings
  • Auto-Groom: animal is grooming itself (licking, scratching, or rubbing)
  • Occluded: animal is not fully visible

Dataset Creation

Curation Rationale

This KABR full video dataset was created to provide a comprehensive resource for studying animal behavior in natural habitats using drone technology. It contains the complete drone footage used for the KABR mini-scene dataset (Imageomics/KABR), thus enabling:

  • Method Development: Researchers can develop and test their own animal detection, tracking, and behavior analysis pipelines from scratch.
  • Context Preservation: The full-frame footage maintains environmental context that may be important for understanding animal behavior.
  • Preprocessing Research: Enables development of improved methods for extracting regions of interest from drone footage.
  • Data Augmentation: Raw footage can be processed in multiple ways to create different training scenarios.
  • Reproducibility: Provides the source material for validating and reproducing the KABR mini-scene creation process (kabr-tools pipeline) used to generate the original KABR dataset; a minimal cropping sketch follows this list.
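
The sketch below illustrates how a single mini-scene could be re-cropped from the raw video using the bounding boxes in the tracks file. It is a simplified example, not the official kabr-tools pipeline, which may handle padding, smoothing, resizing, and frame offsets differently; the paths and the chosen track id are hypothetical.

import cv2
import xml.etree.ElementTree as ET
from pathlib import Path

# Hypothetical paths following the dataset layout described above.
clip_dir = Path("data/11_01_23-DJI_0488")
video_path = next(clip_dir.glob("*.mp4"))
tracks_file = next((clip_dir / "metadata").glob("*_tracks.xml"))

# Collect per-frame boxes for one mini-scene (track id "1"), skipping frames
# where the animal has left the field of view (outside="1").
track = next(t for t in ET.parse(tracks_file).getroot().iter("track") if t.get("id") == "1")
boxes = {
    int(b.get("frame")): tuple(int(float(b.get(k))) for k in ("xtl", "ytl", "xbr", "ybr"))
    for b in track.iter("box") if b.get("outside") == "0"
}

cap = cv2.VideoCapture(str(video_path))
out_dir = Path("mini_scene_1_crops")
out_dir.mkdir(exist_ok=True)

# Note: whether the annotation frame numbers are 0- or 1-based relative to the
# decoder's frame index is an assumption here and may need adjustment.
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx in boxes:
        xtl, ytl, xbr, ybr = boxes[frame_idx]
        cv2.imwrite(str(out_dir / f"{frame_idx:06d}.jpg"), frame[ytl:ybr, xtl:xbr])
    frame_idx += 1
cap.release()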

Additionally, the inclusion of flight metadata through SRT files allows researchers to incorporate spatial, temporal, and technical recording parameters into their analyses.
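
The SRT files are plain-text subtitle files, so the telemetry can be pulled out with a few lines of parsing. The sketch below assumes the common DJI convention of bracketed key: value pairs inside each subtitle block (for example [latitude: ...]); the exact fields present may vary by drone model and firmware, and the path is hypothetical.

import re
from pathlib import Path

# Hypothetical path; field names depend on the drone model and firmware (assumption).
srt_path = Path("data/11_01_23-DJI_0488/metadata/DJI_0488.SRT")

pair_re = re.compile(r"\[(\w+)\s*:\s*([^\]]+)\]")
records = []
for block in srt_path.read_text(encoding="utf-8", errors="ignore").split("\n\n"):
    pairs = {key: value.strip() for key, value in pair_re.findall(block)}
    if pairs:
        records.append(pairs)

print(f"{len(records)} telemetry records")
if records:
    print(records[0])  # e.g. latitude, longitude, altitude fields, if present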

Source Data

Initial Data Collection and Normalization

Data was collected from 6 January 2023 through 21 January 2023 at the Mpala Research Centre in Kenya under a NACOSTI research license. We used DJI Air and Mavic 2S drones equipped with cameras to record 4K and 5.4K resolution videos from varying altitudes and distances of 10 to 50 meters from the animals (distance was determined by circumstances and safety regulations).

Relationship to initial KABR dataset release

This raw video collection is the source material from which the processed KABR mini-scene behavioral recognition dataset was created. Researchers using both datasets can:

  • Validate the mini-scene extraction process
  • Develop alternative preprocessing approaches
  • Study the relationship between behavioral annotations and full environmental context
  • Create new datasets with different extraction parameters

Related KABR data and models can be found in the Imageomics KABR Collection. Additional information about the KABR project is available on the KABR project page.

Annotations

See our paper, KABR: In-Situ Dataset for Kenyan Animal Behavior Recognition from Drone Videos, for full details of the annotation process.

Video Trimming and Processing Details

Some videos were trimmed to remove footage of identifiable people, vehicle appearances, or drone takeoff and landing sequences. People were only close enough for identification at takeoff and landing. The table below details which videos were trimmed and why:

Note: January 16, 2023 data represents a single collection session split into two flights (flight_1 and flight_2).

| Date     | Session/Flight | Video ID | Trim Type  | Trim Point | Reason                     |
|----------|----------------|----------|------------|------------|----------------------------|
| 11_01_23 | session_1      | DJI_0488 | End trim   | 7:48       | Remove people              |
| 11_01_23 | session_2      | DJI_0980 | End trim   | 4:00       | Remove people              |
| 12_01_23 | session_1      | DJI_0989 | End trim   | 1:00       | Remove people              |
| 12_01_23 | session_2      | DJI_0994 | End trim   | 3:30       | Remove people              |
| 12_01_23 | session_3      | DJI_0997 | Start trim | 0:15       | Remove people              |
| 12_01_23 | session_3      | DJI_0998 | End trim   | 2:30       | Remove people              |
| 12_01_23 | session_4      | DJI_0003 | End trim   | 1:00       | Remove people and landing  |
| 12_01_23 | session_5      | DJI_0008 | End trim   | 3:00       | Remove landing and people  |
| 13_01_23 | session_1      | DJI_0009 | End trim   | 4:18       | Remove landing             |
| 13_01_23 | session_2      | DJI_0011 | Start trim | 0:50       | Remove people and takeoff  |
| 13_01_23 | session_3      | DJI_0014 | Start trim | 0:24       | Remove people              |
| 13_01_23 | session_4      | DJI_0017 | Start trim | 0:12       | Remove people              |
| 13_01_23 | session_5      | DJI_0018 | Start trim | 0:22       | Remove takeoff             |
| 13_01_23 | session_5      | DJI_0021 | Start trim | 0:27       | Remove launch              |
| 13_01_23 | session_6      | DJI_0027 | Start trim | 0:27       | Remove takeoff             |
| 13_01_23 | session_6      | DJI_0029 | Start trim | 0:30       | Remove takeoff             |
| 13_01_23 | session_7      | DJI_0031 | Start trim | 0:27       | Remove takeoff             |
| 13_01_23 | session_8      | DJI_0034 | Start trim | 0:27       | Remove takeoff             |
| 13_01_23 | session_8      | DJI_0039 | Start trim | 0:39       | Remove takeoff and people  |
| 16_01_23 | flight_1       | DJI_0001 | Start trim | 0:12       | Remove people              |
| 16_01_23 | flight_2       | DJI_0004 | End trim   | Last 0:10  | Remove landing             |
| 17_01_23 | session_1      | DJI_0005 | Start trim | 0:40       | Remove people and takeoff  |
| 17_01_23 | session_2      | DJI_0008 | Start trim | 0:28       | Remove people and takeoff  |

Excluded Videos:

  • 12_01_23-DJI_0993: Deleted (no wildlife data or behavior annotations)
  • 13_01_23-DJI_0010: Deleted (only contained drone landing footage)
  • 16_01_23-DJI_0005: Deleted (no useful data)
  • 12_01_23-DJI_0008: This directory does not include 5.xml or 8.xml; these tracks were generated but were shorter than 3 seconds, so they were excluded from analysis.

Additional Notes:

  • 11_01_23-DJI_0979: Field vehicle appears briefly at 2:18 but not close enough for individual identification

Personal and Sensitive Information

Personally identifiable information (PII) has been removed from the dataset. Although exact locations of endangered species are included in this data, their safety is ensured by their location within the preserve, and the footage contributes to conservation research efforts. This location release has therefore been approved by our partners at the Mpala Research Centre.

Considerations for Using the Data

Intended Use Cases

This raw video dataset is intended for:

  • Computer vision research in wildlife monitoring
  • Development of animal detection and tracking algorithms
  • Behavioral analysis method development
  • Drone-based wildlife survey technique improvement
  • Educational purposes in wildlife research and computer vision

Bias, Risks, and Limitations

Content-based

  • Limited to three species: giraffes, plains zebras, and Grevy's zebras
  • All footage is from a single location (Mpala Research Centre, Kenya)
  • Limited temporal range (16 days in January 2023)
  • Some videos have been trimmed, potentially affecting continuity (trims removed only takeoff, landing, and identifiable-people footage)

Technical

  • Large file sizes may require significant storage and bandwidth
  • High resolution requires substantial computational resources for processing
  • Lighting and weather conditions vary across footage
  • Natural occlusions and camera movement are present in drone footage

Other Known Limitations

This dataset is not ML-ready. It contains the full videos (with bounding box coordinates) that were processed to create the KABR dataset. See KABR Behavior Telemetry Dataset for ecological metadata associated with this dataset and AI-ready behavior and detection annotations.

This data exhibits a long-tailed distribution due to the natural variation in frequency of the observed behaviors.

Additional Information

Authors

  • Jenna Kline (The Ohio State University) - ORCID: 0009-0006-7301-5774
  • Maksim Kholiavchenko (Rensselaer Polytechnic Institute) - ORCID: 0000-0001-6757-1957
  • Michelle Ramirez (The Ohio State University)
  • Samuel Stevens (The Ohio State University) - ORCID: 0009-0000-9493-7766
  • Alec Sheets (The Ohio State University) - ORCID: 0000-0002-3737-1484
  • Reshma Ramesh Babu (The Ohio State University) - ORCID: 0000-0002-2517-5347
  • Namrata Banerji (The Ohio State University) - ORCID: 0000-0001-6813-0010
  • Elizabeth Campolongo (The Ohio State University) - ORCID: 0000-0003-0846-2413
  • Matthew Thompson (The Ohio State University) - ORCID: 0000-0003-0583-8585
  • Nina Van Tiel (École polytechnique fédérale de Lausanne) - ORCID: 0000-0001-6393-5629
  • Jackson Miliko (Mpala Research Centre)
  • Isla Duporge (Princeton University) - ORCID: 0000-0001-8463-2459
  • Neil Rosser (University of Florida) - ORCID: 0000-0001-7796-2548
  • Eduardo Bessa (Universidade de Brasília) - ORCID: 0000-0003-0606-5860
  • Charles Stewart (Rensselaer Polytechnic Institute)
  • Tanya Berger-Wolf (The Ohio State University) - ORCID: 0000-0001-7610-1412
  • Daniel Rubenstein (Princeton University) - ORCID: 0000-0001-9049-5219

Licensing Information

This dataset is dedicated to the public domain for the benefit of scientific pursuits. We ask that you cite this dataset and the original KABR paper using the citations below if you make use of it in your research.

Citation Information

Dataset

@misc{kabr-mini-scene-videos,
  author = {
    Jenna Kline and Maksim Kholiavchenko and Michelle Ramirez and Samuel Stevens and Alec Sheets and Reshma Ramesh Babu and
    Namrata Banerji and Elizabeth Campolongo and Matthew Thompson and Nina Van Tiel and Jackson Miliko and Isla Duporge and Neil Rosser and
    Eduardo Bessa and Charles Stewart and Tanya Berger-Wolf and Daniel Rubenstein
  },
  title = {Kenyan Animal Behavior Recognition (KABR) Mini-Scene Raw Videos},
  year = {2026},
  url = {https://huggingface.co/datasets/imageomics/KABR-mini-scene-raw-videos},
  doi = {},
  publisher = {Hugging Face},
}

Paper

@inproceedings{kholiavchenko2024kabr,
  title={KABR: In-Situ Dataset for Kenyan Animal Behavior Recognition from Drone Videos},
  author={Kholiavchenko, Maksim and Kline, Jenna and Ramirez, Michelle and Stevens, Sam and Sheets, Alec and Babu, Reshma and Banerji, Namrata and Campolongo, Elizabeth and Thompson, Matthew and Van Tiel, Nina and Miliko, Jackson and Bessa, Eduardo and Duporge, Isla and Berger-Wolf, Tanya and Rubenstein, Daniel and Stewart, Charles},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={31-40},
  year={2024},
  doi={10.1109/WACVW60836.2024.00011}
}

Contributions

This work was supported by the Imageomics Institute, which is funded by the US National Science Foundation's Harnessing the Data Revolution (HDR) program under Award #2118240 (Imageomics: A New Frontier of Biological Information Powered by Knowledge-Guided Machine Learning). Additional support was also provided by the AI Institute for Intelligent Cyberinfrastructure with Computational Learning in the Environment (ICICLE), which is funded by the US National Science Foundation under Award #2112606. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

The data was gathered at the Mpala Research Centre in Kenya, in accordance with Research License No. NACOSTI/P/22/18214. The data collection protocol adhered strictly to the guidelines set forth by the Institutional Animal Care and Use Committee under permission No. IACUC 1835F.

Dataset Card Authors

Jenna Kline and Elizabeth Campolongo

Dataset Card Contact

Please open a discussion in the Community tab with any questions regarding this dataset.
