harpreetsahota committed on
Commit 05fb0b0 · verified · 1 Parent(s): 47a1a0e

Update README.md

Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -60,8 +60,6 @@ dataset_summary: '
 
  # Dataset Card for Qualcomm Interactive Video Dataset
 
- ![image/png](qivd.gif)
-
 
  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 2900 samples.
 
@@ -89,6 +87,10 @@ session = fo.launch_app(dataset)
 
  ### Dataset Description
 
+ ![image/png](qivd.gif)
+
+
+
  QIVD (Qualcomm Interactive Video Dataset) is a comprehensive video question-answering dataset designed for evaluating multimodal AI models on their ability to understand and reason about video content. The dataset contains 2,900 video samples with associated questions, answers, and temporal annotations. Each sample includes a question about the video content, a detailed answer, a short answer, and a timestamp indicating when the answer can be found in the video.
 
  The dataset covers 13 distinct categories of video understanding tasks, including object referencing, action detection, object attributes, action counting, object counting, and more specialized tasks like audio-visual reasoning and OCR in videos.
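The README excerpt above describes per-sample fields (question, detailed answer, short answer, timestamp) and shows `session = fo.launch_app(dataset)` in the hunk context. The following is a minimal sketch of how such a FiftyOne dataset might be loaded from the Hugging Face Hub and browsed; the repo id is a placeholder, not confirmed by this commit, and the field names in the comment are only what the description implies.

```python
# Minimal sketch, assuming the dataset is published on the Hugging Face Hub
# and loadable via FiftyOne's Hub integration. Replace the placeholder repo id.
import fiftyone as fo
import fiftyone.utils.huggingface as fouh

# Placeholder repo id -- substitute the actual Hub path for this dataset
dataset = fouh.load_from_hub("<hub-username>/<qivd-repo>")

# Per the description, each sample should carry a question, a detailed answer,
# a short answer, and a timestamp for when the answer appears in the video
sample = dataset.first()
print(sample)

# Browse the 2,900 video samples in the FiftyOne App, matching the
# `session = fo.launch_app(dataset)` line shown in the README diff
session = fo.launch_app(dataset)
```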