Sp1786 committed on
Commit 61e0894 · verified · 0 Parent(s)

Duplicate from Sp1786/multiclass-sentiment-analysis-dataset


Co-authored-by: Shahriar Parvez <[email protected]>
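The commit message above only records that this repo was duplicated from Sp1786/multiclass-sentiment-analysis-dataset. For context, a duplication like this can be reproduced with `huggingface_hub`; the sketch below is an assumption-laden illustration, not the command the author actually ran (`your-username` is a placeholder target):

```python
from huggingface_hub import HfApi, snapshot_download

# Sketch: copy a dataset repo by downloading a snapshot and re-uploading it.
# Assumes you are logged in (huggingface-cli login) and can create the target repo.
api = HfApi()
local_dir = snapshot_download(
    repo_id="Sp1786/multiclass-sentiment-analysis-dataset",
    repo_type="dataset",
)
api.create_repo("your-username/multiclass-sentiment-analysis-dataset",
                repo_type="dataset", exist_ok=True)
api.upload_folder(
    folder_path=local_dir,
    repo_id="your-username/multiclass-sentiment-analysis-dataset",
    repo_type="dataset",
)
```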

Files changed (6)
  1. .gitattributes +55 -0
  2. README.md +110 -0
  3. create_dataset.py +26 -0
  4. test_df.csv +0 -0
  5. train_df.csv +0 -0
  6. val_df.csv +0 -0
.gitattributes ADDED
@@ -0,0 +1,55 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
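Every rule above routes matching paths through Git LFS. A quick way to sanity-check which patterns a checkout will store as LFS pointers is to parse the file; a minimal sketch, assuming it runs from the repo root:

```python
# List the patterns that the .gitattributes above routes through Git LFS,
# skipping the comment lines ("# Audio files - ...", etc.).
with open(".gitattributes") as f:
    lfs_patterns = [
        line.split()[0]
        for line in f
        if "filter=lfs" in line and not line.lstrip().startswith("#")
    ]
print(lfs_patterns)  # ['*.7z', '*.arrow', ..., '*.webp']
```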
README.md ADDED
@@ -0,0 +1,110 @@
+ ---
+ license: apache-2.0
+ task_categories:
+ - text-classification
+ - translation
+ language:
+ - en
+ tags:
+ - code
+ pretty_name: multiclass-sentiment-analysis-dataset
+ size_categories:
+ - 10K<n<100K
+ ---
+ # Dataset Card for multiclass-sentiment-analysis-dataset
+
+ ## Dataset Description
+
+ - **Homepage:**
+ - **Repository:**
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ [More Information Needed]
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ [More Information Needed]
+
+ ### Data Fields
+
+ [More Information Needed]
+
+ ### Data Splits
+
+ [More Information Needed]
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ [More Information Needed]
+
+ ### Contributions
+
+ [More Information Needed]
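The card's structure sections are still template placeholders, so the most concrete orientation comes from the split files committed alongside it. A hedged loading sketch, using the CSV filenames from this commit and the column names visible in create_dataset.py below:

```python
from datasets import load_dataset

# Sketch: build train/validation/test splits from the three CSVs in this commit.
ds = load_dataset(
    "csv",
    data_files={
        "train": "train_df.csv",
        "validation": "val_df.csv",
        "test": "test_df.csv",
    },
)
print(ds)
print(ds["train"][0])  # expected keys: id, text, label, sentiment
```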
create_dataset.py ADDED
@@ -0,0 +1,26 @@
+ from datasets import DatasetDict, load_dataset
+ import csv
+ import json
+
+ def main():
+     # Convert each CSV split to JSON Lines with the id/text/label/sentiment columns.
+     for split in ["train", "val", "test"]:
+         with open(f"{split}_df.csv", newline="") as fin, open(f"{split}.jsonl", "w") as fout:
+             for row in csv.DictReader(fin):
+                 fout.write(json.dumps({"id": row["id"], "text": row["text"], "label": row["label"], "sentiment": row["sentiment"]}) + "\n")
+
+ """
+ # Alternative via datasets: the csv loader exposes one "train" split per file, so split="train" everywhere.
+ train_dset = load_dataset("csv", data_files="train_df.csv", split="train")
+ val_dset = load_dataset("csv", data_files="val_df.csv", split="train")
+ test_dset = load_dataset("csv", data_files="test_df.csv", split="train")
+ raw_dset = DatasetDict()
+ raw_dset["train"] = train_dset
+ raw_dset["val"] = val_dset
+ raw_dset["test"] = test_dset
+ for split, dset in raw_dset.items():
+     dset.to_json(f"{split}.jsonl")
+ """
+
+ if __name__ == "__main__":
+     main()
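After the script runs, each `<split>.jsonl` should contain one JSON object per CSV row. A small verification sketch, assuming create_dataset.py has already been executed in the same directory:

```python
import json

# Read back the first record of each generated JSONL file and check its keys.
for split in ["train", "val", "test"]:
    with open(f"{split}.jsonl") as f:
        first = json.loads(f.readline())
    assert sorted(first) == ["id", "label", "sentiment", "text"], split
    print(split, "ok:", first["sentiment"])
```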
test_df.csv ADDED
The diff for this file is too large to render. See raw diff
 
train_df.csv ADDED
The diff for this file is too large to render. See raw diff
 
val_df.csv ADDED
The diff for this file is too large to render. See raw diff