ray0303 committed
Commit c6e56e0 · verified · Parent(s): 2b67b58

Update README.md

Files changed (1):
1. README.md +85 -84

README.md CHANGED
---
library_name: transformers
language:
- tk
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_17_0
metrics:
- wer
model-index:
- name: Whisper Small TK - Abdyrahman Gudratullayew
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 17.0
      type: mozilla-foundation/common_voice_17_0
      config: tk
      split: test
      args: 'config: tk, split: test'
    metrics:
    - name: Wer
      type: wer
      value: 57.933673469387756
pipeline_tag: automatic-speech-recognition
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Whisper Small TK - Abdyrahman Gudratullayew

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3114
- Wer: 57.9337

## Model description

More information needed

## Intended uses & limitations

More information needed
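
Pending fuller documentation, a minimal inference sketch with the Transformers `pipeline` API is shown below. The model id is a placeholder, not a confirmed repository path; substitute the actual one.

```python
# Minimal inference sketch for this fine-tuned Whisper checkpoint.
# "ray0303/whisper-small-tk" is a guessed/placeholder model id, not the
# confirmed repository path; replace it with the real one.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ray0303/whisper-small-tk",  # placeholder id
)

# Whisper expects 16 kHz audio; the pipeline resamples files it loads.
result = asr("sample.wav")
print(result["text"])
```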

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the sketch after this list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (ADAMW_TORCH) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
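
A sketch of `Seq2SeqTrainingArguments` mirroring the values above; `output_dir` and any setting not listed are placeholders, not the values actually used for this run.

```python
# Training-arguments sketch matching the hyperparameters listed above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-tk",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",              # AdamW; betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,                        # "Native AMP" mixed precision
)
```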

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Wer     |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.0083        | 14.0845 | 1000 | 1.1117          | 60.3571 |
| 0.0003        | 28.1690 | 2000 | 1.2099          | 57.7041 |
| 0.0002        | 42.2535 | 3000 | 1.2640          | 58.0102 |
| 0.0001        | 56.3380 | 4000 | 1.2973          | 58.1378 |
| 0.0001        | 70.4225 | 5000 | 1.3114          | 57.9337 |
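
The Wer column is word error rate in percent. A sketch of how such a score is computed with the `evaluate` library; the example strings below are made up, not actual Common Voice samples.

```python
# Word error rate between reference and predicted transcripts.
import evaluate

wer_metric = evaluate.load("wer")

references = ["salam nähili ýagdaýlar"]        # made-up reference transcript
predictions = ["salam nähili ýagdaýlar gowy"]  # made-up model output

# compute() returns a fraction; the table reports it scaled to percent.
wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.4f}")
```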

### Framework versions

- Transformers 4.48.3
- Pytorch 2.6.0+cu118
- Datasets 3.2.0
- Tokenizers 0.21.0
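
When reproducing this run, the pins above can be sanity-checked at runtime; a minimal sketch:

```python
# Confirm the installed versions match the framework versions listed above.
import datasets
import tokenizers
import torch
import transformers

print(transformers.__version__)  # expected: 4.48.3
print(torch.__version__)         # expected: 2.6.0+cu118
print(datasets.__version__)      # expected: 3.2.0
print(tokenizers.__version__)    # expected: 0.21.0
```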