TiberiuCristianLeon committed on
Commit e4b37f2 · verified · 1 Parent(s): 90c0000

Upload 2 files

Files changed (2):
  1. GradioREADME.md +87 -0
  2. Gradioapp.py +735 -0
GradioREADME.md ADDED
@@ -0,0 +1,87 @@
---
title: Multilingual Machine Translation
emoji: 👁
colorFrom: red
colorTo: blue
sdk: gradio
sdk_version: 6.0.2
app_file: app.py
pinned: true
license: gpl-3.0
short_description: Gradio 6 Machine Translation with MCP
task: translation
pipeline_tag: translation
tags:
- translation
- machine translation
- translate
- multilingual
- MCP
- polyglot
models:
- Helsinki-NLP
- QUICKMT
- Argos
- Google
- HPLT
- HPLT-OPUS
- Helsinki-NLP/opus-mt-tc-bible-big-mul-mul
- Helsinki-NLP/opus-mt-tc-bible-big-mul-deu_eng_nld
- Helsinki-NLP/opus-mt-tc-bible-big-mul-deu_eng_fra_por_spa
- Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-mul
- Helsinki-NLP/opus-mt-tc-bible-big-roa-deu_eng_fra_por_spa
- Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-roa
- Helsinki-NLP/opus-mt-tc-bible-big-roa-en
- facebook/nllb-200-distilled-600M
- facebook/nllb-200-distilled-1.3B
- facebook/nllb-200-1.3B
- facebook/nllb-200-3.3B
- facebook/mbart-large-50-many-to-many-mmt
- facebook/mbart-large-50-one-to-many-mmt
- facebook/mbart-large-50-many-to-one-mmt
- facebook/m2m100_418M
- facebook/m2m100_1.2B
- facebook/hf-seamless-m4t-medium
- facebook/seamless-m4t-large
- facebook/seamless-m4t-v2-large
- alirezamsh/small100
- Lego-MT/Lego-MT
- bigscience/mt0-small
- bigscience/mt0-base
- bigscience/mt0-large
- bigscience/mt0-xl
- bigscience/bloomz-560m
- bigscience/bloomz-1b1
- bigscience/bloomz-1b7
- bigscience/bloomz-3b
- google-t5/t5-small
- google-t5/t5-base
- google-t5/t5-large
- google/flan-t5-small
- google/flan-t5-base
- google/flan-t5-large
- google/flan-t5-xl
- google/madlad400-3b-mt
- jbochi/madlad400-3b-mt
- NiuTrans/LMT-60-0.6B
- NiuTrans/LMT-60-1.7B
- NiuTrans/LMT-60-4B
- HuggingFaceTB/SmolLM3-3B
- winninghealth/WiNGPT-Babel-2
- utter-project/EuroLLM-1.7B
- utter-project/EuroLLM-1.7B-Instruct
- Unbabel/Tower-Plus-2B
- Unbabel/TowerInstruct-7B-v0.2
- Unbabel/TowerInstruct-Mistral-7B-v0.2
datasets:
- Argos
- Quickmt
- Bergamot
- OPUS
- HPLT
- Tatoeba
---

```text
Machine Translation App using various models with Gradio API and MCP
```
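Because the app exposes its translate function over the Gradio API, it can also be called programmatically. A minimal sketch with `gradio_client`; the Space id and the `/translate_text` endpoint name are assumptions inferred from the function name in Gradioapp.py, not confirmed by this commit:

```python
# Minimal sketch: call the translation endpoint over the Gradio API.
# Space id and api_name below are assumptions, not confirmed by this commit.
from gradio_client import Client

client = Client("TiberiuCristianLeon/Multilingual-Machine-Translation")  # hypothetical Space id
translated, message = client.predict(
    "Hello world",   # input_text
    "English",       # s_language
    "German",        # t_language
    "Helsinki-NLP",  # model_name
    api_name="/translate_text",  # default endpoint name derived from the function name
)
print(translated, message)
```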
Gradioapp.py ADDED
@@ -0,0 +1,735 @@
import gradio as gr
import spaces
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration, AutoTokenizer, AutoModelForSeq2SeqLM, AutoModelForCausalLM, AutoModel, pipeline
from transformers import logging as hflogging
import languagecodes
import httpx, os
import polars as pl

hflogging.set_verbosity_error()
favourite_langs = {"German": "de", "Romanian": "ro", "English": "en", "-----": "-----"}
df = pl.read_parquet("isolanguages.parquet")
non_empty_isos = df.slice(1).filter(pl.col("ISO639-1") != "").rows()
# all_langs = languagecodes.iso_languages_byname
all_langs = {iso[0]: (iso[1], iso[2], iso[3]) for iso in non_empty_isos}  # {'Romanian': ('ro', 'rum', 'ron')}
iso1toall = {iso[1]: (iso[0], iso[2], iso[3]) for iso in non_empty_isos}  # {'ro': ('Romanian', 'rum', 'ron')}
langs = list(favourite_langs.keys())
langs.extend(list(all_langs.keys()))  # Language options as list, add favourite languages first

models = ["Helsinki-NLP", "QUICKMT", "Argos", "Google", "HPLT", "HPLT-OPUS",
          "Helsinki-NLP/opus-mt-tc-bible-big-mul-mul", "Helsinki-NLP/opus-mt-tc-bible-big-mul-deu_eng_nld",
          "Helsinki-NLP/opus-mt-tc-bible-big-mul-deu_eng_fra_por_spa", "Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-mul",
          "Helsinki-NLP/opus-mt-tc-bible-big-roa-deu_eng_fra_por_spa", "Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-roa", "Helsinki-NLP/opus-mt-tc-bible-big-roa-en",
          "facebook/nllb-200-distilled-600M", "facebook/nllb-200-distilled-1.3B", "facebook/nllb-200-1.3B", "facebook/nllb-200-3.3B",
          "facebook/mbart-large-50-many-to-many-mmt", "facebook/mbart-large-50-one-to-many-mmt", "facebook/mbart-large-50-many-to-one-mmt",
          "facebook/m2m100_418M", "facebook/m2m100_1.2B", "alirezamsh/small100",
          "facebook/hf-seamless-m4t-medium", "facebook/seamless-m4t-large", "facebook/seamless-m4t-v2-large",
          "bigscience/mt0-small", "bigscience/mt0-base", "bigscience/mt0-large", "bigscience/mt0-xl",
          "bigscience/bloomz-560m", "bigscience/bloomz-1b1", "bigscience/bloomz-1b7", "bigscience/bloomz-3b",
          "google/madlad400-3b-mt", "jbochi/madlad400-3b-mt",
          "NiuTrans/LMT-60-0.6B", "NiuTrans/LMT-60-1.7B", "NiuTrans/LMT-60-4B",
          "Lego-MT/Lego-MT", "BSC-LT/salamandraTA-2b-instruct",
          "winninghealth/WiNGPT-Babel", "winninghealth/WiNGPT-Babel-2", "winninghealth/WiNGPT-Babel-2.1",
          "Unbabel/Tower-Plus-2B", "HuggingFaceTB/SmolLM3-3B", "Unbabel/TowerInstruct-7B-v0.2",
          "utter-project/EuroLLM-1.7B", "utter-project/EuroLLM-1.7B-Instruct",
          "google-t5/t5-small", "google-t5/t5-base", "google-t5/t5-large",
          "google/flan-t5-small", "google/flan-t5-base", "google/flan-t5-large", "google/flan-t5-xl"
          ]
DEFAULTS = [langs[0], langs[1], models[0]]

def timer(func):
    from time import time
    def translate_text(input_text, s_language, t_language, model_name) -> tuple[str, str]:
        start_time = time()
        translated_text, message_text = func(input_text, s_language, t_language, model_name)
        end_time = time()
        execution_time = end_time - start_time
        message_text = f'Executed in {execution_time:.2f} seconds! {message_text}'
        return translated_text, message_text
    return translate_text

def model_to_cuda(model):
    # Move the model to GPU if available
    if torch.cuda.is_available():
        model = model.to('cuda')
        print("CUDA is available! Using GPU.")
    else:
        print("CUDA not available! Using CPU.")
    return model

def HelsinkiNLPAutoTokenizer(sl, tl, input_text):  # deprecated
    try:  # Standard bilingual model
        model_name = f"Helsinki-NLP/opus-mt-{sl}-{tl}"
        tokenizer = AutoTokenizer.from_pretrained(model_name)
        model = model_to_cuda(AutoModelForSeq2SeqLM.from_pretrained(model_name))
    except EnvironmentError:
        try:  # Tatoeba model as fallback
            model_name = f"Helsinki-NLP/opus-tatoeba-{sl}-{tl}"
            tokenizer = AutoTokenizer.from_pretrained(model_name)
            model = model_to_cuda(AutoModelForSeq2SeqLM.from_pretrained(model_name))
        except EnvironmentError as error:
            return f"Error finding model: {model_name}! Try other available language combination.", error
    input_ids = tokenizer.encode(input_text, return_tensors="pt")
    output_ids = model.generate(input_ids, max_length=512)
    translated_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    message_text = f'Translated from {sl} to {tl} with {model_name}.'
    return translated_text, message_text

class Translators:
    def __init__(self, model_name: str, sl: str, tl: str, input_text: str):
        self.model_name = model_name
        self.sl, self.tl = sl, tl
        self.input_text = input_text
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self.max_new_tokens = 512

    def google(self):
        self.input_text = " ".join(self.input_text.split())
        url = os.environ['GCLIENT'] + f'sl={self.sl}&tl={self.tl}&q={self.input_text}'
        response = httpx.get(url)
        return response.json()[0][0][0]

    def simplepipe(self):
        try:
            pipe = pipeline("translation", model=self.model_name, device=self.device)
            translation = pipe(self.input_text)
            message = f'Translated from {iso1toall[self.sl][0]} to {iso1toall[self.tl][0]} with {self.model_name}.'
            return translation[0]['translation_text'], message
        except Exception as error:
            return f"Error translating with model: {self.model_name}! Try other available language combination or model.", error

    def niutrans(self):
        tokenizer = AutoTokenizer.from_pretrained(self.model_name, padding_side='left')
        model = AutoModelForCausalLM.from_pretrained(self.model_name)
        prompt = f"Translate the following text from {self.sl} into {self.tl}.\n{self.sl}: {self.input_text}.\n{self.tl}: "
        messages = [{"role": "user", "content": prompt}]
        text = tokenizer.apply_chat_template(
            messages,
            tokenize=False,
            add_generation_prompt=True,
        )
        model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
        generated_ids = model.generate(**model_inputs, max_new_tokens=512, num_beams=5, do_sample=False)
        output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
        outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
        outputs = ''.join(outputs) if isinstance(outputs, list) else outputs
        return outputs

    def salamandratapipe(self):
        pipe = pipeline("text-generation", model=self.model_name)
        messages = [{"role": "user", "content": f"Translate the following text from {self.sl} into {self.tl}.\n{self.sl}: {self.input_text} \n{self.tl}:"}]
        return pipe(messages, max_new_tokens=self.max_new_tokens, early_stopping=True, num_beams=5)[0]["generated_text"][1]["content"]

    def hplt(self, opus=False):
        # langs = ['ar', 'bs', 'ca', 'en', 'et', 'eu', 'fi', 'ga', 'gl', 'hi', 'hr', 'is', 'mt', 'nn', 'sq', 'sw', 'zh_hant']
        hplt_models = ['ar-en', 'bs-en', 'ca-en', 'en-ar', 'en-bs', 'en-ca', 'en-et', 'en-eu', 'en-fi',
                       'en-ga', 'en-gl', 'en-hi', 'en-hr', 'en-is', 'en-mt', 'en-nn', 'en-sq', 'en-sw',
                       'en-zh_hant', 'et-en', 'eu-en', 'fi-en', 'ga-en', 'gl-en', 'hi-en', 'hr-en',
                       'is-en', 'mt-en', 'nn-en', 'sq-en', 'sw-en', 'zh_hant-en']
        lang_map = {"zh": "zh_hant"}
        self.sl = lang_map.get(self.sl, self.sl)
        self.tl = lang_map.get(self.tl, self.tl)
        if opus:
            hplt_model = f'HPLT/translate-{self.sl}-{self.tl}-v1.0-hplt_opus'  # HPLT/translate-en-hr-v1.0-hplt_opus
        else:
            hplt_model = f'HPLT/translate-{self.sl}-{self.tl}-v1.0-hplt'  # HPLT/translate-en-hr-v1.0-hplt
        if f'{self.sl}-{self.tl}' in hplt_models:
            pipe = pipeline("translation", model=hplt_model, device=self.device)
            translation = pipe(self.input_text)
            translated_text = translation[0]['translation_text']
            message_text = f'Translated from {iso1toall[self.sl][0]} to {iso1toall[self.tl][0]} with {hplt_model}.'
        else:
            translated_text = f'HPLT model from {iso1toall[self.sl][0]} to {iso1toall[self.tl][0]} not available!'
            message_text = f"Available models: {', '.join(hplt_models)}"
        return translated_text, message_text

    @staticmethod
    def download_argos_model(available_packages, from_code, to_code):
        import argostranslate.package
        print('Downloading model for', from_code, to_code)
        # Download and install Argos Translate package from path
        package_to_install = next(
            filter(lambda x: x.from_code == from_code and x.to_code == to_code, available_packages)
        )
        argostranslate.package.install_from_path(package_to_install.download())

    def argos(self):
        import argostranslate.translate, argostranslate.package
        argostranslate.package.update_package_index()
        available_packages = argostranslate.package.get_available_packages()
        available_slanguages = [lang.from_code for lang in available_packages]
        available_tlanguages = [lang.to_code for lang in available_packages]
        available_languages = sorted(list(set(available_slanguages + available_tlanguages)))
        combos: list[tuple[str, str]] = sorted(list(zip(available_slanguages, available_tlanguages)))
        packages_info = ', '.join(f"{pkg.from_name} ({pkg.from_code}) -> {pkg.to_name} ({pkg.to_code})" for pkg in available_packages)
        # print(available_languages, combos, packages_info)
        if self.sl not in available_languages or self.tl not in available_languages:
            translated_text = f'''No supported Argos model available from {iso1toall[self.sl][0]} to {iso1toall[self.tl][0]}!
Try other model or languages combination from the available Argos models: {', '.join(available_languages)}.'''
        else:
            try:
                if (self.sl, self.tl) in combos:
                    self.__class__.download_argos_model(available_packages, self.sl, self.tl)  # Download model
                    translated_text = argostranslate.translate.translate(self.input_text, self.sl, self.tl)  # Direct translation
                elif (self.sl, 'en') in combos and ('en', self.tl) in combos:
                    self.__class__.download_argos_model(available_packages, self.sl, 'en')  # Download model
                    translated_pivottext = argostranslate.translate.translate(self.input_text, self.sl, 'en')  # Translate to pivot language English
                    self.__class__.download_argos_model(available_packages, 'en', self.tl)  # Download model
                    translated_text = argostranslate.translate.translate(translated_pivottext, 'en', self.tl)  # Translate from pivot language English
                    message = f'Translated from {iso1toall[self.sl][0]} to {iso1toall[self.tl][0]} with Argos using pivot language English.'
                else:
                    translated_text = f"No Argos model for {iso1toall[self.sl][0]} to {iso1toall[self.tl][0]}. Try other model or languages combination from the available Argos models: {packages_info}."
            except StopIteration as IterationError:
                # packages_info = ', '.join(f"{pkg.get_description()}->{str(pkg.links)} {str(pkg.source_languages)}" for pkg in available_packages)
                translated_text = f"No Argos model for {iso1toall[self.sl][0]} to {iso1toall[self.tl][0]}. Error: {IterationError}. Try other model or languages combination from the available Argos models: {packages_info}."
            except Exception as generalerror:
                translated_text = f"General error: {generalerror}"
        return translated_text

    @staticmethod
    def quickmttranslate(model_path, input_text):
        from quickmt import Translator
        # 'auto' auto-detects GPU, set to "cpu" to force CPU inference
        device = 'gpu' if torch.cuda.is_available() else 'cpu'
        translator = Translator(str(model_path), device=device)
        # translation = Translator(f"./quickmt-{self.sl}-{self.tl}/", device="auto", inter_threads=2)
        # set beam size to 1 for faster speed (but lower quality)
        translation = translator(input_text, beam_size=5, max_input_length=512, max_decoding_length=512)
        # print(model_path, input_text, translation)
        return translation

    @staticmethod
    def quickmtdownload(model_name):
        from quickmt.hub import hf_download
        from pathlib import Path
        model_path = Path("/quickmt/models") / model_name
        if not model_path.exists():
            hf_download(
                model_name=f"quickmt/{model_name}",
                output_dir=Path("/quickmt/models") / model_name,
            )
        return model_path

    def quickmt(self):
        model_name = f"quickmt-{self.sl}-{self.tl}"
        # from quickmt.hub import hf_list
        # quickmt_models = [i.split("/quickmt-")[1] for i in hf_list()]
        # quickmt_models.sort()
        quickmt_models = ['ar-en', 'bn-en', 'cs-en', 'da-en', 'de-en', 'el-en', 'en-ar', 'en-bn',
                          'en-cs', 'en-da', 'en-de', 'en-el', 'en-es', 'en-fa', 'en-fr', 'en-he',
                          'en-hi', 'en-hu', 'en-id', 'en-it', 'en-ja', 'en-ko', 'en-lv', 'en-pl',
                          'en-pt', 'en-ro', 'en-ru', 'en-sv', 'en-th', 'en-tr', 'en-ur', 'en-vi',
                          'en-zh', 'es-en', 'fa-en', 'fr-en', 'he-en', 'hi-en', 'hu-en', 'id-en',
                          'it-en', 'ja-en', 'ko-en', 'lv-en', 'pl-en', 'pt-en', 'ro-en', 'ru-en',
                          'th-en', 'tr-en', 'ur-en', 'vi-en', 'zh-en']
        # available_languages = list(set([lang for model in quickmt_models for lang in model.split('-')]))
        # available_languages.sort()
        available_languages = ['ar', 'bn', 'cs', 'da', 'de', 'el', 'en', 'es', 'fa', 'fr', 'he',
                               'hi', 'hu', 'id', 'it', 'ja', 'ko', 'lv', 'pl', 'pt', 'ro', 'ru',
                               'sv', 'th', 'tr', 'ur', 'vi', 'zh']
        # print(quickmt_models, available_languages)
        # Direct translation model
        if f"{self.sl}-{self.tl}" in quickmt_models:
            model_path = Translators.quickmtdownload(model_name)
            translated_text = Translators.quickmttranslate(model_path, self.input_text)
            message = f'Translated from {iso1toall[self.sl][0]} to {iso1toall[self.tl][0]} with {model_name}.'
        # Pivot language English
        elif self.sl in available_languages and self.tl in available_languages:
            model_name = f"quickmt-{self.sl}-en"
            model_path = Translators.quickmtdownload(model_name)
            entranslation = Translators.quickmttranslate(model_path, self.input_text)
            model_name = f"quickmt-en-{self.tl}"
            model_path = Translators.quickmtdownload(model_name)
            translated_text = Translators.quickmttranslate(model_path, entranslation)
            message = f'Translated from {iso1toall[self.sl][0]} to {iso1toall[self.tl][0]} with Quickmt using pivot language English.'
        else:
            translated_text = f'No Quickmt model available for translation from {iso1toall[self.sl][0]} to {iso1toall[self.tl][0]}!'
            message = f"Available models: {', '.join(quickmt_models)}"
        return translated_text, message

    def HelsinkiNLP_mulroa(self):
        try:
            pipe = pipeline("translation", model=self.model_name, device=self.device)
            tgt_lang = iso1toall.get(self.tl)[2]  # 'deu', 'ron', 'eng', 'fra'
            translation = pipe(f'>>{tgt_lang}<< {self.input_text}')
            return translation[0]['translation_text'], f'Translated from {iso1toall[self.sl][0]} to {iso1toall[self.tl][0]} with {self.model_name}.'
        except Exception as error:
            return f"Error translating with model: {self.model_name}! Try other available language combination.", error

    def HelsinkiNLP(self):
        try:  # Standard bilingual model
            model_name = f"Helsinki-NLP/opus-mt-{self.sl}-{self.tl}"
            pipe = pipeline("translation", model=model_name, device=self.device)
            translation = pipe(self.input_text)
            return translation[0]['translation_text'], f'Translated from {iso1toall[self.sl][0]} to {iso1toall[self.tl][0]} with {model_name}.'
        except EnvironmentError:
            try:  # Tatoeba models
                model_name = f"Helsinki-NLP/opus-tatoeba-{self.sl}-{self.tl}"
                pipe = pipeline("translation", model=model_name, device=self.device)
                translation = pipe(self.input_text)
                return translation[0]['translation_text'], f'Translated from {iso1toall[self.sl][0]} to {iso1toall[self.tl][0]} with {model_name}.'
            except EnvironmentError as error:
                self.model_name = "Helsinki-NLP/opus-mt-tc-bible-big-mul-mul"  # Last resort: try multi to multi
                return self.HelsinkiNLP_mulroa()
        except KeyError as error:
            return f"Error: Translation direction {self.sl} to {self.tl} is not supported by Helsinki Translation Models", error

    def madlad(self):
        model = T5ForConditionalGeneration.from_pretrained(self.model_name, device_map="auto")
        tokenizer = T5Tokenizer.from_pretrained(self.model_name)
        text = f"<2{self.tl}> {self.input_text}"
        # input_ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
        # outputs = model.generate(input_ids=input_ids, max_new_tokens=512)
        # return tokenizer.decode(outputs[0], skip_special_tokens=True)
        # return tokenizer.batch_decode(outputs, skip_special_tokens=True)
        # Use a pipeline as a high-level helper
        translator = pipeline('translation', model=model, tokenizer=tokenizer, src_lang=self.sl, tgt_lang=self.tl)
        translated_text = translator(text, max_length=512)
        return translated_text[0]['translation_text']

    def smollm(self):
        tokenizer = AutoTokenizer.from_pretrained(self.model_name)
        model = AutoModelForCausalLM.from_pretrained(self.model_name)
        prompt = f"""Translate the following {self.sl} text to {self.tl}, generating only the translated text and maintaining the original meaning and tone:
{self.input_text}
Translation:"""
        inputs = tokenizer(prompt, return_tensors="pt")
        outputs = model.generate(
            inputs.input_ids,
            max_length=len(inputs.input_ids[0]) + 150,
            temperature=0.3,
            do_sample=True
        )
        response = tokenizer.decode(outputs[0], skip_special_tokens=True)
        print(response)
        return response.split("Translation:")[-1].strip()

    def flan(self):
        tokenizer = T5Tokenizer.from_pretrained(self.model_name, legacy=False)
        model = T5ForConditionalGeneration.from_pretrained(self.model_name)
        prompt = f"translate {self.sl} to {self.tl}: {self.input_text}"
        input_ids = tokenizer(prompt, return_tensors="pt").input_ids
        outputs = model.generate(input_ids)
        return tokenizer.decode(outputs[0], skip_special_tokens=True).strip()

    def tfive(self):
        tokenizer = T5Tokenizer.from_pretrained(self.model_name)
        model = T5ForConditionalGeneration.from_pretrained(self.model_name, device_map="auto")
        prompt = f"translate {self.sl} to {self.tl}: {self.input_text}"
        input_ids = tokenizer.encode(prompt, return_tensors="pt")
        output_ids = model.generate(input_ids, max_length=512)
        translated_text = tokenizer.decode(output_ids[0], skip_special_tokens=True).strip()
        return translated_text

    def mbart_many_to_many(self):
        from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
        model = MBartForConditionalGeneration.from_pretrained(self.model_name)
        tokenizer = MBart50TokenizerFast.from_pretrained(self.model_name)
        # translate source to target
        tokenizer.src_lang = languagecodes.mbart_large_languages[self.sl]
        encoded = tokenizer(self.input_text, return_tensors="pt")
        generated_tokens = model.generate(
            **encoded,
            forced_bos_token_id=tokenizer.lang_code_to_id[languagecodes.mbart_large_languages[self.tl]]
        )
        return tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]

    def mbart_one_to_many(self):
        # translate from English
        from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
        model = MBartForConditionalGeneration.from_pretrained(self.model_name)
        tokenizer = MBart50TokenizerFast.from_pretrained(self.model_name, src_lang="en_XX")
        model_inputs = tokenizer(self.input_text, return_tensors="pt")
        langid = languagecodes.mbart_large_languages[self.tl]
        generated_tokens = model.generate(
            **model_inputs,
            forced_bos_token_id=tokenizer.lang_code_to_id[langid]
        )
        return tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]

    def mbart_many_to_one(self):
        # translate to English
        from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
        model = MBartForConditionalGeneration.from_pretrained(self.model_name)
        tokenizer = MBart50TokenizerFast.from_pretrained(self.model_name)
        tokenizer.src_lang = languagecodes.mbart_large_languages[self.sl]
        encoded = tokenizer(self.input_text, return_tensors="pt")
        generated_tokens = model.generate(**encoded)
        return tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]

    def mtom(self):
        from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
        model = M2M100ForConditionalGeneration.from_pretrained(self.model_name)
        tokenizer = M2M100Tokenizer.from_pretrained(self.model_name)
        tokenizer.src_lang = self.sl
        encoded = tokenizer(self.input_text, return_tensors="pt")
        generated_tokens = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id(self.tl))
        return tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]

    def smallonehundred(self):
        from transformers import M2M100ForConditionalGeneration
        from tokenization_small100 import SMALL100Tokenizer
        model = M2M100ForConditionalGeneration.from_pretrained(self.model_name)
        tokenizer = SMALL100Tokenizer.from_pretrained(self.model_name)
        tokenizer.tgt_lang = self.tl
        encoded_sl = tokenizer(self.input_text, return_tensors="pt")
        generated_tokens = model.generate(**encoded_sl, max_length=256, num_beams=5)
        return tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]

    def LegoMT(self):
        from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
        model = M2M100ForConditionalGeneration.from_pretrained(self.model_name)  # "Lego-MT/Lego-MT"
        tokenizer = M2M100Tokenizer.from_pretrained(self.model_name)
        tokenizer.src_lang = self.sl
        encoded = tokenizer(self.input_text, return_tensors="pt")
        generated_tokens = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id(self.tl))
        return tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]

    def bigscience(self):
        tokenizer = AutoTokenizer.from_pretrained(self.model_name)
        model = AutoModelForSeq2SeqLM.from_pretrained(self.model_name)
        self.input_text = self.input_text if self.input_text.endswith('.') else f'{self.input_text}.'
        inputs = tokenizer.encode(f"Translate to {self.tl}: {self.input_text}", return_tensors="pt")
        outputs = model.generate(inputs)
        translation = tokenizer.decode(outputs[0])
        translation = translation.replace('<pad> ', '').replace('</s>', '')
        return translation

    def bloomz(self):
        tokenizer = AutoTokenizer.from_pretrained(self.model_name)
        model = AutoModelForCausalLM.from_pretrained(self.model_name)
        self.input_text = self.input_text if self.input_text.endswith('.') else f'{self.input_text}.'
        # inputs = tokenizer.encode(f"Translate from {self.sl} to {self.tl}: {self.input_text} Translation:", return_tensors="pt")
        inputs = tokenizer.encode(f"Translate to {self.tl}: {self.input_text}", return_tensors="pt")
        outputs = model.generate(inputs)
        translation = tokenizer.decode(outputs[0])
        translation = translation.replace('<pad> ', '').replace('</s>', '')
        translation = translation.split('Translation:')[-1].strip() if 'Translation:' in translation else translation.strip()
        return translation

    def nllb(self):
        tokenizer = AutoTokenizer.from_pretrained(self.model_name, src_lang=self.sl)
        # model = AutoModelForSeq2SeqLM.from_pretrained(self.model_name, device_map="auto", torch_dtype=torch.bfloat16)
        model = AutoModelForSeq2SeqLM.from_pretrained(self.model_name)
        translator = pipeline('translation', model=model, tokenizer=tokenizer, src_lang=self.sl, tgt_lang=self.tl)
        translated_text = translator(self.input_text, max_length=512)
        return translated_text[0]['translation_text']

    def seamlessm4t1(self):
        from transformers import AutoProcessor, SeamlessM4TModel
        processor = AutoProcessor.from_pretrained(self.model_name)
        model = SeamlessM4TModel.from_pretrained(self.model_name)
        src_lang = iso1toall.get(self.sl)[2]  # 'deu', 'ron', 'eng', 'fra'
        tgt_lang = iso1toall.get(self.tl)[2]
        text_inputs = processor(text=self.input_text, src_lang=src_lang, return_tensors="pt")
        output_tokens = model.generate(**text_inputs, tgt_lang=tgt_lang, generate_speech=False)
        return processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True)

    def seamlessm4t2(self):
        from transformers import AutoProcessor, SeamlessM4Tv2ForTextToText
        processor = AutoProcessor.from_pretrained(self.model_name)
        model = SeamlessM4Tv2ForTextToText.from_pretrained(self.model_name)
        src_lang = iso1toall.get(self.sl)[2]  # 'deu', 'ron', 'eng', 'fra'
        tgt_lang = iso1toall.get(self.tl)[2]
        text_inputs = processor(text=self.input_text, src_lang=src_lang, return_tensors="pt")
        decoder_input_ids = model.generate(**text_inputs, tgt_lang=tgt_lang)[0].tolist()
        return processor.decode(decoder_input_ids, skip_special_tokens=True)

    def wingpt(self):
        model = AutoModelForCausalLM.from_pretrained(
            self.model_name,
            torch_dtype="auto",
            device_map="auto"
        )
        tokenizer = AutoTokenizer.from_pretrained(self.model_name)
        # input_json = '{"input_text": self.input_text}'
        messages = [
            {"role": "system", "content": f"Translate this to {self.tl} language"},
            {"role": "user", "content": self.input_text}
        ]

        text = tokenizer.apply_chat_template(
            messages,
            tokenize=False,
            add_generation_prompt=True
        )
        model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

        generated_ids = model.generate(
            **model_inputs,
            max_new_tokens=512,
            temperature=0.1
        )

        generated_ids = [
            output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
        ]
        output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
        result = output.split('\n')[-1].strip() if '\n' in output else output.strip()
        return result

    def eurollm(self):
        tokenizer = AutoTokenizer.from_pretrained(self.model_name)
        model = AutoModelForCausalLM.from_pretrained(self.model_name)
        prompt = f"{self.sl}: {self.input_text} {self.tl}:"
        inputs = tokenizer(prompt, return_tensors="pt")
        outputs = model.generate(**inputs, max_new_tokens=512)
        output = tokenizer.decode(outputs[0], skip_special_tokens=True)
        print(output)
        # result = output.rsplit(f'{self.tl}:')[-1].strip() if f'{self.tl}:' in output else output.strip()
        result = output.rsplit(f'{self.tl}:')[-1].strip() if '\n' in output or f'{self.tl}:' in output else output.strip()
        return result

    def eurollm_instruct(self):
        tokenizer = AutoTokenizer.from_pretrained(self.model_name)
        model = AutoModelForCausalLM.from_pretrained(self.model_name)
        text = f'<|im_start|>system\n<|im_end|>\n<|im_start|>user\nTranslate the following {self.sl} source text to {self.tl}:\n{self.sl}: {self.input_text} \n{self.tl}: <|im_end|>\n<|im_start|>assistant\n'
        inputs = tokenizer(text, return_tensors="pt")
        outputs = model.generate(**inputs, max_new_tokens=512)
        output = tokenizer.decode(outputs[0], skip_special_tokens=True)
        if f'{self.tl}:' in output:
            output = output.rsplit(f'{self.tl}:')[-1].strip().replace('assistant\n', '').strip()
        return output

    def unbabel(self):
        pipe = pipeline("text-generation", model=self.model_name, torch_dtype=torch.bfloat16, device_map="auto")
        messages = [{"role": "user",
                     "content": f"Translate the following text from {self.sl} into {self.tl}.\n{self.sl}: {self.input_text}.\n{self.tl}:"}]
        prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=False)
        tokenized_input = pipe.tokenizer(self.input_text, return_tensors="pt")
        num_input_tokens = len(tokenized_input["input_ids"][0])
        max_new_tokens = round(num_input_tokens + 0.5 * num_input_tokens)
        outputs = pipe(prompt, max_new_tokens=max_new_tokens, do_sample=False)
        translated_text = outputs[0]["generated_text"]
        print(f"Input chars: {len(self.input_text)}", f"Input tokens: {num_input_tokens}", f"max_new_tokens: {max_new_tokens}",
              "Chars to tokens ratio:", round(len(self.input_text) / num_input_tokens, 2), f"Raw translation: {translated_text}")
        markers = ["<end_of_turn>", "<|im_end|>", "<|im_start|>assistant"]  # , "\n"
        for marker in markers:
            if marker in translated_text:
                translated_text = translated_text.split(marker)[1].strip()
        translated_text = translated_text.replace('Answer:', '', 1).strip() if translated_text.startswith('Answer:') else translated_text
        translated_text = translated_text.split("Translated text:")[0].strip() if "Translated text:" in translated_text else translated_text
        split_translated_text = translated_text.split('\n', translated_text.count('\n'))
        translated_text = '\n'.join(split_translated_text[:self.input_text.count('\n')+1])
        return translated_text

    def bergamot(self, model_name: str = 'deen'):
        try:
            import bergamot
            # self.input_text = [self.input_text] if isinstance(self.input_text, str) else self.input_text
            config = bergamot.ServiceConfig(numWorkers=4)
            service = bergamot.Service(config)
            model = service.modelFromConfigPath(f"./{model_name}/bergamot.config.yml")
            options = bergamot.ResponseOptions(alignment=False, qualityScores=False, HTML=False)
            rawresponse = service.translate(model, bergamot.VectorString(self.input_text), options)
            translated_text: str = next(iter(rawresponse)).target.text
            message_text = f"Translated from {self.sl} to {self.tl} with Bergamot {model_name}."
        except Exception as error:
            translated_text, message_text = f"Error translating with Bergamot {model_name}!", error
        return translated_text, message_text

@timer
@spaces.GPU
def translate_text(input_text: str, s_language: str, t_language: str, model_name: str) -> tuple[str, str]:
    """
    Translates the input text from the source language to the target language using a specified model.

    Parameters:
        input_text (str): The source text to be translated
        s_language (str): The source language of the input text
        t_language (str): The target language in which the input text is translated
        model_name (str): The selected translation model name

    Returns:
        tuple:
            translated_text (str): The input text translated to the selected target language
            message_text (str): A descriptive message summarizing the translation process. Example: "Translated from English to German with Helsinki-NLP."

    Example:
        >>> translate_text("Hello world", "English", "German", "Helsinki-NLP")
        ("Hallo Welt", "Translated from English to German with Helsinki-NLP.")
    """

    sl = all_langs[s_language][0]
    tl = all_langs[t_language][0]
    message_text = f'Translated from {s_language} to {t_language} with {model_name}'
    if not input_text or input_text.strip() == '':
        translated_text = 'No input text entered!'
        message_text = 'Please enter a text to translate!'
        return translated_text, message_text
    if sl == tl:
        translated_text = f'Source language {s_language} identical to target language {t_language}!'
        message_text = 'Please choose different target and source languages!'
        return translated_text, message_text
    try:
        if "-mul" in model_name.lower() or "mul-" in model_name.lower() or "-roa" in model_name.lower():
            translated_text, message_text = Translators(model_name, sl, tl, input_text).HelsinkiNLP_mulroa()

        elif model_name == "Helsinki-NLP":
            translated_text, message_text = Translators(model_name, sl, tl, input_text).HelsinkiNLP()

        elif model_name == 'Argos':
            translated_text = Translators(model_name, sl, tl, input_text).argos()

        elif model_name == "QUICKMT":
            translated_text, message_text = Translators(model_name, sl, tl, input_text).quickmt()

        elif model_name == 'Google':
            translated_text = Translators(model_name, sl, tl, input_text).google()

        elif model_name == "Helsinki-NLP/opus-mt-tc-bible-big-roa-en":
            translated_text, message_text = Translators(model_name, sl, tl, input_text).simplepipe()

        elif "m2m" in model_name.lower():
            translated_text = Translators(model_name, sl, tl, input_text).mtom()

        elif "small100" in model_name.lower():
            translated_text = Translators(model_name, sl, tl, input_text).smallonehundred()

        elif "lego" in model_name.lower():
            translated_text = Translators(model_name, sl, tl, input_text).LegoMT()

        elif "niutrans" in model_name.lower():
            translated_text = Translators(model_name, sl, tl, input_text).niutrans()

        elif "salamandra" in model_name.lower():
            translated_text = Translators(model_name, s_language, t_language, input_text).salamandratapipe()

        elif model_name.startswith('google-t5'):
            translated_text = Translators(model_name, s_language, t_language, input_text).tfive()

        elif 'flan' in model_name.lower():
            translated_text = Translators(model_name, s_language, t_language, input_text).flan()

        elif 'madlad' in model_name.lower():
            translated_text = Translators(model_name, sl, tl, input_text).madlad()

        elif 'mt0' in model_name.lower():
            translated_text = Translators(model_name, s_language, t_language, input_text).bigscience()

        elif 'bloomz' in model_name.lower():
            translated_text = Translators(model_name, s_language, t_language, input_text).bloomz()

        elif 'nllb' in model_name.lower():
            nnlbsl, nnlbtl = languagecodes.nllb_language_codes[s_language], languagecodes.nllb_language_codes[t_language]
            translated_text = Translators(model_name, nnlbsl, nnlbtl, input_text).nllb()

        elif model_name == "facebook/mbart-large-50-many-to-many-mmt":
            translated_text = Translators(model_name, s_language, t_language, input_text).mbart_many_to_many()

        elif model_name == "facebook/mbart-large-50-one-to-many-mmt":
            translated_text = Translators(model_name, s_language, t_language, input_text).mbart_one_to_many()

        elif model_name == "facebook/mbart-large-50-many-to-one-mmt":
            translated_text = Translators(model_name, s_language, t_language, input_text).mbart_many_to_one()

        elif model_name == "facebook/seamless-m4t-v2-large":
            translated_text = Translators(model_name, sl, tl, input_text).seamlessm4t2()

        elif "m4t-medium" in model_name or "m4t-large" in model_name:
            translated_text = Translators(model_name, sl, tl, input_text).seamlessm4t1()

        elif model_name == "utter-project/EuroLLM-1.7B-Instruct":
            translated_text = Translators(model_name, s_language, t_language, input_text).eurollm_instruct()

        elif model_name == "utter-project/EuroLLM-1.7B":
            translated_text = Translators(model_name, s_language, t_language, input_text).eurollm()

        elif 'Unbabel' in model_name:
            translated_text = Translators(model_name, s_language, t_language, input_text).unbabel()

        elif model_name == "HuggingFaceTB/SmolLM3-3B":
            translated_text = Translators(model_name, s_language, t_language, input_text).smollm()

        elif "winninghealth/WiNGPT" in model_name:
            translated_text = Translators(model_name, s_language, t_language, input_text).wingpt()

        elif "HPLT" in model_name:
            if model_name == "HPLT-OPUS":
                translated_text, message_text = Translators(model_name, sl, tl, input_text).hplt(opus=True)
            else:
                translated_text, message_text = Translators(model_name, sl, tl, input_text).hplt()

        elif model_name == "Bergamot":
            translated_text, message_text = Translators(model_name, s_language, t_language, input_text).bergamot()

        else:  # No branch matched: report it instead of leaving translated_text unbound
            translated_text = f'Model {model_name} is not supported!'
            message_text = 'Please choose another model!'

    except Exception as trerror:
        translated_text = f'Error in main function "translate_text": {trerror}'
    finally:
        print(input_text, translated_text, message_text)
        return translated_text, message_text

def swap_languages(src_lang, tgt_lang):
    '''Swap dropdown values for source and target language'''
    return tgt_lang, src_lang

def get_info(model_name: str, sl: str = None, tl: str = None):
    helsinki = '### [Helsinki-NLP](https://huggingface.co/Helsinki-NLP "Helsinki-NLP")'
    if model_name == "Helsinki-NLP" and sl and tl:
        url = f'https://huggingface.co/{model_name}/opus-mt-{sl}-{tl}/raw/main/README.md'
        response = httpx.get(url).text
        if 'Repository not found' in response or 'Invalid username or password' in response:
            return helsinki
        return response
    elif model_name == "Argos":
        return httpx.get('https://huggingface.co/TiberiuCristianLeon/Argostranslate/raw/main/README.md').text
    elif "HPLT" in model_name:
        return """[HPLT Uni direction translation models](https://huggingface.co/collections/HPLT/hplt-12-uni-direction-translation-models)
['ar-en', 'bs-en', 'ca-en', 'en-ar', 'en-bs', 'en-ca', 'en-et', 'en-eu', 'en-fi',
'en-ga', 'en-gl', 'en-hi', 'en-hr', 'en-is', 'en-mt', 'en-nn', 'en-sq', 'en-sw',
'en-zh_hant', 'et-en', 'eu-en', 'fi-en', 'ga-en', 'gl-en', 'hi-en', 'hr-en',
'is-en', 'mt-en', 'nn-en', 'sq-en', 'sw-en', 'zh_hant-en']"""
    elif "QUICKMT" in model_name:
        return """[QUICKMT](https://huggingface.co/quickmt)
['ar-en', 'bn-en', 'cs-en', 'da-en', 'de-en', 'el-en', 'en-ar', 'en-bn',
'en-cs', 'en-da', 'en-de', 'en-el', 'en-es', 'en-fa', 'en-fr', 'en-he',
'en-hi', 'en-hu', 'en-id', 'en-it', 'en-ja', 'en-ko', 'en-lv', 'en-pl',
'en-pt', 'en-ro', 'en-ru', 'en-sv', 'en-th', 'en-tr', 'en-ur', 'en-vi',
'en-zh', 'es-en', 'fa-en', 'fr-en', 'he-en', 'hi-en', 'hu-en', 'id-en',
'it-en', 'ja-en', 'ko-en', 'lv-en', 'pl-en', 'pt-en', 'ro-en', 'ru-en',
'th-en', 'tr-en', 'ur-en', 'vi-en', 'zh-en']"""
    elif model_name == "Google":
        return "Google Translate Online"
    else:
        return httpx.get(f'https://huggingface.co/{model_name}/raw/main/README.md').text

def create_interface():
    with gr.Blocks() as interface:
        gr.Markdown("### Machine Text Translation with Gradio API and MCP Server")
        input_text = gr.Textbox(label="Enter text to translate:", placeholder="Type your text here, maximum 512 tokens", autofocus=True, submit_btn='Translate', max_length=512)

        with gr.Row(variant="compact"):
            s_language = gr.Dropdown(choices=langs, value=DEFAULTS[0], label="Source language", interactive=True, scale=2)
            t_language = gr.Dropdown(choices=langs, value=DEFAULTS[1], label="Target language", interactive=True, scale=2)
            swap_btn = gr.Button("Swap Languages", size="md", scale=1)
            swap_btn.click(fn=swap_languages, inputs=[s_language, t_language], outputs=[s_language, t_language], api_name=False, show_api=False)
        # with gr.Row(equal_height=True):
        model_name = gr.Dropdown(choices=models, label=f"Select a model. Default is {DEFAULTS[2]}.", value=DEFAULTS[2], interactive=True, scale=2)
        # translate_btn = gr.Button(value="Translate", scale=1)

        translated_text = gr.Textbox(label="Translated text:", placeholder="Display field for translation", interactive=False, show_copy_button=True, lines=2)
        message_text = gr.Textbox(label="Messages:", placeholder="Display field for status and error messages", interactive=False,
                                  value=f'Default translation settings: from {s_language.value} to {t_language.value} with {model_name.value}.')
        allmodels = gr.HTML(label="Model links:", value=', '.join([f'<a href="https://huggingface.co/{model}">{model}</a>' for model in models]))
        model_info = gr.Markdown(label="Model info:", value=get_info(DEFAULTS[2], DEFAULTS[0], DEFAULTS[1]), show_copy_button=True)
        model_name.change(fn=get_info, inputs=[model_name, s_language, t_language], outputs=model_info, api_name=False, show_api=False)

        # translate_btn.click(
        #     fn=translate_text,
        #     inputs=[input_text, s_language, t_language, model_name],
        #     outputs=[translated_text, message_text]
        # )
        input_text.submit(
            fn=translate_text,
            inputs=[input_text, s_language, t_language, model_name],
            outputs=[translated_text, message_text]
        )

    return interface

interface = create_interface()
if __name__ == "__main__":
    interface.launch(mcp_server=True)
    # interface.queue().launch(server_name="0.0.0.0", show_error=True, server_port=7860, mcp_server=True)
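With `mcp_server=True`, Gradio also serves the `translate_text` endpoint as an MCP tool alongside the regular API. A minimal sketch of where an MCP client would connect, assuming Gradio's documented default MCP route and a local launch on the default port (both assumptions, not shown in this commit):

```text
MCP server endpoint (assumed default): http://localhost:7860/gradio_api/mcp/sse
```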