XGenerationLab committed (verified)
Commit b74605a · 1 Parent(s): 5dbd355

Update README.md

Files changed (1): README.md (+31 −1)

README.md CHANGED
@@ -62,7 +62,7 @@ transformers >= 4.37.0
 Here is a simple code snippet for quickly using the **XiYanSQL-QwenCoder** model. We provide a Chinese version of the prompt, and you just need to replace the placeholders for "question," "db_schema," and "evidence" to get started. We recommend using our [M-Schema](https://github.com/XGenerationLab/M-Schema) format for the schema; other formats such as DDL are also acceptable, but they may affect performance.
 Currently, we mainly support mainstream dialects like SQLite, PostgreSQL, and MySQL.
 
-```
+```python
 nl2sqlite_template_cn = """你是一名{dialect}专家,现在需要阅读并理解下面的【数据库schema】描述,以及可能用到的【参考信息】,并运用{dialect}知识生成sql语句回答【用户问题】。
 【用户问题】
 {question}
@@ -108,6 +108,36 @@ generated_ids = [
 ]
 response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 ```
+
+### Inference with vLLM
+```python
+from vllm import LLM, SamplingParams
+from transformers import AutoTokenizer
+
+model_path = "XGenerationLab/XiYanSQL-QwenCoder-32B-2412"
+llm = LLM(model=model_path, tensor_parallel_size=8)
+tokenizer = AutoTokenizer.from_pretrained(model_path)
+sampling_params = SamplingParams(
+    n=1,
+    temperature=0.1,
+    max_tokens=1024
+)
+
+# dialects -> ['SQLite', 'PostgreSQL', 'MySQL']
+prompt = nl2sqlite_template_cn.format(dialect="", db_schema="", question="", evidence="")
+message = [{'role': 'user', 'content': prompt}]
+text = tokenizer.apply_chat_template(
+    message,
+    tokenize=False,
+    add_generation_prompt=True
+)
+outputs = llm.generate([text], sampling_params=sampling_params)
+response = outputs[0].outputs[0].text
+```
+
 ## Acknowledgments
 If you find our work useful, please give us a citation or a like, so we can make a greater contribution to the open-source community!
 ```bibtex