Paper: QLoRA: Efficient Finetuning of Quantized LLMs (arXiv:2305.14314)
veriforge-gemma-2b-it is a QLoRA-fine-tuned version of google/gemma-2b-it that specializes in prompt-based circuit synthesis for digital logic design, specifically in Verilog HDL.
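QLoRA finetuning keeps the base model frozen in 4-bit NF4 precision and trains small low-rank (LoRA) adapters on top of it, so only a tiny fraction of the parameters receive gradients. The sketch below is illustrative only, assuming the standard transformers + peft + bitsandbytes workflow; the rank, target modules, and other hyperparameters are placeholders, not the settings used for this checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Quantize the frozen base model to 4-bit NF4 with double quantization (QLoRA-style).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b-it",
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

# Attach trainable low-rank adapters; only these weights are updated during finetuning.
lora_config = LoraConfig(
    r=16,                      # illustrative rank, not the value used for this checkpoint
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # illustrative
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```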
Base model: google/gemma-2b-it

Example usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "louijiec/veriforge-gemma-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompts follow the "### Prompt:" / "### Response:" template shown in this card.
prompt = "### Prompt:\nWrite Verilog code for a 3-input NAND gate.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Example output:

```verilog
module nand_3_input (output y, input a0, a1, a2);
    assign y = ~(a0 & a1 & a2);
endmodule
```
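Since QLoRA trains against a 4-bit base model, the checkpoint can also be loaded in 4-bit at inference time to reduce memory use. A minimal sketch, assuming bitsandbytes is installed and a CUDA device is available; generation then proceeds exactly as in the example above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "louijiec/veriforge-gemma-2b-it"

# NF4 4-bit quantization, the same scheme QLoRA uses for the frozen base weights.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
# Generation is identical to the full-precision example above.
```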