Update README.md

---
license: gemma
base_model: google/gemma-3-1b-it
tags:
- gemma
- gemma3
- instruction-tuned
- fine-tuned
- safety
- gguf
- axion
---

# AdvRahul/Axion-Lite-1B-Q5_K_M-GGUF

**Axion-Lite-1B** is a safety-enhanced, quantized version of Google's `gemma-3-1b-it` model. It has been fine-tuned specifically to improve its safety alignment, making it more robust and reliable for a wide range of applications.

The model is provided in the GGUF format, which allows it to run efficiently on CPUs and other hardware with limited resources.

## 🚀 Model Details

* **Model Creator:** AdvRahul
* **Base Model:** [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it)
* **Fine-tuning Focus:** Enhanced Safety & Harmlessness through red-teaming.
* **Quantization:** `Q5_K_M` via GGUF. This quantization offers an excellent balance between model size, inference speed, and performance preservation.
* **Architecture:** Gemma 3
* **License:** Gemma 3 Terms of Use.

---

## 💻 How to Use

This model is in GGUF format and is designed to be used with frameworks like `llama.cpp` and its Python bindings.

### Using `llama-cpp-python`

First, install the necessary library. Ensure you have a version recent enough to support Gemma 3 models.

```bash
pip install llama-cpp-python
```
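
If you want GPU offload, `llama-cpp-python` can also be built with CUDA enabled. The `CMAKE_ARGS` flag below follows the project's install documentation, but check those docs for the right backend flag on your platform:

```bash
# Optional: rebuild with CUDA support so layers can be offloaded to the GPU
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python --force-reinstall --no-cache-dir
```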

Then, you can use the following Python script to run the model:

```python
from llama_cpp import Llama

# Download the model from the Hugging Face Hub before running this,
# or let llama-cpp-python download it for you.
llm = Llama.from_pretrained(
    repo_id="AdvRahul/Axion-Lite-1B-Q5_K_M-GGUF",
    filename="Axion-Lite-1B-Q5_K_M.gguf",
    verbose=False,
)

prompt = "What are the key principles of responsible AI development?"

# The Gemma 3 instruction-tuned model uses a specific chat template.
# For simple prompts, wrap the text in user/model turn markers:
# <start_of_turn>user\n{prompt}<end_of_turn>\n<start_of_turn>model
chat_prompt = f"<start_of_turn>user\n{prompt}<end_of_turn>\n<start_of_turn>model"

output = llm(chat_prompt, max_tokens=256, stop=["<end_of_turn>"], echo=False)

print(output["choices"][0]["text"])
```
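
Alternatively, `llama-cpp-python`'s chat API applies the chat template stored in the GGUF metadata, so you don't have to assemble the turn markers yourself. A minimal sketch, reusing the `llm` object from the script above:

```python
# create_chat_completion formats the conversation with the model's own
# chat template before generating.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What are the key principles of responsible AI development?"}
    ],
    max_tokens=256,
)

print(response["choices"][0]["message"]["content"])
```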

### Using `llama.cpp` (CLI)

You can also run this model directly from the command line after cloning and building the `llama.cpp` repository.

```bash
# Clone and build llama.cpp
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Run inference (-e turns the \n sequences in the prompt into real newlines)
./main -m /path/to/your/models/Axion-Lite-1B-Q5_K_M.gguf -e -p "<start_of_turn>user\nWhat is the capital of India?<end_of_turn>\n<start_of_turn>model" -n 128
```
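
Note that recent `llama.cpp` releases have moved to a CMake build and renamed the CLI binary to `llama-cli`; if `make` or `./main` isn't available in your checkout, substitute the current build instructions and binary name from the `llama.cpp` README.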

---

## 📝 Model Description

### Fine-Tuning for Safety

**Axion-Lite-1B** originates from `google/gemma-3-1b-it`. The primary goal of this project was to enhance the model's safety alignment. The base model underwent **extensive red-team testing with advanced protocols** to significantly reduce the likelihood of generating harmful, unethical, biased, or unsafe content. This makes Axion-Lite-1B a more suitable choice for applications that require a higher degree of content safety and reliability.

### Quantization

The model is quantized to `Q5_K_M`, a method that provides a high-quality balance between perplexity (model accuracy) and file size. This makes it ideal for deployment in resource-constrained environments, such as on local machines, edge devices, or cost-effective cloud instances, without a significant drop in performance.
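
If you'd rather download the weights once and point your tools at a local path, here is a minimal sketch using `huggingface_hub` (the filename is assumed to match the file listed in this repository):

```python
from huggingface_hub import hf_hub_download

# Fetch the Q5_K_M GGUF file into the local Hugging Face cache.
model_path = hf_hub_download(
    repo_id="AdvRahul/Axion-Lite-1B-Q5_K_M-GGUF",
    filename="Axion-Lite-1B-Q5_K_M.gguf",
)

print(model_path)  # pass this path to llama.cpp via -m
```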

---

## ℹ️ Base Model Information (Gemma 3)

<details>
<summary>Click to expand details on the base model</summary>

Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. Gemma 3 models handle text input and generate text output, with open weights for both pre-trained and instruction-tuned variants. The `1B` model was trained on 2 trillion tokens of data.

### Training Data

The base model was trained on a text dataset that includes a wide variety of sources:

* **Web Documents:** A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary in over 140 languages.
* **Code:** Exposing the model to code helps it learn the syntax and patterns of programming languages.
* **Mathematics:** Training on mathematical text helps the model learn logical reasoning and symbolic representation.

### Data Preprocessing

The training data for the base model underwent rigorous cleaning and filtering, including:

* **CSAM Filtering:** Exclusion of child sexual abuse material.
* **Sensitive Data Filtering:** Automated techniques were used to filter out certain personal information and other sensitive data.
* **Content Quality Filtering:** Filtering based on content quality and safety, in line with Google's policies.

</details>

---

## ⚠️ Ethical Considerations and Limitations

While this model has been fine-tuned to enhance its safety, no language model is perfectly safe. It inherits the limitations of its base model, `gemma-3-1b-it`, and the data it was trained on.

* **Potential for Bias:** The model may still generate content that reflects societal biases present in the training data.
* **Factual Inaccuracy:** The model can "hallucinate" or generate incorrect or outdated information. It should not be used as a sole source of truth.
* **Not a Substitute for Human Judgment:** The outputs should be reviewed and validated, especially in sensitive or high-stakes applications.

Developers implementing this model should build additional safety mitigations and content moderation tools as part of a **defense-in-depth** strategy, tailored to their specific use case.
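
As one illustration of that layering, the sketch below wraps generation with input and output screening. `violates_policy` is a hypothetical placeholder; in practice you would plug in a real moderation classifier, API, or rule set:

```python
from llama_cpp import Llama

def violates_policy(text: str) -> bool:
    # Hypothetical placeholder: substitute a real content-moderation
    # model, service, or rule-based filter for your use case.
    banned_terms = ["example-banned-term"]
    lowered = text.lower()
    return any(term in lowered for term in banned_terms)

def safe_generate(llm: Llama, user_text: str) -> str:
    # Screen the request before it ever reaches the model.
    if violates_policy(user_text):
        return "Request declined by input filter."
    prompt = f"<start_of_turn>user\n{user_text}<end_of_turn>\n<start_of_turn>model"
    out = llm(prompt, max_tokens=256, stop=["<end_of_turn>"])
    reply = out["choices"][0]["text"]
    # Screen the model's reply before returning it to the user.
    if violates_policy(reply):
        return "Response withheld by output filter."
    return reply
```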

## Citing the Base Model

If you use this model, please consider citing the original Gemma 3 work:

```bibtex
@article{gemma_2025,
    title={Gemma 3},
    url={https://goo.gle/Gemma3Report},
    publisher={Kaggle},
    author={Gemma Team},
    year={2025}
}
```
|