ss-lab/Mistral-7b-v0.3-bnb-4bit-GGUF
Tags: GGUF · mistral · llama.cpp · conversational
Dataset: tatsu-lab/alpaca
License: mit
Available model files:
mistral-7b-v0.3.Q3_K_M.gguf
mistral-7b-v0.3.Q4_K_M.gguf

Note: The model's BOS token behavior was adjusted for GGUF compatibility. This model was finetuned and converted to GGUF format using Unsloth.
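The GGUF files above can be run with llama.cpp or its Python bindings. The snippet below is an illustrative sketch, not an official usage example from this card: it downloads the Q4_K_M file with huggingface_hub and runs a short completion with llama-cpp-python. The context size, sampling settings, and the Alpaca-style prompt layout (suggested by the training dataset listed further down) are assumptions.

```python
# Minimal sketch: fetch one of the GGUF files listed above and run it with
# llama-cpp-python (the Python bindings for llama.cpp).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="ss-lab/Mistral-7b-v0.3-bnb-4bit-GGUF",
    filename="mistral-7b-v0.3.Q4_K_M.gguf",
)

# n_ctx and the sampling settings are illustrative defaults, not values from the card.
llm = Llama(model_path=model_path, n_ctx=4096)

# Alpaca-style prompt format is an assumption based on the tatsu-lab/alpaca dataset.
out = llm(
    "### Instruction:\nExplain what a GGUF file is.\n\n### Response:\n",
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```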
Downloads last month: 20
GGUF details
Model size: 7B params
Architecture: llama
Chat template: included

Quantized files:
3-bit   Q3_K_M   3.52 GB
4-bit   Q4_K_M   4.37 GB
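As a rough guide for choosing between the two quantizations above, the sketch below picks the largest listed file that comfortably fits in available RAM before downloading it. This is not part of the model card: the file sizes come from the table, while the psutil check and the 1.5x headroom factor are assumptions.

```python
# Hypothetical helper: select the largest listed quantization that fits in RAM.
import psutil
from huggingface_hub import hf_hub_download

REPO_ID = "ss-lab/Mistral-7b-v0.3-bnb-4bit-GGUF"
QUANTS = [  # (filename, approximate size in bytes), largest first
    ("mistral-7b-v0.3.Q4_K_M.gguf", int(4.37 * 1024**3)),
    ("mistral-7b-v0.3.Q3_K_M.gguf", int(3.52 * 1024**3)),
]

def pick_quant(headroom: float = 1.5) -> str:
    """Return the filename of the largest quant that fits in available RAM."""
    available = psutil.virtual_memory().available
    for filename, size in QUANTS:
        if size * headroom <= available:
            return filename
    return QUANTS[-1][0]  # fall back to the smallest file

if __name__ == "__main__":
    chosen = pick_quant()
    path = hf_hub_download(repo_id=REPO_ID, filename=chosen)
    print(f"Downloaded {chosen} to {path}")
```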
Model tree for ss-lab/Mistral-7b-v0.3-bnb-4bit-GGUF:
Base model: mistralai/Mistral-7B-v0.3
  Quantized: unsloth/mistral-7b-v0.3-bnb-4bit
    Quantized (190): this model
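For reference, the intermediate 4-bit checkpoint in the tree above can be loaded directly with transformers. The sketch below is an assumption-laden illustration of that lineage, not a snippet from this card: it requires a CUDA GPU and the bitsandbytes package, and the prompt is arbitrary.

```python
# Minimal sketch of loading the bnb-4bit checkpoint from the model tree above.
# The GGUF files in this repo are the llama.cpp counterpart of this checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

base_4bit = "unsloth/mistral-7b-v0.3-bnb-4bit"

tokenizer = AutoTokenizer.from_pretrained(base_4bit)
model = AutoModelForCausalLM.from_pretrained(base_4bit, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```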
Dataset used to train ss-lab/Mistral-7b-v0.3-bnb-4bit-GGUF:
tatsu-lab/alpaca (updated May 22, 2023; about 52k rows)
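The fine-tuning dataset can be inspected with the `datasets` library, as in the sketch below. The field names (instruction / input / output) follow the standard Alpaca schema and are an assumption about this particular copy of the dataset.

```python
# A quick look at the fine-tuning dataset listed above.
from datasets import load_dataset

alpaca = load_dataset("tatsu-lab/alpaca", split="train")
print(len(alpaca))  # roughly 52k instruction-following examples

example = alpaca[0]
print(example["instruction"])
print(example["output"])
```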
Collection including ss-lab/Mistral-7b-v0.3-bnb-4bit-GGUF:
mistral (Models from Mistral, 1 item)