Active filters: 2-bit
MaziyarPanahi/Ministral-3-8B-Reasoning-2512-GGUF
Text Generation • 8B • Updated • 87 • 1

MaziyarPanahi/Trinity-Nano-Preview-GGUF
Text Generation • 6B • Updated • 52 • 1

MaziyarPanahi/Trinity-Mini-GGUF
Text Generation • 26B • Updated • 54.8k • 1

MaziyarPanahi/Hermes-4.3-36B-GGUF
Text Generation • 36B • Updated • 59.2k • 2

apthebest01931/rnj-1-instruct-mlx-2Bit
0.8B • Updated • 6

jesusoctavioas/Olmo-3-1125-32B-mlx-2Bit
Text Generation • 32B • Updated • 19

MaziyarPanahi/GLM-4.6V-Flash-GGUF
Text Generation • 9B • Updated • 56.1k • 2
Text Generation • 7B • Updated • 30 • 2
shubhamg2208/tomoro-ai-colqwen3-embed-4b-auto-round-w2a16
1B • Updated • 2

shubhamg2208/tomoro-ai-colqwen3-embed-4b-auto-round-w2a16g32
1B • Updated • 1

Matt300209/autoround_test
1B • Updated • 1

mradermacher/Fairy2i-W2-GGUF
Text Generation • 7B • Updated • 179

fifrio/Qwen3-8B-gptq-2bit-calibration-English-128samples
8B • Updated • 41

fifrio/Qwen3-8B-gptq-2bit-calibration-Indonesian-128samples
8B • Updated • 22

fifrio/Qwen3-8B-gptq-2bit-calibration-Tamil-128samples
8B • Updated • 20

fifrio/Qwen3-8B-gptq-2bit-calibration-Swahili-128samples
8B • Updated • 25

fifrio/Qwen3-8B-gptq-2bit-calibration-Chinese-128samples
8B • Updated • 22

mradermacher/Fairy2i-W2-i1-GGUF
Text Generation • 7B • Updated • 258

alexgusevski/Ministral-3-3B-Instruct-2512-q2-mlx
Text Generation • 0.3B • Updated • 14

alexgusevski/Ministral-3-3B-Reasoning-2512-q2-mlx
Text Generation • 0.3B • Updated • 17

alexgusevski/Ministral-3-8B-Instruct-2512-q2-mlx
Text Generation • 0.8B • Updated • 10

alexgusevski/Ministral-3-8B-Reasoning-2512-q2-mlx
Text Generation • 0.8B • Updated • 31

alexgusevski/Lightning-1.7B-q2-mlx
Text Generation • 0.2B • Updated • 6

fifrio/Qwen3-4B-gptq-2bit-calibration-English-128samples
4B • Updated • 78

fifrio/Qwen3-4B-gptq-2bit-calibration-Indonesian-128samples
4B • Updated • 35

fifrio/Qwen3-4B-gptq-2bit-calibration-Tamil-128samples
4B • Updated • 43

fifrio/Qwen3-4B-gptq-2bit-calibration-Swahili-128samples
4B • Updated • 42

fifrio/Qwen3-4B-gptq-2bit-calibration-Chinese-128samples
4B • Updated • 54

mlx-community/YandexGPT-5-Lite-8B-instruct-q2
Text Generation • 0.8B • Updated • 104 • 2

fifrio/Qwen3-1.7B-gptq-2bit-calibration-English-128samples
2B • Updated • 47