This model, `finding1/GLM-4.5-MLX-8.5bpw`, was converted to MLX format from `zai-org/GLM-4.5` using mlx-lm version 0.26.3 with `mlx_lm.convert --quantize --q-bits 8 --mlx-path MLX-8.5bpw --hf-path zai-org/GLM-4.5`.
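A minimal usage sketch for running the converted model with mlx-lm (this assumes an Apple silicon machine with enough memory for the 8-bit weights, and that the repo name above is available on the Hub):

```python
from mlx_lm import load, generate

# Download and load the quantized MLX weights and tokenizer.
model, tokenizer = load("finding1/GLM-4.5-MLX-8.5bpw")

prompt = "Hello"

# Apply the chat template if the tokenizer provides one.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```

The same model can also be served from the command line with `mlx_lm.generate --model finding1/GLM-4.5-MLX-8.5bpw --prompt "Hello"`.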

Format: MLX (Safetensors)
Model size: 353B params
Tensor types: BF16, U32, F32

Quantization: 8-bit


Model tree for finding1/GLM-4.5-MLX-8.5bpw

Base model: zai-org/GLM-4.5 (this model is a quantization of the base model)