lingyezhixing
AI & ML interests
None yet
Recent Activity
new activity 13 days ago
unsloth/Qwen3.5-122B-A10B-GGUF: Q3 quantization performance issues
new activity about 1 month ago
ubergarm/Qwen3.5-122B-A10B-GGUF: Missing about 50~55GB of Q3?
Organizations
None yet
Q3 quantization performance issues
2
#7 opened about 1 month ago
by lingyezhixing
Missing about 50~55GB of Q3?
5
#7 opened about 1 month ago
by lingyezhixing
Update: Should now be Fixed - Bug in UD-Q4_K_XL recipe using MXFP4 for attn tensors and experts?
👍 8
26
#5 opened about 1 month ago
by ubergarm
AWQ format for vLLM?
2
#2 opened 2 months ago
by Laoxu
Please regenerate to adapt to the latest improvements in llama.cpp
🔥 1
1
#4 opened 3 months ago
by lingyezhixing
Where are the IQ quants?
5
#1 opened 4 months ago
by lingyezhixing
IQ4_XS, please
3
#6 opened 4 months ago
by lingyezhixing
Will there be AutoAWQ or GPTQ quants of the 14B model this time?
1
#2 opened 5 months ago
by lingyezhixing
Will there still be 32B dense models?
👍 8
2
#18 opened 8 months ago
by lingyezhixing
Hello, I want to know: will the draft model reduce generation quality?
1
#2 opened 7 months ago
by lingyezhixing
Smashed 💪 Scored to 82.86 🔥 2bit IQ2_M on MMLU Pro single shot benchmark
❤️🔥 2
5
#7 opened 8 months ago
by xbruce22
There must be something wrong with the size
👍 2
2
#8 opened 11 months ago
by lingyezhixing
Native FP4 seems to make quantization meaningless
3
#7 opened 8 months ago
by lingyezhixing
Can you provide some low-precision quantization options?
👍 3
11
#3 opened 8 months ago
by lingyezhixing
Is the GGUF file still being uploaded?
👍 2
3
#2 opened 9 months ago
by lingyezhixing
FastLLM support?
#17 opened 9 months ago
by lingyezhixing
There must be something wrong with the size
#7 opened 11 months ago
by lingyezhixing
Looking forward to larger-scale RP models
#10 opened 12 months ago
by lingyezhixing
Could you provide Q6_K quantization files?
🔥 2
3
#10 opened about 1 year ago
by lingyezhixing