| Topic | Replies | Views | Activity |
|---|---|---|---|
| ImportError for function find_pruneable_heads_and_indices | 1 | 15 | March 16, 2026 |
| Transformers.js: Retrieving the size of models in MB/GB before running | 1 | 6 | March 16, 2026 |
| Purpose of commit_hash in PreTrainedModel.from_pretrained | 1 | 16 | March 16, 2026 |
| How DEoT Makes LLMs Think: A New Framework for Open-Ended Reasoning | 2 | 10 | March 15, 2026 |
| AutoModel with ClinicalBERT gives UNEXPECTED warning | 3 | 24 | March 13, 2026 |
| Are biofoundation models actually used in practice and how helpful they are? | 0 | 6 | March 10, 2026 |
| Overfitting in BERT IMDB50k | 2 | 1142 | March 6, 2026 |
| LLM Course code errors | 7 | 89 | March 6, 2026 |
| Different output when we inference through packing with flash attention in bf16 | 1 | 11 | March 6, 2026 |
| Why are gradient_checkpointing and training bound? | 2 | 21 | March 2, 2026 |
| Wave Field LLM — O(n log n) attention via wave equation dynamics, within 5% of standard transformer | 2 | 4909 | March 2, 2026 |
| Attentions not returned from transformers ViT model when using output_attentions=True | 5 | 1219 | March 2, 2026 |
| Using hyperparameter-search in Trainer | 102 | 38923 | March 2, 2026 |
| Issue with summarization and translation pipeline | 3 | 36 | March 2, 2026 |
| Is LLaMA rotary embedding implementation correct? | 8 | 9559 | February 26, 2026 |
| Gemma 3 12B: 4-bit Quantization failing/ignored in Transformers v5.1.0 (Gemma3ForConditionalGeneration) | 10 | 137 | February 23, 2026 |
| [Help Needed] Dual-Phase Softmax Steering on Llama-2 Residual Stream Yields Identical POPE Results | 3 | 33 | February 23, 2026 |
| [Research/Discussion] Depth-agnostic stability for residual models (no extra norms, no tuning). Is this useful to you? | 1 | 25 | February 22, 2026 |
| LLaVA Steering: Why does grounding fix hallucinations in captioning but not in Yes/No QA? | 1 | 32 | February 19, 2026 |
| KV Caching problem with gemma 3 | 2 | 58 | February 17, 2026 |
| Num_beam_groups removed in V5? | 1 | 52 | February 14, 2026 |
| [LLaVA-1.5] Implementing Control Barrier Functions (LCBF) via Attention Hooking – Persistent AttributeError: 'LlamaAttention' object has no attribute 'rotary_emb' | 4 | 20 | February 13, 2026 |
| Error while importing "Trainer" | 1 | 105 | February 13, 2026 |
| [LLaVA-1.5] Very low hallucination rate & weak attention correlation in "Attention Gap" experiment – Is my implementation of output_attentions correct? | 4 | 28 | February 12, 2026 |
| Confusion with freezing Whisper's feature encoder | 3 | 28 | February 11, 2026 |
| When using Whisper, pipeline notifies that generation_config default values have been modified, even for base models | 4 | 57 | February 8, 2026 |
| Hyperparameters vs message format prompt tuning | 2 | 31 | February 6, 2026 |
| SFT Conversation llama3-8b-Instruct fails with assistant_only_loss=True | 2 | 112 | February 5, 2026 |
| How to train T5 to distinguish task-relevant tokens from contextual noise? | 1 | 21 | February 5, 2026 |
| Finetuning whisper attention mask not set and canot be inferred | 5 | 6211 | February 4, 2026 |