This is an abliterated version of the Irix-12B-Model_Stock fine-tune, produced at DreadPoor's request with p-e-w's Heretic (v1.1.0) abliteration engine, with Magnitude-Preserving Orthogonal Ablation enabled.
Note: The model was generated with Transformers v5.1.0.
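The core idea behind orthogonal ablation is to project the model's "refusal direction" out of selected weight matrices; the magnitude-preserving variant additionally restores the weights' original scale afterward. Below is a minimal numpy sketch of that idea, not Heretic's actual implementation; in particular, restoring magnitude via Frobenius-norm rescaling is an assumption made for illustration.

```python
import numpy as np

def magnitude_preserving_ablation(W, refusal_dir, weight=1.0):
    """Sketch of magnitude-preserving orthogonal ablation.

    W           : (out_features, in_features) weight matrix
    refusal_dir : (out_features,) direction to ablate in output space
    weight      : per-layer ablation strength (cf. the per-projection
                  max/min weights reported in the results table)
    """
    d = refusal_dir / np.linalg.norm(refusal_dir)
    # Orthogonal ablation: remove the output-space component along d,
    # scaled by the per-layer strength. With weight=1, d.T @ W_ablated = 0.
    W_ablated = W - weight * np.outer(d, d) @ W
    # Magnitude preservation (assumed here as a global Frobenius-norm
    # rescale): restore the overall weight magnitude after projection.
    return W_ablated * (np.linalg.norm(W) / np.linalg.norm(W_ablated))
```

With `weight=1.0`, the ablated matrix can no longer write anything along the refusal direction, while its overall magnitude matches the original.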
## Heretication Results
| Score Metric | Value | Parameter | Value |
|---|---|---|---|
| Refusals | 2/100 | direction_index | per layer |
| KL Divergence | 0.0136 | attn.o_proj.max_weight | 1.66 |
| Initial Refusals | 92/100 | attn.o_proj.max_weight_position | 23.85 |
| | | attn.o_proj.min_weight | 0.51 |
| | | attn.o_proj.min_weight_distance | 14.19 |
| | | mlp.down_proj.max_weight | 1.06 |
| | | mlp.down_proj.max_weight_position | 37.65 |
| | | mlp.down_proj.min_weight | 0.68 |
| | | mlp.down_proj.min_weight_distance | 23.18 |
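The KL divergence score above measures how far the abliterated model's next-token distribution has drifted from the original model's (lower is better; 0.0136 indicates minimal corruption). As a small illustration of the metric itself, here is a hedged numpy sketch of KL divergence between two next-token distributions; this is the standard definition, not Heretic's exact evaluation harness.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a vector of logits.
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q): expected extra surprise when using q in place of p.
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))
```

Identical distributions give a KL of exactly 0; any drift between the original and abliterated models' outputs pushes it above 0.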
## Degree of Heretication
The Heresy Index weighs the model's corruption by the process (KL divergence from the original model) against its abolition of doctrine (remaining refusals) to render a final classification verdict.
Note: This is an arbitrary classification inspired by Warhammer 40K; it carries no tangible indication of the model's performance.
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the Model Stock merge method, with yamatazen/EtherealAurora-12B-v2 as the base.
### Models Merged
The following models were included in the merge:
- DreadPoor/Faber-12-Model_Stock
- ohyeah1/Violet-Lyra-Gutenberg-v2
- yamatazen/EtherealAurora-12B-v3
- redrix/patricide-12B-Unslop-Mell-v2
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: DreadPoor/Faber-12-Model_Stock
  - model: ohyeah1/Violet-Lyra-Gutenberg-v2
  - model: redrix/patricide-12B-Unslop-Mell-v2
  - model: yamatazen/EtherealAurora-12B-v3
merge_method: model_stock
base_model: yamatazen/EtherealAurora-12B-v2
normalize: false
int8_mask: true
dtype: bfloat16
```