---
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B/raw/main/LICENSE
library_name: transformers
language:
  - en
tags:
  - chat
  - conversational
base_model:
  - Qwen/Qwen2.5-72B
  - Skywork/MindLink-72B-0801
  - Unbabel/Tower-Plus-72B
  - EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2
pipeline_tag: text-generation
---


GGUF iMat

# Eva Mindlink 72B

Eva Mindlink 72B is a normalized, denoised Fourier interpolation of the following models:

```yaml
output_base_model: "Qwen/Qwen2.5-72B"
output_dtype: "bfloat16"
finetune_merge:
  - { "model": "Skywork/MindLink-72B-0801", "base": "Qwen/Qwen2.5-72B", "alpha": 0.9, "is_input": true }
  - { "model": "Unbabel/Tower-Plus-72B", "base": "Qwen/Qwen2.5-72B", "alpha": 0.5 }
  - { "model": "EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2", "base": "Qwen/Qwen2.5-72B", "alpha": 0.8, "is_output": true }
```

In other words, the fine-tuned models are warped and interpolated in signal space, then merged back on top of the base model (in this case, Qwen2.5-72B), with the input layer taken from MindLink-72B-0801 and the output layer taken from EVA-Qwen2.5-72B-v0.2.
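The per-tensor merge described above can be sketched roughly as follows. This is a hypothetical illustration only, not the actual merge code: the function name, the median-magnitude denoising threshold, and the use of a 2-D FFT per weight matrix are all assumptions made for the sake of the example.

```python
import numpy as np

def fourier_merge(base, finetunes, alphas):
    # Hypothetical sketch: move each fine-tune's delta into frequency
    # space, denoise it, combine with alpha weights, normalize, and add
    # the result back onto the base weights.
    merged = np.zeros(base.shape, dtype=np.complex128)
    for weights, alpha in zip(finetunes, alphas):
        delta = weights - base             # task vector relative to base
        spec = np.fft.fft2(delta)          # warp into signal space
        mag = np.abs(spec)
        cutoff = np.quantile(mag, 0.5)     # assumed denoising threshold
        spec = np.where(mag >= cutoff, spec, 0.0)
        merged += alpha * spec             # alpha-weighted interpolation
    merged /= sum(alphas)                  # normalize the combined signal
    return base + np.real(np.fft.ifft2(merged))
```

With the alphas from the config above (0.9, 0.5, 0.8), each fine-tune's frequency content contributes in proportion to its weight before the combined delta is transformed back and applied to the Qwen2.5-72B base.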

## Citation

If you find our work helpful, feel free to cite us:

```bibtex
@misc{eva-mindlink-72b,
    title = {Eva Mindlink 72B},
    url = {https://huggingface.co/maldv/Eva-Mindlink-72B},
    author = {Praxis Maldevide},
    month = {August},
    year = {2025}
}
```