How to use with SGLang
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "Youlln/ECE-MIRAGE-3" \
    --host 0.0.0.0 \
    --port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "Youlln/ECE-MIRAGE-3",
		"messages": [
			{
				"role": "user",
				"content": "What is the capital of France?"
			}
		]
	}'
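Since the endpoint is OpenAI-compatible, you can also call it from Python. A minimal sketch, assuming the openai package is installed (pip install openai) and the server is running on localhost:30000 as above:

from openai import OpenAI

# Point the client at the local SGLang server; a default local deployment
# does not check the API key, so any placeholder string works.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Youlln/ECE-MIRAGE-3",
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ],
)
print(response.choices[0].message.content)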
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "Youlln/ECE-MIRAGE-3" \
        --host 0.0.0.0 \
        --port 30000
# The server exposes the same OpenAI-compatible API; call it with the same
# curl command shown above.
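For long generations you can also stream tokens as they are produced. A minimal Python sketch, assuming the same local server and the openai package:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

# stream=True yields incremental chunks instead of one final message.
stream = client.chat.completions.create(
    model="Youlln/ECE-MIRAGE-3",
    messages=[{"role": "user", "content": "Explain SLERP in one paragraph."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()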
ECE-MIRAGE-3

ECE-MIRAGE-3 is a merged language model developed at ECE (École d'Ingénieurs) using the SLERP (Spherical Linear Interpolation) merge method. It combines the strengths of rombodawg/Rombos-LLM-V2.5-Qwen-32b and Sakalti/ultiima-32B to deliver improved performance on complex natural language processing (NLP) tasks.

Features

  • Merge method: SLERP (Spherical Linear Interpolation); see the sketch after the configuration below.
  • Source models:
    • rombodawg/Rombos-LLM-V2.5-Qwen-32b
    • Sakalti/ultiima-32B
  • Precision: bfloat16 for fast, efficient computation.
  • Applications:
    • Mathematical reasoning.
    • Contextual understanding.
    • Instruction following.

Configuration

slices:
  - sources:
      - model: rombodawg/Rombos-LLM-V2.5-Qwen-32b
        layer_range: [0, 64]
      - model: Sakalti/ultiima-32B
        layer_range: [0, 64]
merge_method: slerp
base_model: rombodawg/Rombos-LLM-V2.5-Qwen-32b
parameters:
  t:
    - filter: self_attn
      value: [0, 0.25, 0.5, 0.75, 1]
    - filter: mlp
      value: [1, 0.75, 0.5, 0.25, 0]
    - value: 0.5
dtype: bfloat16
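
For intuition, SLERP interpolates between two models' weight tensors along an arc on the sphere rather than along a straight line, and the t schedule above varies the blend per layer group (self_attn vs. mlp). A minimal, illustrative NumPy sketch of the core formula; this is not the mergekit implementation:

import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    # Use normalized copies only to measure the angle between the tensors.
    dot = np.clip(
        np.dot(v0 / (np.linalg.norm(v0) + eps), v1 / (np.linalg.norm(v1) + eps)),
        -1.0, 1.0,
    )
    theta = np.arccos(dot)           # angle between the two weight vectors
    if theta < eps:                  # nearly parallel: plain lerp is fine
        return (1.0 - t) * v0 + t * v1
    # Spherical interpolation: follow the arc, not the chord.
    return (np.sin((1.0 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# t=0 returns the first model's weights, t=1 the second's, 0.5 an even blend.
a, b = np.random.randn(4096), np.random.randn(4096)
print(slerp(0.5, a, b).shape)  # (4096,)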