Severe RAM Leak and Crash on Second Run with FP8 .safetensors Version in ComfyUI

#9 · opened by sunnyyy

Hello, and thank you for your amazing work on this model.

I'm writing to report a critical memory leak issue specifically related to the FP8 .safetensors version of the Remix model when used in ComfyUI. This issue makes the model unusable for any sequential or batch processing.

The first video generation with the model completes successfully. However, this first run causes a severe and non-recoverable RAM leak, leaving the system's memory usage permanently high even after the generation is finished.

Attempting a second generation invariably leads to a RAM Out-Of-Memory (OOM) crash: ComfyUI tries to load the model again while RAM is still occupied by the first run's leaked memory, which exceeds the system's total capacity.

[Screenshots: system memory usage, 2025-10-15 21-28-55 and 21-29-04]
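If anyone wants to reproduce these readings, here is a minimal sketch that polls system RAM around two consecutive runs. It assumes `psutil` is installed and that the workflow is queued manually from the ComfyUI UI; the numbers it prints are only illustrative of where the leak shows up.

```python
# Minimal sketch for documenting the leak: record system RAM before the first
# run, after it finishes, and while the second run is loading the model.
import time
import psutil

def ram_used_gb():
    vm = psutil.virtual_memory()
    return (vm.total - vm.available) / 1024**3  # RAM currently in use, in GiB

print(f"baseline:         {ram_used_gb():.1f} GiB")
input("Run the first generation in ComfyUI, then press Enter...")
time.sleep(10)  # give ComfyUI a moment to (ideally) release the model
print(f"after first run:  {ram_used_gb():.1f} GiB  # stays high with the FP8 .safetensors")
input("Queue the second generation, then press Enter...")
print(f"second model load: {ram_used_gb():.1f} GiB  # keeps climbing until the OOM crash")
```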

Crucially, this issue does not occur with other versions of the model, such as the wan2.2.gguf (Q8_0) model, which runs perfectly stably in the exact same workflow, demonstrating correct memory loading and unloading.

Affected file: Wan2.2_Remix_NSFW_t2v_14b_*_lighting_v1.0_dyno.safetensors (this model)

Please update to version 2.0. I tested it personally and did not encounter this issue. If you still run into these problems, you can try using my workflow.
Online Workflow: Wan2.2-Remix-I2V-Comfy-Qwen3
Experience here: https://www.runninghub.ai/post/1986632318448267265/?inviteCode=rh -v1325

I also ran into a severe memory leak on a machine with 32 GB of RAM. The issue makes the model unusable for any sequential or batch processing. The model I used is Wan2.2_Remix_NSFW_i2v_14b_high_lighting_v2.0.

I suspect ComfyUI's memory management is to blame: running flux2dev also leaks badly, with RAM and VRAM constantly thrashing data back and forth between each other.
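If it really is ComfyUI's model caching holding on to memory, one thing worth trying between runs is asking the server to unload models and free memory explicitly. Recent ComfyUI builds expose a `/free` endpoint for this; the host, port, and flags below are assumptions, so adjust them to your setup and skip this if your build does not have the endpoint.

```python
# Hedged workaround sketch: request that ComfyUI drop cached models and free
# memory before queueing the next generation.
import json
import urllib.request

def free_comfyui_memory(host="127.0.0.1", port=8188):
    payload = json.dumps({"unload_models": True, "free_memory": True}).encode()
    req = urllib.request.Request(
        f"http://{host}:{port}/free",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print("free request status:", resp.status)

if __name__ == "__main__":
    free_comfyui_memory()
```

If RAM usage still does not drop after this call, that would point more strongly at a leak in the model loading path rather than at the cache policy.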
