Is the cap_pad_token backwards?

#10
by quinnlybacon - opened

I'm getting an error related to the cap_pad_token when trying to use these models in SwarmUI on my Mac. It looks like the dimensions are swapped compared to other models. I'm pretty new to image models, so I'm not sure what this variable does.

Here's some of the output:

[STDERR] RuntimeError: Error(s) in loading state_dict for NextDiT:
2025-12-10 13:42:15.690 [ComfyUI-0] [STDERR] size mismatch for x_pad_token: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([1, 3840]).
2025-12-10 13:42:15.690 [ComfyUI-0] [STDERR] size mismatch for cap_pad_token: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([1, 3840]).

I think it may be the same issue that this person is having?

https://github.com/city96/ComfyUI-GGUF/issues/379
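For context on what the traceback means: the checkpoint stores x_pad_token and cap_pad_token as 1-D tensors of shape [3840], while the model expects 2-D tensors of shape [1, 3840], so load_state_dict refuses them. A minimal sketch of the kind of reshape that resolves such a mismatch (the function name pad_token_to_2d is mine for illustration; it is not the actual ComfyUI-GGUF fix):

```python
import torch

def pad_token_to_2d(state_dict, keys=("x_pad_token", "cap_pad_token")):
    """Unsqueeze 1-D pad-token tensors to [1, dim] so load_state_dict accepts them."""
    for key in keys:
        tensor = state_dict.get(key)
        if tensor is not None and tensor.dim() == 1:
            state_dict[key] = tensor.unsqueeze(0)  # [3840] -> [1, 3840]
    return state_dict

# Example with the dimensions from the error message:
sd = {"x_pad_token": torch.zeros(3840), "cap_pad_token": torch.zeros(3840)}
sd = pad_token_to_2d(sd)
print(sd["cap_pad_token"].shape)  # torch.Size([1, 3840])
```

In practice you should not need to patch this by hand: as noted below, updating the ComfyUI-GGUF node picks up the upstream fix.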

Sorry, I can’t reproduce this error. I'm on latest comfy and everything works fine here.


pytorch version: 2.10.0.dev20251101+cu128
Enabled fp16 accumulation.
Set vram state to: NORMAL_VRAM
Using async weight offloading with 2 streams
Enabled pinned memory 22085.0
working around nvidia conv3d memory bug.
Using sage attention
Python version: 3.11.5 (tags/v3.11.5:cce6ba9, Aug 24 2023, 14:38:34) [MSC v.1936 64 bit (AMD64)]
ComfyUI version: 0.4.0
ComfyUI frontend version: 1.34.8
[Prompt Server] web root: F:\AI\ComfyUI-Nightly\ComfyUI\venv\Lib\site-packages\comfyui_frontend_package\static

Afaik, SwarmUI uses the Comfy engine for the backend, right? So it should be fine and behave the same as running it directly on Comfy.
Make sure you're on the latest/nightly Comfy and nightly Comfy-GGUF node. Sometimes, if you use ComfyUI-Manager to update the node/ComfyUI, you need to switch from base to nightly to get the latest changes.
I also saw many people using Comfy + GGUF on their Macs, and it seems to work fine. You might want to try reinstalling your Comfy too.

I had this issue and just fixed it by switching to this branch of the ComfyUI-GGUF repo

https://github.com/city96/ComfyUI-GGUF/pull/392

git fetch origin pull/392/head:pr392
git checkout pr392

After restarting ComfyUI, the problem was fixed!

Thanks for the help, guys!! Also, I know this is unrelated, but give the Quran a read :) 💚

Closing this, as it was an issue with my llama-quantizer compilation.
