Error after model updated three hours ago
Hello, I was working on my project that uses colmodernvbert, and as I re-ran one of my notebooks I suddenly started getting the error below after what looked like new model weights were downloaded. I have been trying to fix it on my own, including using previous revisions of this repo, but I can't seem to resolve it. The problem started immediately after new commits were pushed today, so I suspect it is related.
Any ideas on what might be going on and how I might fix it? Thank you!
--> 524 self._load_model_and_processor()
525 unprocessed = [item for item in self.image_embeddings if item[1] is None]
526 if not unprocessed:
File ~/Documents/project/model.py:154, in LitePali._load_model_and_processor(self)
152 def _load_model_and_processor(self):
153 if self.model is None or self.processor is None:
--> 154 self.model = ColModernVBert.from_pretrained(
155 self.model_name, torch_dtype=torch.bfloat16, device_map=self.device
156 ).eval()
157 self.processor = ColModernVBertProcessor.from_pretrained(self.model_name)
File ~/anaconda3/envs/milvus/lib/python3.13/site-packages/transformers/modeling_utils.py:311, in restore_default_torch_dtype.<locals>._wrapper(*args, **kwargs)
309 old_dtype = torch.get_default_dtype()
310 try:
--> 311 return func(*args, **kwargs)
312 finally:
313 torch.set_default_dtype(old_dtype)
File ~/anaconda3/envs/milvus/lib/python3.13/site-packages/transformers/modeling_utils.py:4766, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, weights_only, *model_args, **kwargs)
4758 config = cls._autoset_attn_implementation(
4759 config,
4760 torch_dtype=torch_dtype,
4761 device_map=device_map,
4762 )
4764 with ContextManagers(model_init_context):
4765 # Let's make sure we don't run the init function of buffer modules
-> 4766 model = cls(config, *model_args, **model_kwargs)
4768 # Make sure to tie the weights correctly
4769 model.tie_weights()
File ~/anaconda3/envs/milvus/lib/python3.13/site-packages/colpali_engine/models/modernvbert/colvbert/modeling_colmodernvbert.py:24, in ColModernVBert.__init__(self, config, mask_non_image_embeddings, **kwargs)
22 def __init__(self, config, mask_non_image_embeddings: bool = False, **kwargs):
23 super().__init__(config=config)
---> 24 self.model = ModernVBertModel(config, **kwargs)
25 self.dim = 128
26 self.custom_text_proj = nn.Linear(self.model.config.text_config.hidden_size, self.dim)
File ~/anaconda3/envs/milvus/lib/python3.13/site-packages/colpali_engine/models/modernvbert/modeling_modernvbert.py:237, in ModernVBertModel.__init__(self, config)
235 self.vision_model = ModernVBertModel.init_vision_model(config)
236 self.connector = ModernVBertConnector(config)
--> 237 self.text_model = ModernVBertModel.init_language_model(config)
238 self.image_seq_len = int(
239 ((config.vision_config.image_size // config.vision_config.patch_size) ** 2) / (config.scale_factor**2)
240 )
241 self.image_token_id = config.image_token_id
File ~/anaconda3/envs/milvus/lib/python3.13/site-packages/colpali_engine/models/modernvbert/modeling_modernvbert.py:272, in ModernVBertModel.init_language_model(config)
262 text_model_config = AutoConfig.from_pretrained(
263 config.text_config.text_model_name,
264 _attn_implementation=config._attn_implementation,
265 trust_remote_code=True,
266 )
267 text_model = AutoModel.from_config(text_model_config, trust_remote_code=True)
268 embed_layer = DecoupledEmbedding(
269 num_embeddings=text_model_config.vocab_size,
270 num_additional_embeddings=config.additional_vocab_size,
271 embedding_dim=config.hidden_size,
--> 272 partially_freeze=config.freeze_config["freeze_text_layers"],
273 padding_idx=config.pad_token_id,
274 )
275 text_model.set_input_embeddings(embed_layer)
276 return text_model
TypeError: 'NoneType' object is not subscriptable
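For context, the traceback suggests the updated checkpoint's config no longer carries a `freeze_config` entry, so the attribute resolves to `None` and the older modeling code's dict lookup fails. A minimal sketch of that failure mode, using a hypothetical stand-in `Config` class (not the real library config):

```python
# Hypothetical stand-in for the model config: the updated checkpoint no
# longer ships a "freeze_config" entry, so the attribute is None.
class Config:
    freeze_config = None  # entry absent from the new config

config = Config()
try:
    # Mirrors the failing line in modeling_modernvbert.py
    config.freeze_config["freeze_text_layers"]
except TypeError as exc:
    print(exc)  # 'NoneType' object is not subscriptable
```

This is why pinning old weights alone did not help: the installed modeling code and the published config had drifted apart during the migration.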
Hey @droptile, can you update colpali-engine by cloning the latest version? I think everything should be fine now.
Edit: we just published a new colpali-engine version on PyPI.
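Either route should work; as a sketch (the GitHub repo URL is assumed, not stated in this thread):

```shell
# Upgrade to the newly published release from PyPI
pip install --upgrade colpali-engine

# Or install the latest code straight from source (repo URL assumed)
pip install "git+https://github.com/illuin-tech/colpali.git"
```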
Exactly, we were migrating to the transformers modeling code. There was a delay between the moment we updated the weight keys and the modeling code in the colpali repo.
Can you tell us if everything is fine now?
Thanks for the reply @QuentinJG and @paultltc !
Yes, I just tried it again with colpali-engine v3.15 and everything is working again. Thanks!