I cloned the Video-LLaVA repository to fine-tune video-llava-7b-hf. After cloning, I installed the required Python packages and ran the scripts/v1.5/finetune_lora.sh script, but it failed with the following error:
FileNotFoundError: [Errno 2] No such file or directory: './checkpoints/videollava-7b-pretrain/mm_projector.bin'
Can anyone help me resolve this issue? Thanks in advance!
```
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:19<00:00, 9.71s/it]
Adding LoRA adapters...
Traceback (most recent call last):
  File "/workspace/Video-LLaVA/videollava/train/train_mem.py", line 13, in <module>
    train()
  File "/workspace/Video-LLaVA/videollava/train/train.py", line 1003, in train
    model.get_model().initialize_vision_modules(
  File "/workspace/Video-LLaVA/videollava/model/llava_arch.py", line 118, in initialize_vision_modules
    mm_projector_weights = torch.load(pretrain_mm_mlp_adapter, map_location='cuda:0')
  File "/workspace/Video-LLaVA/VL7b_ftEnv/lib/python3.10/site-packages/torch/serialization.py", line 791, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "/workspace/Video-LLaVA/VL7b_ftEnv/lib/python3.10/site-packages/torch/serialization.py", line 271, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "/workspace/Video-LLaVA/VL7b_ftEnv/lib/python3.10/site-packages/torch/serialization.py", line 252, in __init__
    super().__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: './checkpoints/videollava-7b-pretrain/mm_projector.bin'
[2025-02-24 11:44:59,757] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 116598
[2025-02-24 11:44:59,757] [ERROR] [launch.py:321:sigkill_handler] ['/workspace/Video-LLaVA/VL7b_ftEnv/bin/python3', '-u', 'videollava/train/train_mem.py', '--local_rank=0', '--lora_enable', 'True', '--lora_r', '128', '--lora_alpha', '256', '--mm_projector_lr', '2e-5', '--deepspeed', './scripts/zero2_offload.json', '--model_name_or_path', '/workspace/Video-LLaVA/vicuna-7b-v1.5/models--lmsys--vicuna-7b-v1.5/snapshots/3321f76e3f527bd14065daf69dad9344000a201d', '--version', 'v1', '--data_path', '/workspace/Video-LLaVA/datasets/videochatgpt_tune_.json', '/workspace/Video-LLaVA/datasets/nlp_tune.json', '--video_folder', '/workspace/Video-LLaVA/datasets/videos', '--video_tower', '/workspace/Video-LLaVA/LanguageBind_Video_merge', '--mm_projector_type', 'mlp2x_gelu', '--pretrain_mm_mlp_adapter', './checkpoints/videollava-7b-pretrain/mm_projector.bin', '--mm_vision_select_layer', '-2', '--mm_use_im_start_end', 'False', '--mm_use_im_patch_token', 'False', '--image_aspect_ratio', 'pad', '--group_by_modality_length', 'True', '--bf16', 'False', '--fp16', 'True', '--output_dir', './checkpoints/videollava-7b-lora', '--num_train_epochs', '1', '--per_device_train_batch_size', '2', '--per_device_eval_batch_size', '2', '--gradient_accumulation_steps', '1', '--evaluation_strategy', 'no', '--save_strategy', 'steps', '--save_steps', '50000', '--save_total_limit', '1', '--learning_rate', '2e-4', '--weight_decay', '0.', '--warmup_ratio', '0.03', '--lr_scheduler_type', 'cosine', '--logging_steps', '1', '--tf32', 'False', '--model_max_length', '2048', '--tokenizer_model_max_length', '3072', '--gradient_checkpointing', 'True', '--dataloader_num_workers', '4', '--lazy_preprocess', 'True', '--report_to', 'tensorboard', '--cache_dir', './cache_dir'] exits with return code = 1
```
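The traceback shows torch.load failing because the repository does not ship the pretrained projector weights; the file referenced by `--pretrain_mm_mlp_adapter` has to be obtained separately and placed at that path (or the flag pointed at wherever it actually lives). A minimal sketch for verifying the expected path before launching training; the Hugging Face repo id in the comment is an assumption, so check the project README for the actual download source:

```python
import os

# Path the training script passes via --pretrain_mm_mlp_adapter
# (taken from the failing command line above).
ADAPTER_PATH = "./checkpoints/videollava-7b-pretrain/mm_projector.bin"

def check_projector(path: str) -> bool:
    """Return True if the pretrained mm_projector weights file exists."""
    if os.path.isfile(path):
        return True
    print(f"Missing projector weights: {path}")
    return False

if not check_projector(ADAPTER_PATH):
    # Hypothetical fix, assuming the pretrain checkpoint is published on the
    # Hugging Face Hub under this repo id (verify before relying on it):
    # from huggingface_hub import hf_hub_download
    # hf_hub_download(repo_id="LanguageBind/Video-LLaVA-Pretrain-7B",
    #                 filename="mm_projector.bin",
    #                 local_dir="./checkpoints/videollava-7b-pretrain")
    pass
```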