Qwen2-VL raises an error during inference when the image resolution is high #5773

Closed
1 task done
xyf0106 opened this issue Oct 22, 2024 · 2 comments
Labels
solved This problem has been already solved

Comments


xyf0106 commented Oct 22, 2024

Reminder

  • I have read the README and searched the existing issues.

System Info

  • llamafactory version: 0.9.0
  • Platform: Linux-5.4.143-2-velinux1-amd64-x86_64-with-glibc2.35
  • Python version: 3.10.15
  • PyTorch version: 2.4.1+cu121 (GPU)
  • Transformers version: 4.45.2
  • Datasets version: 2.19.1
  • Accelerate version: 0.34.2
  • PEFT version: 0.12.0
  • TRL version: 0.9.6
  • GPU type: NVIDIA A100-SXM4-80GB
  • DeepSpeed version: 0.15.2

Reproduction

FORCE_TORCHRUN=1 CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/inference/Qwen2_vl_lora_sft.yaml

The contents of Qwen2_vl_lora_sft.yaml are as follows:

```yaml
model_name_or_path: Qwen/Qwen2-VL-7B-Instruct/
adapter_name_or_path: saves/Qwen2VL-7B-Instruct/lora/train_2024-10-21-v2/checkpoint-2000/

### method
stage: sft
do_predict: true
finetuning_type: lora

### dataset
eval_dataset: item_satisfy_data_1001_temp
template: qwen2_vl
cutoff_len: 1024
max_samples: 2
overwrite_cache: true
preprocessing_num_workers: 1

### output
output_dir: /mnt/nas/xuyufan/LLaMA-Factory/output/Qwen2-VL-7B-Instruct/train_2024-10-21/checkpoint-2000
overwrite_output_dir: true

### eval
per_device_eval_batch_size: 1
predict_with_generate: true
ddp_timeout: 180000000
```

Expected behavior

Run inference with the fine-tuned model.

Others

There are two problems:

  1. When the image size is large, inference fails with: RuntimeError: shape mismatch: value tensor of shape [3, 1049] cannot be broadcast to indexing result of shape [3, 1007]
  2. When max_samples exceeds a few hundred, preprocessing hangs at the "Running tokenizer on dataset" step.
github-actions bot added the "pending (This problem is yet to be addressed)" label on Oct 22, 2024
hiyouga (Owner) commented Oct 22, 2024

Increase cutoff_len.
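
Qwen2-VL expands a high-resolution image into a large number of visual tokens, so with cutoff_len: 1024 the image placeholder sequence can be truncated; the mismatch between 1049 and 1007 tokens in the traceback is consistent with that. A minimal sketch of the suggested change, reusing the dataset section of the yaml above (4096 is an illustrative value, not one prescribed in this thread):

```yaml
### dataset
eval_dataset: item_satisfy_data_1001_temp
template: qwen2_vl
cutoff_len: 4096   # raised from 1024; illustrative value, pick one large enough for the longest image+text sequence
max_samples: 2
overwrite_cache: true
preprocessing_num_workers: 1
```

The rest of the config and the launch command stay the same.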

hiyouga added the "solved (This problem has been already solved)" label and removed the "pending (This problem is yet to be addressed)" label on Oct 22, 2024
hiyouga closed this as completed on Oct 22, 2024
A follow-up comment from xyf0106 was marked as resolved.
