
CoT is not as effective as direct answering in MLLMs (from InternVL2.5-MPO paper) #922

Open
EchoDreamer opened this issue Feb 24, 2025 · 0 comments


In the InternVL2.5-MPO paper, the authors mention that CoT is less effective than direct answering for MLLMs. I wonder why CoT performs so much worse for MLLMs than for LLMs. In addition, in a recent experiment I used QwenVL2.5 to answer questions and found that its CoT reasoning is quite good, but it struggles to fully follow output-format instructions (e.g., answering with only "yes" or "no"), which makes it hard for an automated evaluation framework to extract the answers. Could the authors offer any further explanation of this issue?
