Generalize to real scenes #227
Comments
Dear authors, do you have any comments on the questions above? Thanks.
Dear authors, I have another question: how does PSMNet handle the case where the two cameras' optical axes are not parallel (e.g., on the nuScenes / DDAD datasets)? Looking forward to your reply. Thanks.
Hi @dongli12,
Jia-Ren
Thanks @JiaRenChang for your comments. For Q1, I mean that DDAD / nuScenes have multiple cameras covering a full surround view; is it possible to use each pair of adjacent cameras to construct a "stereo camera" (whose optical axes are not parallel) for disparity estimation (figure below)? How would I adapt PSMNet for that case? Is any special calibration or preprocessing needed? Best,
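Not an official answer from the authors, but for context: stereo matchers like PSMNet assume a rectified pair, so cameras with non-parallel optical axes are usually rectified first (rotating both image planes so the axes become parallel and the baseline horizontal). A minimal NumPy sketch of a Bouguet-style rectifying rotation, assuming known camera centers `C1`, `C2` and world-to-camera rotations `R1`, `R2` (the function name is hypothetical, not part of PSMNet):

```python
import numpy as np

def rectifying_rotations(C1, C2, R1, R2):
    """Sketch of stereo rectification: build a common rotation whose
    x-axis lies along the baseline, so both rectified optical axes
    become parallel and epipolar lines become horizontal."""
    # New x-axis: unit vector along the baseline between camera centers.
    x = (C2 - C1) / np.linalg.norm(C2 - C1)
    # New y-axis: orthogonal to the baseline and to the averaged old
    # optical axis (third row of a world-to-camera rotation).
    z_old = 0.5 * (R1[2] + R2[2])
    y = np.cross(z_old, x)
    y /= np.linalg.norm(y)
    # New z-axis completes the right-handed orthonormal frame.
    z = np.cross(x, y)
    R_rect = np.stack([x, y, z])  # common rectified world-to-camera rotation
    # Per-camera rotations that map each old camera frame to the new one.
    return R_rect @ R1.T, R_rect @ R2.T
```

After applying these rotations (via a homography warp with the camera intrinsics, e.g. OpenCV's `stereoRectify`/`initUndistortRectifyMap` in practice), both views share one orientation and the baseline is aligned with the image rows, which is the geometry PSMNet expects.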
Hi @JiaRenChang,
Thanks for your interesting work. I have questions about the generalization of the disparity estimation method. When applying PSMNet to real scenes in practice, is any stereo camera calibration or other preprocessing required to obtain a disparity map from two images? How about converting to real depth (is anything else required besides the focal length and camera baseline)? When applying PSMNet trained on dataset A to a new dataset B, is it necessary for B to have the same focal length and camera baseline for good performance?
Look forward to your reply.
Thanks,
Dong
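For reference on the depth-conversion part of the question: for a rectified stereo pair, the standard pinhole relation is depth = f · B / d, with f the focal length in pixels, B the baseline, and d the disparity. A minimal sketch (the function name is illustrative, not from PSMNet):

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Convert a disparity map to metric depth via Z = f * B / d.
    focal_px: focal length of the rectified images, in pixels.
    baseline_m: distance between the two camera centers, in metres.
    Non-positive disparities (no match / infinitely far) map to inf."""
    d = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(d, np.inf)
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth
```

Since depth scales with the product f · B, predicted disparities from a network trained on one rig only convert to correct metric depth on another rig when that rig's own f and B are used in the conversion.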