Some questions. #8
Comments
Hi, the annotations for AVA will be out in a week. We will try to release the HMAR training code as soon as possible.
Sorry to bother you again. Approximately how many sequences with shot switching did you pick, and how was the ground truth generated? Thanks.
Hi, can the demo.py file be used on a random YouTube video for 3D tracking?
Hi, the demo works fine. Actually, this method focuses on tracking people in 2D with 3D representations, rather than on 3D tracking.
@xiaocc612, evaluation on AVA includes ~1.3k examples with shot changes. These sequences come from the validation set of AVA. The shot change detection is done automatically and the person bounding boxes are taken from the AVA annotations, but we manually curated this set to verify that the shot change detection is correct and to determine which bounding boxes belong to the same person.
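For concreteness, automatic shot-change detection of the kind described above can be done with an off-the-shelf tool. Below is a minimal sketch assuming the PySceneDetect library; the `ContentDetector` and its threshold are illustrative choices, not necessarily what the authors used.

```python
# Minimal sketch, assuming PySceneDetect (pip install "scenedetect[opencv]").
# ContentDetector and threshold=27.0 are illustrative; the comment above
# does not state which shot-change detector was actually used.
from scenedetect import detect, ContentDetector

# Returns a list of (start, end) FrameTimecode pairs, one per detected shot.
scenes = detect("ava_clip.mp4", ContentDetector(threshold=27.0))

for i, (start, end) in enumerate(scenes):
    print(f"shot {i}: frames {start.get_frames()}-{end.get_frames()}")

# Clips with more than one detected shot are candidates containing a shot
# change; per the comment above, such candidates were then manually curated.
if len(scenes) > 1:
    print("clip contains a shot change")
```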
@somyagoel13 yes, the demo.py works on any YouTube video. We have released a fast online version in the PHALP repo, which also uses the same 3D representations.
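As a hedged sketch of the "any YouTube video" workflow: download the clip with yt-dlp, then run the demo script on the downloaded file. The demo.py arguments below are hypothetical; check the repository README for the actual interface.

```python
# Hedged sketch: fetch a YouTube clip with yt-dlp, then invoke demo.py.
# The URL is a placeholder and the demo.py flags are hypothetical;
# consult the repository README for the real command-line interface.
import subprocess

url = "https://www.youtube.com/watch?v=VIDEO_ID"  # placeholder URL
subprocess.run(["yt-dlp", "-f", "mp4", "-o", "input_video.mp4", url], check=True)

# Hypothetical invocation of the released demo script.
subprocess.run(["python", "demo.py", "--video", "input_video.mp4"], check=True)
```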
@xiaocc612 yes, our method works on monocular RGB images. We don't use any explicit depth information.
How long before the AVA annotations are released? Very much looking forward to them.
I am very interested in your work. Can you provide the training code for the HMAR model and the annotation file for the AVA dataset?
Thanks.