Hello,

I've been trying out Flowbot on real data to see how robust it is. I'm finding that the viewing direction of the camera has a big effect on the direction of the articulation flow.

Here I placed a box on a table and manually segmented the articulated part. The articulation flow looks pretty good. (For the target flow I just used the predicted flow in order to make the visualization work.)

Now I simply rotate the box by about 45 degrees about the gravity vector, and the articulation flow becomes skewed. In fact, it seems like the flow is always in the direction of the viewing angle.

I checked, and during training I use randomize_camera=True, so I don't understand what is going on. Is this expected behaviour? Is there maybe a parameter I've set wrong?

Thanks!
Hi, thanks again for your interest in the project and for trying out the code. What you observed is expected, although I understand why "randomize_camera" is misleading in this case. The training code we released produces a model that makes predictions in one particular world reference frame (the frame in which you observed good results). What randomize_camera does is randomize the camera position during sampling of the points for training; it does not randomize the reference frame of the points that are fed into the model. In other words, there is one global reference frame in which you place a camera to sample points, and the points are always expressed in that global frame. You can vary the position of the camera, which gives you a different sampling of the points (e.g. different occlusions), but the reference frame those points are expressed in stays consistent.
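To make the distinction concrete, here is a minimal numpy sketch (hypothetical names, not the released code): moving the camera changes the viewpoint used for sampling, but a sampled point stays expressed in the world frame; only an explicit transform would re-express it in the camera frame.

```python
import numpy as np

def look_at_rotation(cam_pos, target=np.zeros(3)):
    """Rotation whose z-axis points from the camera toward the target.
    (Hypothetical helper for illustration; assumes cam_pos is not
    directly above the target.)"""
    z = target - cam_pos
    z = z / np.linalg.norm(z)
    up = np.array([0.0, 0.0, 1.0])          # world up = -gravity
    x = np.cross(up, z)
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    return np.stack([x, y, z], axis=1)      # columns = camera axes in world frame

# A point on the object, expressed in the global/world frame.
p_world = np.array([0.2, 0.1, 0.5])

for cam_pos in [np.array([2.0, 0.0, 1.0]), np.array([0.0, 2.0, 1.0])]:
    R = look_at_rotation(cam_pos)
    # What randomize_camera changes: the viewpoint used to sample points
    # (visibility/occlusion). The point itself stays in the world frame:
    sampled = p_world                       # identical for every camera pose
    # What it does NOT do: re-express the cloud in the camera frame.
    # That transform WOULD depend on the camera pose:
    p_cam = R.T @ (p_world - cam_pos)
    print(sampled, p_cam)
```

So with two different camera positions the world-frame coordinates fed to the model are identical, while the camera-frame coordinates differ.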
There is no fundamental reason we couldn't train things in the camera frame (and thereby get the results you might expect from any viewing direction). However, some visual affordances rely on the gravity vector being known, so we didn't want to deal with a custom/arbitrary distribution over camera poses. In retrospect, we probably could have just rotated the camera around the z-axis (keeping the gravity vector at negative z).
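As a sketch of that retrospective idea (again hypothetical, not in the released code), a yaw-only camera randomization would rotate the camera position about the world z-axis, so the gravity direction stays fixed at negative z across all sampled viewpoints:

```python
import numpy as np

def random_yaw_camera(radius=2.0, height=1.0, rng=None):
    """Sample a camera position on a circle about the world z-axis.
    Only the yaw angle is random; the height (and hence the gravity
    direction relative to the camera) is fixed. Hypothetical helper."""
    if rng is None:
        rng = np.random.default_rng()
    theta = rng.uniform(0.0, 2.0 * np.pi)   # random rotation about z
    return np.array([radius * np.cos(theta),
                     radius * np.sin(theta),
                     height])

cam = random_yaw_camera()
print(cam)
```

Trained with poses drawn this way, the model would see the object from all azimuths while still being able to rely on a consistent gravity direction.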