- Clone this repo:
git clone https://github.com/XiandaGuo/OpenStereo
- Install dependencies:
- pytorch >= 1.13.1
- torchvision
- timm == 0.5.4
- pyyaml
- tensorboard
- opencv-python
- tqdm
- scikit-image
Create a conda environment by:
conda create -n openstereo python=3.8
Install pytorch by Anaconda:
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.7 -c pytorch -c nvidia
Install other dependencies by pip:
pip install -r requirements.txt
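After installing, you can optionally sanity-check that the core dependencies are importable. This helper is not part of the repo; note that several packages install under a different import name (opencv-python imports as cv2, pyyaml as yaml, scikit-image as skimage):

```python
import importlib.util

def missing_deps(modules):
    """Return the module names that are not importable in this environment."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

# Import names for the dependencies listed above.
required = ["torch", "torchvision", "timm", "yaml", "tensorboard",
            "cv2", "tqdm", "skimage"]
print(missing_deps(required))  # an empty list means the environment is complete
```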
See the prepare dataset guide for dataset preparation.
Go to the model zoo, download the model file, and uncompress it to the output directory.
Train a model with a single GPU:
python tools/train.py --cfg_file cfgs/igev/igev_sceneflow_amp.yaml
Multi-GPU Training on a Single Node:
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
torchrun --nnodes=1 --nproc_per_node=8 --rdzv_backend=c10d --rdzv_endpoint=localhost:23456 tools/train.py --dist_mode --cfg_file cfgs/igev/igev_sceneflow_amp.yaml
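Under the hood, torchrun launches one worker process per GPU and describes the process layout through environment variables (RANK, LOCAL_RANK, WORLD_SIZE). A minimal sketch of how a DDP-enabled script reads them, independent of this repo's actual implementation:

```python
import os

def torchrun_env():
    """Read the process layout that torchrun sets for each worker.
    LOCAL_RANK is typically used to select the worker's GPU,
    e.g. torch.cuda.set_device(local_rank)."""
    return {
        "rank": int(os.environ.get("RANK", "0")),
        "local_rank": int(os.environ.get("LOCAL_RANK", "0")),
        "world_size": int(os.environ.get("WORLD_SIZE", "1")),
    }

print(torchrun_env())
```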
--cfg_file
The path to the config file.
--dist_mode
If specified, the program will use DDP for training.
- Your experiment will be saved in '/save_root_dir/DATASET_NAME/MODEL_NAME/config_file_prefix/extra_tag'; save_root_dir and extra_tag can be specified via the train argparse arguments.
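The save layout above can be composed with a small helper; the concrete names here (SceneFlow, IGEV, default) are illustrative, since the real values come from your config and the train argparse arguments:

```python
import os

def exp_dir(save_root_dir, dataset_name, model_name, config_file_prefix, extra_tag):
    # Mirrors the layout: save_root_dir/DATASET_NAME/MODEL_NAME/config_file_prefix/extra_tag
    return os.path.join(save_root_dir, dataset_name, model_name,
                        config_file_prefix, extra_tag)

print(exp_dir("output", "SceneFlow", "IGEV", "igev_sceneflow_amp", "default"))
```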
Evaluate the trained model by:
python tools/eval.py --cfg_file cfgs/igev/igev_sceneflow_amp.yaml --eval_data_cfg_file cfgs/sceneflow_eval.yaml --pretrained_model your_pretrained_ckpt_path
Generalization Evaluation:
python tools/eval.py --cfg_file cfgs/igev/igev_sceneflow_amp.yaml --eval_data_cfg_file cfgs/eth3d_eval.yaml --pretrained_model your_pretrained_ckpt_path
python tools/eval.py --cfg_file cfgs/igev/igev_sceneflow_amp.yaml --eval_data_cfg_file cfgs/middlebury_eval.yaml --pretrained_model your_pretrained_ckpt_path
python tools/eval.py --cfg_file cfgs/igev/igev_sceneflow_amp.yaml --eval_data_cfg_file cfgs/kitti15_eval.yaml --pretrained_model your_pretrained_ckpt_path
python tools/eval.py --cfg_file cfgs/igev/igev_sceneflow_amp.yaml --eval_data_cfg_file cfgs/kitti12_eval.yaml --pretrained_model your_pretrained_ckpt_path
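The four generalization runs above differ only in the eval dataset config, so they can be scripted with a loop. A sketch (the command builder mirrors the flags used above; running the commands assumes you are in the repo root):

```python
import subprocess

EVAL_CFGS = ["cfgs/eth3d_eval.yaml", "cfgs/middlebury_eval.yaml",
             "cfgs/kitti15_eval.yaml", "cfgs/kitti12_eval.yaml"]

def eval_cmd(eval_cfg, ckpt, cfg_file="cfgs/igev/igev_sceneflow_amp.yaml"):
    """Build one tools/eval.py invocation for a given eval dataset config."""
    return ["python", "tools/eval.py",
            "--cfg_file", cfg_file,
            "--eval_data_cfg_file", eval_cfg,
            "--pretrained_model", ckpt]

if __name__ == "__main__":
    for cfg in EVAL_CFGS:
        subprocess.run(eval_cmd(cfg, "your_pretrained_ckpt_path"), check=True)
```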
--cfg_file
The path to the config file.
--eval_data_cfg_file
The dataset config you want to evaluate on.
--pretrained_model
The path to your pre-trained checkpoint.
Tip: Other arguments are the same as in the train phase.
Run inference with the trained model by:
python tools/infer.py --cfg_file cfgs/igev/igev_sceneflow_amp.yaml --left_img_path your_left_img_path --right_img_path your_right_img_path
Tip: the pretrained_model should be specified in the cfg_file. If you want to process multiple image pairs at once, organize the file structure accordingly and write a simple loop.
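Following the tip above, here is one way to write such a loop. The left/right directory layout and name-matched .png pairs are an assumption for illustration, not a layout the repo prescribes:

```python
import subprocess
from pathlib import Path

def infer_commands(root, cfg_file="cfgs/igev/igev_sceneflow_amp.yaml"):
    """Yield one tools/infer.py command per left/right pair found under
    root/left and root/right (hypothetical layout, matched by filename)."""
    left_dir, right_dir = Path(root, "left"), Path(root, "right")
    for left in sorted(left_dir.glob("*.png")):
        right = right_dir / left.name
        if right.exists():
            yield ["python", "tools/infer.py",
                   "--cfg_file", cfg_file,
                   "--left_img_path", str(left),
                   "--right_img_path", str(right)]

if __name__ == "__main__":
    for cmd in infer_commands("pairs"):
        subprocess.run(cmd, check=True)
```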
- Read the detailed config to understand the usage of each setting item;
- See how to create your model;
- You can set the default pre-trained model path by:
export TORCH_HOME="/path/to/pretrained_models"
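The same default can be set from inside Python before any weights are downloaded; torch.hub and torchvision consult the TORCH_HOME environment variable when caching pretrained weights:

```python
import os

# Equivalent of `export TORCH_HOME=...`; must run before the first
# pretrained-weight download. setdefault keeps any value already exported.
os.environ.setdefault("TORCH_HOME", "/path/to/pretrained_models")
print(os.environ["TORCH_HOME"])
```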