This is part of a study project using the AA-RMVSNet to reconstruct buildings from multiple images.
Connecting the 2D world with the 3D world through Multi-view Stereo (MVS) methods is exciting. In this project, we aim to reconstruct several buildings on our campus. Since this is outdoor reconstruction, we chose
AA-RMVSNet for its strong performance on outdoor datasets, after comparing it with similar models such as
D2HC-RMVSNet. The code is retrieved from here with some modification.
Here we summarize the main steps we took in this project. You can reproduce our results by following these steps.
First, you need to create a virtual environment and install the necessary dependencies.
conda create -n test python=3.6
conda activate test
conda install pytorch==1.1.0 torchvision==0.3.0 cudatoolkit=10.0 -c pytorch
conda install -c conda-forge py-opencv plyfile tensorboardx
Other CUDA versions can be found here.
Structure from Motion
Camera parameters are required for MVSNet-based methods. Please first download the open-source software COLMAP.
The workflow is as follows:
- Open COLMAP, then successively click
- Select your Workspace folder and Image folder.
- (Optional) Uncheck Dense model to accelerate the reconstruction procedure.
- After the reconstruction completes, you should be able to see the sparse reconstruction result as well as the camera positions. (Fig )
- Export the model as text. There should be a
`camera.txt` in the output folder; each line represents a photo. If any photos remain unmatched, delete those photos and rematch. Repeat this process until all photos are matched.
- Move the three txt files to the
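The exported text model can be sanity-checked programmatically before moving on. Below is a minimal sketch of a parser for COLMAP's text-format camera file (the helper name `parse_colmap_cameras` and the sample values are our own; the line layout `CAMERA_ID MODEL WIDTH HEIGHT PARAMS[]` follows COLMAP's documented text export):

```python
def parse_colmap_cameras(text):
    """Parse a COLMAP text-format camera file.

    Each non-comment line is: CAMERA_ID MODEL WIDTH HEIGHT PARAMS[]
    Returns a dict mapping camera id -> (model, width, height, params).
    """
    cameras = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and the comment header
        fields = line.split()
        cam_id, model = int(fields[0]), fields[1]
        width, height = int(fields[2]), int(fields[3])
        params = [float(p) for p in fields[4:]]
        cameras[cam_id] = (model, width, height, params)
    return cameras

# Sample with one PINHOLE camera (fx fy cx cy); the numbers are made up.
sample = """\
# Camera list with one line of data per camera:
#   CAMERA_ID, MODEL, WIDTH, HEIGHT, PARAMS[]
1 PINHOLE 3072 2304 2560.0 2560.0 1536.0 1152.0
"""
cams = parse_colmap_cameras(sample)
```

A quick check like this catches truncated or mis-exported files before they reach the conversion script.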
To use AA-RMVSNet to reconstruct the building, please follow the steps listed below.
Clone this repository to a local folder.
The custom testing folder should be placed in the root directory of the cloned repository. This folder should have two subfolders named
`images` and `sparse`. The `images` folder holds the photos, and the
`sparse` folder should contain the three txt files recording the camera parameters.
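The expected layout can be scripted so every scan starts from the same structure. A small sketch (the scan name `campus_building` is made up, and a temporary directory stands in for the repository root):

```python
import os
import tempfile

def make_test_folder(root, name):
    """Create the folder layout described above for one custom scan:

    <root>/<name>/images  - the input photos
    <root>/<name>/sparse  - the three COLMAP txt files
    """
    scan = os.path.join(root, name)
    os.makedirs(os.path.join(scan, "images"), exist_ok=True)
    os.makedirs(os.path.join(scan, "sparse"), exist_ok=True)
    return scan

root = tempfile.mkdtemp()                        # stand-in for the repo root
scan = make_test_folder(root, "campus_building")  # hypothetical scan name
```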
Find the file
`list-dtu-test.txt` and write in it the name of the folder you wish to test.
python ./sfm/colmap2mvsnet.py --dense_folder name --interval_scale 1.06 --max_d 512
`--dense_folder` is compulsory; the others are optional. You can also change the default values in the following shell scripts.
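To build intuition for the two optional flags: `max_d` is the number of fronto-parallel depth planes swept per view, and `interval_scale` stretches the spacing between them. A purely illustrative sketch (in the actual `colmap2mvsnet.py` the minimum depth and base interval are derived from the COLMAP sparse model; the numbers below are made up):

```python
def depth_hypotheses(depth_min, interval, max_d=512, interval_scale=1.06):
    """Sketch of the depth sweep shaped by --max_d and --interval_scale:
    max_d planes spaced by interval * interval_scale, starting at depth_min."""
    step = interval * interval_scale
    return [depth_min + i * step for i in range(max_d)]

# Illustrative values: a sweep of 4 planes starting at depth 425.0.
planes = depth_hypotheses(depth_min=425.0, interval=2.5, max_d=4)
```

A larger `max_d` covers the same range more finely (at more memory cost), while a larger `interval_scale` covers a wider depth range with the same number of planes.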
Once you have the result of the previous step, run the following commands:
sh ./scripts/eval_dtu.sh
sh ./scripts/fusion_dtu.sh
Then you should see the output
`.ply` files in the
Here dtu means the data is organized in the format of the DTU dataset.
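To verify a fused result without opening a viewer, you can read the point count straight from the `.ply` header. A minimal sketch for the ASCII variant of the format (the helper name and the sample header below are our own; real fused clouds will have many more properties):

```python
def ply_vertex_count(header_text):
    """Return the vertex count declared in an ASCII PLY header."""
    for line in header_text.splitlines():
        if line.startswith("element vertex"):
            return int(line.split()[-1])  # last token is the count
    raise ValueError("no vertex element found in PLY header")

# Made-up minimal ASCII PLY header for illustration.
sample_header = """\
ply
format ascii 1.0
element vertex 12345
property float x
property float y
property float z
end_header
"""
```

A near-zero vertex count usually means the fusion thresholds were too strict for your scene.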