# Lyft Motion Prediction for Autonomous Vehicles
Code for the 4th place solution of Lyft Motion Prediction for Autonomous Vehicles on Kaggle.
```
input               --- Please locate data here
src
|- ensemble         --- For 4. Ensemble scripts
|- lib              --- Library codes
|- modeling         --- For 1. training, 2. prediction and 3. evaluation scripts
|- results          --- Training, prediction and evaluation results will be stored here
README.md           --- This instruction file
requirements.txt    --- For python library versions
```
## Hardware (The following specs were used to create the original solution)
- Ubuntu 18.04 LTS
- 32 CPUs
- 128GB RAM
- 8 x NVIDIA Tesla V100 GPUs
## Software (python packages are detailed separately in `requirements.txt`)

- nvidia drivers v.55.23.0
- Equivalent Dockerfile for the GPU installs: use `nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04` as the base image.

We also installed OpenMPI==4.0.4 for running PyTorch distributed training.
Deep learning framework, base libraries:

- pretrainedmodels==0.7.4
- efficientnet_pytorch==0.7.0
- resnest==0.0.6b20200912
- segmentation-models-pytorch==0.1.2
We also installed other python libraries; please refer to `requirements.txt` for more details.
We recommend setting the following environment variables for better performance:

```sh
export MKL_NUM_THREADS=1
export OMP_NUM_THREADS=1
export NUMEXPR_NUM_THREADS=1
```
Please download the competition data and extract it under the `input` directory. For the `lyft-full-training-set` data, which only contains `train_full.zarr`, please place it under `input/lyft-motion-prediction-autonomous-vehicles/scenes` as follows:
```
input
|- lyft-motion-prediction-autonomous-vehicles
   |- scenes
      |- train_full.zarr (Place here!)
      |- train.zarr
      |- validate.zarr
      |- test.zarr
      |- ... (other data)
   |- ... (other data)
```
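As a quick sanity check, a short script like the following (our own sketch, not part of the repo) can verify that the expected zarr datasets from the tree above are in place:

```python
from pathlib import Path

# Expected zarr datasets, relative to the competition data directory
# (taken from the directory tree above).
EXPECTED = [
    "scenes/train_full.zarr",
    "scenes/train.zarr",
    "scenes/validate.zarr",
    "scenes/test.zarr",
]

def missing_data(root="input/lyft-motion-prediction-autonomous-vehicles"):
    """Return the expected datasets that are not present under root."""
    base = Path(root)
    return [rel for rel in EXPECTED if not (base / rel).exists()]

if __name__ == "__main__":
    missing = missing_data()
    print("OK" if not missing else f"Missing: {missing}")
```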
Our submission pipeline consists of 1. Training, 2. Prediction, 3. Ensemble.
## 1. Training with training/validation dataset
The training script `train_lyft.py` is located under `src/modeling`, and its training configuration is specified by a flags yaml file.
[Note] If you want to run training from scratch, please remove the `results` folder first. The training script tries to resume from the `results` folder when `resume_if_possible=True` is set.
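The resume behavior can be pictured roughly as follows (a hypothetical sketch; the repo's actual checkpoint naming and resume logic may differ):

```python
from pathlib import Path

def find_resume_checkpoint(results_dir, resume_if_possible=True):
    """Return the newest checkpoint found under results_dir, or None.

    Hypothetical illustration of the resume-from-results behavior;
    the checkpoint file pattern (*.pt) is an assumption.
    """
    if not resume_if_possible:
        return None
    ckpts = sorted(Path(results_dir).glob("**/*.pt"),
                   key=lambda p: p.stat().st_mtime)
    return ckpts[-1] if ckpts else None
```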
[Note] The first time training is run, it creates a cache of the input data so that subsequent runs are efficient. This cache creation must be done in a single process, so please use single-GPU training until the training loop starts. The cache is created directly under the `input` directory. Once the cache is created, we can run multi-GPU training using the same yaml config:
```sh
$ cd src/modeling

# Single GPU training (Please run this first, for input data cache creation)
$ python train_lyft.py --yaml_filepath ./flags/20201104_cosine_aug.yaml

# Multi GPU training (-n 8 for 8 GPU training)
$ mpiexec -x MASTER_ADDR=localhost -x MASTER_PORT=8899 -n 8 \
    python train_lyft.py --yaml_filepath ./flags/20201104_cosine_aug.yaml
```
We trained 9 different models for the final submission. Each training configuration can be found in the `flags` directory, and the training results are stored in the `results` directory.
## 2. Prediction for test dataset
`predict_lyft.py` under `src/modeling` executes the prediction for the test data. When `--out` is set to a trained model's directory, the script uses the trained model in that directory for inference. Please also pass `--convert_world_from_agent true`:
```sh
$ cd src/modeling
$ python predict_lyft.py --out results/20201104_cosine_aug --use_ema true --convert_world_from_agent true
```
Predicted results are stored under the `--out` directory; with the above setting, `results/20201104_cosine_aug/prediction_ema/submission.csv` is created.
We executed this prediction for all 9 trained models. This `submission.csv` file can be submitted as a single-model prediction.
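Running the prediction for every trained model can be scripted. The sketch below is our own helper, not from the repo; the directory names are placeholders, and it only prints the command that would run for each output directory:

```shell
# Hypothetical helper: print the prediction command for each trained
# model directory given as an argument. Replace echo with the real call
# to actually run the predictions for all 9 models.
predict_all() {
  for out in "$@"; do
    echo "python predict_lyft.py --out $out --use_ema true --convert_world_from_agent true"
  done
}

# Example (directory names are placeholders):
predict_all results/20201104_cosine_aug results/another_model
```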
## (Optional) Evaluation with validation dataset
`eval_lyft.py` under `src/modeling` executes the evaluation for the validation data (chopped data):

```sh
$ python eval_lyft.py --out results/20201104_cosine_aug --use_ema true
```
The script shows validation error, which is useful for local evaluation of model performance.
## 3. Ensemble

Finally, all trained models' predictions are ensembled using GMM fitting. The ensemble script is located under `src/ensemble`.
```sh
# Please execute from the root of this repository.
$ python src/ensemble/ensemble_test.py --yaml_filepath src/ensemble/flags/20201126_ensemble.yaml
```
The location of the final ensembled `submission.csv` is specified in the yaml file.
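To make the GMM-fitting idea concrete, here is a minimal sketch of how trajectory predictions pooled from several models could be merged with a Gaussian mixture. This is our own illustration, not the repo's implementation; the function, the confidence-weighted resampling trick, and the diagonal covariance are all assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def ensemble_trajectories(trajs, confs, n_modes=3, seed=0):
    """Merge pooled trajectory predictions for one agent via GMM fitting.

    trajs: (n_preds, T, 2) trajectories pooled from all models' modes
    confs: (n_preds,) per-trajectory confidences
    Returns (n_modes, T, 2) ensembled trajectories and (n_modes,) confidences.
    """
    n_preds, T, _ = trajs.shape
    X = trajs.reshape(n_preds, T * 2)  # flatten each trajectory to a vector

    # GaussianMixture has no per-sample weights, so resample trajectories
    # proportionally to confidence (an assumption; one of several possible
    # ways to respect the weights).
    p = np.asarray(confs, dtype=float)
    p = p / p.sum()
    rng = np.random.default_rng(seed)
    idx = rng.choice(n_preds, size=max(200, 10 * n_preds), p=p)

    gmm = GaussianMixture(n_components=n_modes, covariance_type="diag",
                          random_state=seed).fit(X[idx])
    modes = gmm.means_.reshape(n_modes, T, 2)
    return modes, gmm.weights_
```

Component means become the ensembled modes and the mixture weights their confidences; running something like this per agent over all 9 models' outputs would yield a merged multi-mode prediction.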
You can submit this `submission.csv` by uploading it as a Kaggle dataset and submitting via a Kaggle kernel. Please follow "Save your time, submit without kernel inference" for the submission procedure.