Semantic Segmentation with Pytorch-Lightning
Pytorch-Lightning includes a logger for W&B that can be set up simply with:

```python
from pytorch_lightning.loggers import WandbLogger
from pytorch_lightning import Trainer

wandb_logger = WandbLogger()
trainer = Trainer(logger=wandb_logger)
```
Refer to the documentation for more details.
Hyper-parameters can be defined manually, and every run is automatically logged to Weights & Biases, making it easier to analyze and interpret results and to decide how to optimize the architecture.
You can also run sweeps to optimize hyper-parameters automatically.
Note: this example has been adapted from Pytorch-Lightning examples.
- A quick way to run the training script is to open `notebook/tutorial.ipynb` and play with it.
- Clone this repository.
- Download the Kitti dataset. The dataset will be downloaded as a zip file named `data_semantics.zip`. Unzip the dataset inside the
- Install dependencies through the `Pipfile` or manually (Pytorch, Pytorch-Lightning & Wandb).
- Log in or sign up for an account.
- Run `python train.py` and add any optional args.
- Visualize and compare your runs through the generated link.
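The optional args accepted by `train.py` depend on the script itself; as a hedged illustration, a typical argparse setup might look like the following (all argument names and defaults below are assumptions, not the actual CLI):

```python
import argparse

# Hypothetical CLI: check `python train.py --help` for the real arguments.
def build_parser():
    parser = argparse.ArgumentParser(description="Train semantic segmentation on Kitti")
    parser.add_argument("--lr", type=float, default=1e-3, help="learning rate")
    parser.add_argument("--batch_size", type=int, default=4, help="images per batch")
    parser.add_argument("--epochs", type=int, default=30, help="training epochs")
    return parser

args = build_parser().parse_args([])  # empty list -> defaults; a real script omits it
```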
Sweeps for hyper-parameter tuning
W&B Sweeps can be defined in multiple ways:
- with a YAML file - best for distributed sweeps and runs from command line
- with a Python object - best for notebooks
In this project we use a YAML file. You can refer to the W&B documentation for more Pytorch-Lightning examples.
- Run `wandb sweep sweep.yaml`.
- Run `wandb agent <sweep_id>`, where `<sweep_id>` is given by the previous command.
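For the notebook route mentioned above, the same sweep can instead be described as a Python dict. A sketch, where the method, metric name, and parameter ranges are illustrative assumptions (mirror `sweep.yaml` for real runs), and the `wandb` calls are commented out because they would register a real sweep:

```python
# Assumes `wandb` is installed; keys follow the W&B sweep configuration schema.
sweep_config = {
    "method": "random",                                  # search strategy
    "metric": {"name": "val_loss", "goal": "minimize"},  # what the sweep optimizes
    "parameters": {
        "lr": {"min": 1e-4, "max": 1e-2},
        "batch_size": {"values": [2, 4, 8]},
    },
}

# import wandb
# sweep_id = wandb.sweep(sweep_config, project="kitti-semseg")  # registers the sweep
# wandb.agent(sweep_id, function=train)  # `train` is your training entry point
```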
Visualize and compare the sweep runs
After running the script a few times, you will be able to quickly compare a large number of hyper-parameter combinations.
Feel free to modify the script and define your own hyper-parameters.