Classification on CIFAR-10/100 and ImageNet with PyTorch.
- Unified interface for different network architectures
- Multi-GPU support
- Training progress bar with rich info
- Training log and training-curve visualization code
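PyTorch's stock way to get single-machine multi-GPU training is `torch.nn.DataParallel`, which splits each input batch across the visible GPUs and gathers the outputs; a minimal sketch (the tiny model here is a hypothetical stand-in, not one of the repo's architectures):

```python
import torch
import torch.nn as nn

# Hypothetical small model standing in for any architecture in this repo.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)

# DataParallel scatters each batch across all visible GPUs and gathers the
# per-GPU outputs; with no GPUs available it simply runs the wrapped module.
model = nn.DataParallel(model)

x = torch.randn(8, 3, 32, 32)   # a CIFAR-sized batch
out = model(x)
print(out.shape)                # torch.Size([8, 10])
```

Because the wrapper is transparent to the training loop, the same script runs unchanged on one GPU, many GPUs, or CPU.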
- Install PyTorch
- Clone the repository recursively:

```shell
git clone --recursive https://github.com/bearpaw/pytorch-classification.git
```
Please see the Training recipes for how to train the models.
Top-1 error rates on the CIFAR-10/100 benchmarks are reported. You may get different results when training the models with a different random seed.
Note that the number of parameters is computed on the CIFAR-10 dataset.
| Model | Params (M) | CIFAR-10 (%) | CIFAR-100 (%) |
| --- | --- | --- | --- |
| WRN-28-10 (drop 0.3) | 36.48 | 3.79 | 18.14 |
| DenseNet-BC (L=100, k=12) | 0.77 | 4.54 | 22.88 |
| DenseNet-BC (L=190, k=40) | 25.62 | 3.32 | 17.17 |
Single-crop (224x224) validation error rates are reported.
| Model | Params (M) | Top-1 Error (%) | Top-5 Error (%) |
| --- | --- | --- | --- |
Our trained models and training logs are downloadable at OneDrive.
CIFAR-10 / CIFAR-100
Since CIFAR images are 32x32, popular ImageNet network architectures need some modifications to adapt to this input size. The modified models are provided in the package.
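To see why this matters: a standard ResNet's ImageNet stem (7x7 stride-2 conv followed by a stride-2 max-pool) would shrink a 32x32 image to 8x8 before the first residual block. A common CIFAR adaptation (a sketch of the general idea, not necessarily identical to this repo's code) replaces the stem with a single 3x3 stride-1 conv that keeps the full resolution:

```python
import torch
import torch.nn as nn

# ImageNet-style stem: 7x7 stride-2 conv + 3x3 stride-2 max-pool.
imagenet_stem = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
)

# CIFAR-style stem: one 3x3 stride-1 conv preserves the 32x32 resolution.
cifar_stem = nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1, bias=False)

x = torch.randn(1, 3, 32, 32)
print(imagenet_stem(x).shape)  # torch.Size([1, 64, 8, 8])   -- only 8x8 left
print(cifar_stem(x).shape)     # torch.Size([1, 16, 32, 32]) -- resolution kept
```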
- [x] AlexNet
- [x] VGG (Imported from pytorch-cifar)
- [x] ResNet
- [x] Pre-act-ResNet
- [x] ResNeXt (Imported from ResNeXt.pytorch)
- [x] Wide Residual Networks (Imported from WideResNet-pytorch)
- [x] DenseNet
- [x] All models in `torchvision.models` (alexnet, vgg, resnet, densenet, inception_v3, squeezenet)
ImageNet
- [x] ResNeXt
- [ ] Wide Residual Networks