ImageNet 1K Bounding Boxes

For some experiments, you might want to pass only the background of an ImageNet image, or only the foreground. This repo includes the code to extract the bounding-box metadata, clean up the downloaded files, and modify the ImageNet loader so that it serves only the images that have box annotations.

How to use:

```python
import torch
import torchvision
from torchvision import transforms as trans

from costum_imagenet import BackgroundForegroundImageNet

tr = trans.Compose([trans.Resize(224), trans.CenterCrop(224), trans.ToTensor()])
dataset = BackgroundForegroundImageNet(root='./data/imagenet/train', download=True, transform=tr)
x, b, f, y = dataset[0]  # image, background-only view, foreground-only view, label
torchvision.utils.save_image(torch.stack([x, b, f]), 'test1.png')
```
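To make the background/foreground split concrete, here is a minimal sketch of the masking idea: the bounding box zeroes out everything outside it for the foreground view, and zeroes out the box region itself for the background view. `split_by_box` is a hypothetical helper for illustration, not the repo's actual API.

```python
import torch

def split_by_box(x, box):
    """Split an image tensor (C, H, W) into background / foreground views
    using a bounding box (x1, y1, x2, y2) in pixel coordinates.
    Hypothetical helper illustrating the idea, not the library's API."""
    x1, y1, x2, y2 = box
    mask = torch.zeros_like(x)
    mask[:, y1:y2, x1:x2] = 1.0      # 1 inside the box, 0 outside
    foreground = x * mask            # keep only the box region
    background = x * (1.0 - mask)    # zero out the box region
    return background, foreground
```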



If you set download=True, the bounding boxes and the indices of the ImageNet train split that have bounding boxes will be downloaded automatically. If for some reason you want to build the bounding-box files from scratch, here are the steps:

Restarting from scratch

First, download the annotation archive:

```shell
wget ""
```

Extract the archive:

```shell
tar -xvf bboxes_annotations.tar.gz
```

Extract every per-synset archive inside it:

```shell
cd bboxes_annotations
for f in *.tar.gz ; do tar -xvf "$f" ; done
```

Convert the annotations to JSON:


Clean up the ~50 GB of extracted files (the archives and the per-synset folders, whose names start with `n` followed by digits):

```shell
rm *.tar.gz
rm -rf n[0-9]*
```

Get the indices of the train images that have bounding boxes:


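The indexing script is also not shown above; the idea is to walk the train split in the same order the loader uses and record the positions of images whose filenames appear in the JSON produced earlier. A hypothetical sketch:

```python
def indices_with_boxes(image_paths, annotated_filenames):
    """Return the positions, within the (sorted) train split, of images that
    have box annotations. `image_paths` should be in the same order the
    ImageNet loader uses; `annotated_filenames` are the JPEG names collected
    in the JSON above. Sketch only, not the repo's exact script."""
    annotated = set(annotated_filenames)
    return [i for i, p in enumerate(image_paths)
            if p.rsplit('/', 1)[-1] in annotated]
```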
Then simply pass the paths of the resulting box and index files to the BackgroundForegroundImageNet constructor:

```python
dataset = BackgroundForegroundImageNet(root='.', download=False, boxes='', indices='')
```

