BlobGAN: Spatially Disentangled Scene Representations

Official PyTorch Implementation
Paper | Project Page | Video | Interactive Demo
This repository contains:
- Pre-trained BlobGAN models on three datasets: bedrooms, conference rooms, and a combination of kitchens, living rooms, and dining rooms
- Code based on PyTorch Lightning and Hydra, with full support for CPU, single-GPU, or multi-GPU/node training and inference

And, coming soon, easy-to-run scripts to:
- Generate and edit realistic images with an interactive UI
- Upload your own image and convert it into blobs!
- Programmatically modify images and reproduce results from our paper
Setup
Run the commands below one at a time to download the latest version of the BlobGAN code, create a Conda environment, and install necessary packages and utilities.
```bash
git clone https://github.com/dave-epstein/blobgan.git
mkdir -p blobgan/logs/wandb
conda create -n blobgan python=3.9
conda activate blobgan
conda install pytorch=1.11.0 torchvision=0.12.0 torchaudio cudatoolkit=11.3 -c pytorch
conda install cudatoolkit-dev=11.3 -c conda-forge
pip install tqdm==4.64.0 hydra-core==1.1.2 omegaconf==2.1.2 clean-fid==0.1.23 wandb==0.12.11 ipdb==0.13.9 lpips==0.1.4 einops==0.4.1 inputimeout==1.0.4 pytorch-lightning==1.5.10 matplotlib==3.5.2 mpl_interactions[jupyter]==0.21.0
wget -q --show-progress https://github.com/ninja-build/ninja/releases/download/v1.10.2/ninja-linux.zip
sudo unzip -q ninja-linux.zip -d /usr/local/bin/
sudo update-alternatives --install /usr/bin/ninja ninja /usr/local/bin/ninja 1 --force
```
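If you prefer tracking Python dependencies in a file, the pins from the `pip install` command above can equivalently live in a `requirements.txt` (contents copied verbatim from that command):

```text
tqdm==4.64.0
hydra-core==1.1.2
omegaconf==2.1.2
clean-fid==0.1.23
wandb==0.12.11
ipdb==0.13.9
lpips==0.1.4
einops==0.4.1
inputimeout==1.0.4
pytorch-lightning==1.5.10
matplotlib==3.5.2
mpl_interactions[jupyter]==0.21.0
```

Then install them all at once with `pip install -r requirements.txt` (after the conda steps above).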
Running pretrained models (coming very soon!)
See `scripts/load_model.py` for an example of how to load a pre-trained model and generate images with it. For example:

```bash
python scripts/load_model.py --model_name bed --dl_dir models --save_dir out --n_imgs 32 --save_blobs --label_blobs
```

See the command's help for more details and options: `python scripts/load_model.py --help`
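If you want to script generation runs (e.g., sweeping over model names or image counts), the command above can be driven from Python via `subprocess`. A minimal sketch, using only the flags shown in the example; the flag values are illustrative and can be changed freely:

```python
# Build and launch the documented load_model.py command from Python.
# The argument values below mirror the README's example invocation.
import subprocess

cmd = [
    "python", "scripts/load_model.py",
    "--model_name", "bed",   # which pre-trained model to load
    "--dl_dir", "models",    # where model weights are downloaded
    "--save_dir", "out",     # where generated images are written
    "--n_imgs", "32",        # how many images to generate
    "--save_blobs",          # also save blob visualizations
    "--label_blobs",         # annotate blobs with their indices
]
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually run the script
```

`check=True` makes the call raise if the script exits with a nonzero status, which is useful when chaining several runs.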
Training your own model (coming very soon!)
Citation
If our code or models aided your research, please cite our paper:
```bibtex
@misc{epstein2022blobgan,
      title={BlobGAN: Spatially Disentangled Scene Representations},
      author={Dave Epstein and Taesung Park and Richard Zhang and Eli Shechtman and Alexei A. Efros},
      year={2022},
      eprint={2205.02837},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
Code acknowledgments
This repository is built on top of rosinality’s excellent PyTorch re-implementation of StyleGAN2 and Bill Peebles’ GANgealing codebase.