This repository contains the Python package audioLIME, a tool for creating listenable explanations for machine learning models in music information retrieval (MIR). audioLIME is based on the method LIME (Local Interpretable Model-agnostic Explanations), presented in this paper, and uses source separation estimates to create interpretable components. Alternative types of interpretable components are available (see the last section) and more will be added in the future.
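To make the underlying method concrete: LIME perturbs an instance by switching its interpretable components on and off, queries the black-box model on each perturbation, and fits a weighted linear surrogate whose coefficients serve as component importances. The sketch below illustrates plain LIME on a generic black box; the function name and kernel choice are illustrative and are not part of audioLIME's API (audioLIME uses source-separated stems as the components).

```python
import numpy as np

def lime_explain(predict_fn, n_components, n_samples=500, seed=0):
    """Approximate a black-box model locally with a linear surrogate.

    predict_fn maps a binary mask (which interpretable components are
    "on") to the model's prediction. The fitted coefficients indicate
    each component's contribution. Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    # Perturb: random on/off masks over the interpretable components
    Z = rng.integers(0, 2, size=(n_samples, n_components)).astype(float)
    y = np.array([predict_fn(z) for z in Z])
    # Weight samples by similarity to the original (all-on) instance
    distances = 1.0 - Z.mean(axis=1)            # fraction of components removed
    weights = np.exp(-(distances ** 2) / 0.25)  # exponential kernel (arbitrary width)
    # Weighted least squares: solve (X^T W X) beta = X^T W y
    X = np.hstack([Z, np.ones((n_samples, 1))])  # add intercept column
    W = np.diag(weights)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[:-1]  # per-component importances (intercept dropped)
```

For a toy black box whose output is exactly `2*z[0] + z[1]`, the recovered importances are approximately `[2, 1, 0]`, i.e. the surrogate identifies the first component as twice as influential as the second and the third as irrelevant.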


If you use audioLIME in your work, please cite it:

@misc{haunschmid2020audiolime,
    title={{audioLIME: Listenable Explanations Using Source Separation}},
    author={Verena Haunschmid and Ethan Manilow and Gerhard Widmer},
    howpublished={13th International Workshop on Machine Learning and Music},
    year={2020}
}


audioLIME is introduced/used in the following publications:

  • Verena Haunschmid, Ethan Manilow and Gerhard Widmer, audioLIME: Listenable Explanations Using Source Separation

  • Verena Haunschmid, Ethan Manilow and Gerhard Widmer, Towards Musically Meaningful Explanations Using Source Separation

  • Alessandro B. Melchiorre, Verena Haunschmid, Markus Schedl and Gerhard Widmer, LEMONS: Listenable Explanations for Music recOmmeNder Systems

  • Shreyan Chowdhury, Verena Praher and Gerhard Widmer, Tracing Back Music Emotion Predictions to Sound Sources and Intuitive Perceptual Qualities

  • Verena Praher(*), Katharina Prinz(*), Arthur Flexer and Gerhard Widmer, On the Veracity of Local, Model-agnostic Explanations in Audio Classification


The audioLIME package is not on PyPI yet. To install it, clone the git repository and install it using

git clone https://github.com/CPJKU/audioLIME.git  # HTTPS
git clone [email protected]:CPJKU/audioLIME.git  # SSH
cd audioLIME
python setup.py install

To install a version for development purposes,
install the package in editable mode instead (pip install -e .).


To test your installation, the following test is available (more to come :)):

python -m unittest tests.test_SpleeterFactorization

Note on Requirements

To keep the installation lightweight, not all possible dependencies are installed by default.
Depending on the factorization you want to use, you might need additional packages,
e.g. spleeter.

Installation & Usage of spleeter

pip install spleeter==2.0.2

When you use spleeter for the first time, it will download the model it uses into a directory
pretrained_models. You can change this location only by setting the environment variable
MODEL_PATH before spleeter is imported. There are different ways to
set an environment variable,
for example:

export MODEL_PATH=/path/to/spleeter/pretrained_models/
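Alternatively, the variable can be set from within Python, as long as it happens before spleeter is imported. A minimal sketch (the path below is a placeholder; adjust it to your setup):

```python
import os

# Must be set BEFORE spleeter is imported, otherwise the default
# location ("pretrained_models" in the working directory) is used.
# Placeholder path -- replace with your own model directory.
os.environ["MODEL_PATH"] = "/path/to/spleeter/pretrained_models/"

# import spleeter  # import only after the variable is set
```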