NLP Embeddings Visualizer
Compares embedding vectors for two different texts, both visually and by numerical metrics. Currently BERT-Base, Cased (12-layer, 768-hidden, 12-heads, 110M parameters, English) from the DeepPavlov library is used, but other models can be added in the future if needed.
Both input and output embeddings from BERT can be compared for two consecutive text inputs, which allows empirical investigation of model characteristics. Vectors are compared by visual silhouette, by Euclidean and cosine distances, and by difference and side-by-side graphs.
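The two distance metrics mentioned above can be sketched in plain Python (a minimal illustration, not the project's actual implementation):

```python
import math

def euclidean_distance(a, b):
    """Straight-line distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_distance(a, b):
    """1 minus cosine similarity; 0 means the vectors point in the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)
```

Note that the two metrics capture different things: Euclidean distance is sensitive to vector magnitude, while cosine distance depends only on direction.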
Comparison by visual silhouettes and distance metrics:
Comparison by difference and side-by-side graphs (opened by clicking on the right):
- Enter the first text into the input field and press the "REQUEST" button
- You should see the blue message "Fetching new data from server, do not change input kind until the loading finished"
- If you see the red message "Promise error 'TypeError: NetworkError when attempting to fetch resource.'", check that the server container is running and its address is configured properly; see the Start and Stop section for details
- When the text is received, you should see the green message
- Note: the first execution can take several seconds while the model is loaded into memory
- You should see the blue message
- Enter the second text into the same input field and press the "REQUEST" button
- After the reply from the server, you should see the results in the main window
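Under the hood, each "REQUEST" click sends the entered text to the backend container. The sketch below shows how such a request could be assembled with only the standard library; the endpoint path and JSON field name are illustrative assumptions, not the project's documented API:

```python
import json
from urllib import request

SERVER = "http://localhost:8888"  # host port chosen in the docker run command

def build_embedding_request(text, server=SERVER):
    """Build (but do not send) a POST request carrying the input text.

    The "/embeddings" path and the "text" field are hypothetical names
    used only for illustration.
    """
    payload = json.dumps({"text": text}).encode("utf-8")
    return request.Request(
        server + "/embeddings",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

The returned object can be passed to `urllib.request.urlopen` once the container is running and the address matches your "Server Settings".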
Start and Stop
- Build the Docker container with the BERT model:
cd backend
docker build -t "mera_nlp_embeddings_server" -f docker_container_src/Dockerfile .
- Start this container on your local machine or server, choosing a port to listen on:
docker run --rm -d -p 8888:5000 --name "mera_nlp_embeddings_server_container" "mera_nlp_embeddings_server"
8888 is the port number that will be listened on at the host where the container is started. You can use another value if needed.
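The `-p 8888:5000` flag maps a host port to the container's internal port. A small helper (purely illustrative, not part of the project) makes the two halves of that value explicit:

```python
def parse_port_mapping(flag_value):
    """Split a docker -p HOST:CONTAINER value into its two ports.

    For example, "8888:5000" means requests to host port 8888 are
    forwarded to the server listening on port 5000 inside the container.
    """
    host, container = flag_value.split(":")
    return int(host), int(container)
```

Only the host port (the left half) needs to change if 8888 is already taken; the container port stays fixed by the server inside the image.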
- Open the HTML file with the frontend in a browser:
Note: if you see the message "Unable to load stored config, will use the default instead", this is OK on the first start.
- If you launched the container on a different machine or changed the default port 8888, correct the address and port in the "Server address" field of the "Server Settings" window (opened by the gear button)
- You are ready to compare embeddings for different texts, as described in the Usage section
- Stop the container when it is no longer needed:
docker stop "mera_nlp_embeddings_server_container"
- OS: Windows, Linux
- Docker installed
- Browser: Firefox or Chrome
- At least 4 GB RAM on the host where the container with BERT is started; BERT takes about 2.5 GB of RAM
- Alexander Ganyukhin - Frontend implementation, backend implementation, GitHub
- Georgy Dyuldin - Help with backend implementation, GitHub
- Yury Yakhno - Idea, technical driving, GitHub
See also the list of contributors who participated in this project.
- The whole team at MERA who use BERT models, for discussing the project idea and helping with coding and testing. Special thanks to Konstantin Kulikov and Leila Ishkuvatova as part of this team.
- DeepPavlov team for the convenient library