Galvanalyser is a system for automatically storing data generated by battery cycling
machines in a database, using a set of “harvesters” whose job it is to monitor the
datafiles produced by the battery testers and upload them in a standard format to the
server database. The server database is a relational database that stores each dataset
along with information about column types, units, and other relevant metadata (e.g. cell
information, owner, and the purpose of the experiment).
There are two user interfaces to the system:
- a web app front-end that can be used to view the stored datasets, manage the
harvesters, and input metadata for each dataset
- a REST API that can be used to download dataset metadata and the data itself. This
API conforms to the battery-api
OpenAPI specification, so tools based on that specification (e.g. the Python client)
can use the API.
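As a sketch of how a client might consume such metadata, the snippet below parses a dataset-metadata response and maps each column to its unit. The JSON field names here are illustrative assumptions for the example, not the actual battery-api schema:

```python
import json

# Hypothetical example of the kind of dataset-metadata JSON the REST API
# might return; the field names are assumptions for illustration only,
# not taken from the battery-api specification.
SAMPLE_RESPONSE = """
{
  "dataset": {
    "id": 42,
    "name": "cell_A_cycling",
    "columns": [
      {"name": "voltage", "unit": "V"},
      {"name": "current", "unit": "A"}
    ]
  }
}
"""

def column_units(response_text):
    """Map each column name to its unit from a metadata response."""
    dataset = json.loads(response_text)["dataset"]
    return {col["name"]: col["unit"] for col in dataset["columns"]}

print(column_units(SAMPLE_RESPONSE))
# {'voltage': 'V', 'current': 'A'}
```

In practice the response text would come from an HTTP request to the API rather than a hard-coded string.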
The documentation directory contains more detailed documentation on a number of topics, including the following items:
- FirstTimeQuickSetup.md – A quick start guide to
setting up your first complete Galvanalyser system
- AdministrationGuide.md – A guide to performing
administration tasks such as creating users and setting up harvesters
- DevelopmentGuide.md – A guide for developers working on the project
- ProjectStructure.md – An overview of the project folder
structure to guide developers to the locations of the various parts of the project
This section provides a brief overview of the technology used to implement the different parts of the project.
Dockerfiles are provided to run all components of this project in containers. A docker-compose file simplifies starting the complete server-side system, including the database, the web app, and the Nginx server. All components of the project can be run natively; however, using Docker simplifies this greatly.
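To illustrate the shape of such a setup, here is a minimal compose sketch; the service names, build contexts, and ports are assumptions for illustration, not the project's actual docker-compose file:

```yaml
# Illustrative sketch only -- service names, build contexts, and ports
# are assumptions, not the project's actual docker-compose.yml.
version: "3"
services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example
  webapp:
    build: ./webapp        # the web app front-end
    depends_on:
      - postgres
  nginx:
    image: nginx:stable
    ports:
      - "80:80"            # reverse-proxies the web app and API
    depends_on:
      - webapp
```

With a file like this in place, `docker-compose up` starts the whole server-side stack with one command.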
A Docker container is also used for building the web app and its dependencies to simplify cross platform deployment and ensure a consistent and reliable build process.
The harvesters are Python modules in the backend server that monitor directories for
tester datafiles, parse them according to their format, and write the data and any
metadata into the Postgres database. The harvesters are run, either periodically
or manually by a user, using a Celery
distributed task queue.
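The parsing step a harvester performs can be sketched as below: read a tester datafile and produce both the data rows and simple column metadata (name and unit) ready to be written to the database. The CSV format and `name_unit` header convention here are illustrative assumptions, not the actual tester file formats:

```python
import csv
import io

# A minimal, assumed example of a tester datafile: CSV with headers of
# the form "<column>_<unit>". Real tester formats differ per vendor.
SAMPLE_DATAFILE = """\
time_s,voltage_V,current_A
0.0,3.70,0.50
1.0,3.71,0.50
"""

def parse_tester_file(text):
    """Return (columns, rows), where columns carry name/unit metadata."""
    reader = csv.reader(io.StringIO(text))
    header = next(reader)
    # Split headers like "voltage_V" into a column name and a unit.
    columns = []
    for field in header:
        name, _, unit = field.rpartition("_")
        columns.append({"name": name, "unit": unit})
    rows = [[float(value) for value in row] for row in reader]
    return columns, rows

columns, rows = parse_tester_file(SAMPLE_DATAFILE)
print(columns[1])  # {'name': 'voltage', 'unit': 'V'}
print(len(rows))   # 2
```

In the real system a function along these lines would be wrapped in a Celery task, so that it can be scheduled periodically or triggered manually by a user.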