MLReef
Your Machine Learning life cycle in one platform
MLReef is an open source ML-Ops platform that helps you collaborate on, reproduce, and share your Machine Learning work.
MLReef is an ML/DL development platform containing four main sections:
- Data Management - Fully versioned data hosting and processing infrastructure
- Publishing code repositories - Containerized and versioned script repositories for immutable use in data pipelines
- Experiment Manager - Experiment tracking, environments and results
- ML-Ops - Pipelines & Orchestration solution for ML/DL jobs (K8s / Cloud / bare-metal)
Data Management
- Host your data using git / git LFS repositories.
- Work concurrently on data
- Full version control via git or git LFS
- Full view of the data processing and visualization history
- Data set management (access, history, pipelines)
Publishing Code
Adding only parameter annotations to your code...
# example of parameter annotations for a data processor
@data_processor(
    name="Resnet50",
    author="MLReef",
    command="resnet50",
    type="ALGORITHM",
    description="CNN Model resnet50",
    visibility="PUBLIC",
    input_type="IMAGE",
    output_type="MODEL"
)
@parameter(name='input-path', type='str', required=True, defaultValue='train', description="input path")
@parameter(name='output-path', type='str', required=True, defaultValue='output', description="output path")
@parameter(name='height', type='int', required=True, defaultValue=224, description="height of cropped images in px")
@parameter(name='width', type='int', required=True, defaultValue=224, description="width of cropped images in px")
def init_params():
    pass
...and publishing your scripts gets you the following:
- Containerization of your scripts
- Scripts that always run, with easy hyperparameter access in pipelines
- Execution environment (including specific packages & versions)
- Hyper-parameters
- ArgParser for command-line parameters with the currently used values
- Explicit parameters dictionary
- Input validation and guides
- Multiple containers based on version and code branches
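Annotations like the ones above can be realized with ordinary Python decorators. The sketch below is a hypothetical illustration (not MLReef's actual implementation) of how a @parameter decorator could collect declarations and generate an ArgParser exposing every hyper-parameter with its currently used value:

```python
import argparse

_TYPES = {'str': str, 'int': int, 'float': float}

def parameter(name, type='str', required=False, defaultValue=None, description=''):
    # Hypothetical re-implementation: each decorator call records one
    # parameter declaration on the decorated function.
    def decorator(func):
        declared = getattr(func, '_params', [])
        # prepend so the list matches the top-down decorator order
        func._params = [{'name': name, 'type': type, 'required': required,
                         'default': defaultValue, 'help': description}] + declared
        return func
    return decorator

def build_parser(func):
    # Turn the recorded declarations into an argparse parser, so every
    # hyper-parameter becomes reachable from the command line.
    parser = argparse.ArgumentParser(description=func.__doc__)
    for p in getattr(func, '_params', []):
        parser.add_argument('--' + p['name'], type=_TYPES[p['type']],
                            required=p['required'] and p['default'] is None,
                            default=p['default'], help=p['help'])
    return parser

@parameter(name='height', type='int', required=True, defaultValue=224,
           description='height of cropped images in px')
@parameter(name='width', type='int', required=True, defaultValue=224,
           description='width of cropped images in px')
def init_params():
    """Crop images to a fixed size."""

args = build_parser(init_params).parse_args(['--height', '256'])
print(args.height, args.width)  # 256 224
```

Running such a script with --help would then list every annotated parameter with its type, default, and description, which is one way the input validation and guides mentioned above could surface.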
Experiment Manager
- Complete experiment setup log
- Full source control info, including uncommitted local changes
- Execution environment (including specific packages & versions)
- Hyper-parameters
- Full experiment output automatic capture
- Artifacts storage and standard-output logs
- Performance metrics on individual experiments and comparative graphs for all experiments
- Detailed view of generated logs and outputs
- Extensive platform support and integrations
- Supports all Python-based ML/DL frameworks, for example PyTorch, TensorFlow, Keras, or scikit-learn
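A complete experiment setup log like the one described above can be assembled from the standard library alone. The snippet below is a generic sketch of what such a log might contain (Python version, platform, pinned package versions, and hyper-parameters); it is illustrative and not MLReef's internal format:

```python
import json
import platform
import sys
from importlib import metadata

def experiment_setup_log(hyperparams):
    # Snapshot everything needed to reproduce a run.
    return {
        'python': sys.version.split()[0],
        'platform': platform.platform(),
        # exact package versions pin the execution environment
        'packages': {d.metadata['Name']: d.version
                     for d in metadata.distributions()},
        'hyperparameters': hyperparams,
    }

log = experiment_setup_log({'lr': 1e-3, 'epochs': 10})
print(json.dumps({k: log[k] for k in ('python', 'hyperparameters')}, indent=2))
```

Persisting this dictionary next to the experiment's artifacts and stdout logs is enough to reconstruct the execution environment later.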
ML-Ops
- Concurrent computing pipelining
- Governance and control
- Access and user management
- Single permission management
- Resource management
- Model management
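Concurrent computing pipelining, as listed above, amounts to running independent pipeline stages in parallel and starting a stage as soon as its inputs are ready. A minimal, purely illustrative sketch (the stage functions and data are made up, not MLReef APIs):

```python
from concurrent.futures import ThreadPoolExecutor

def preprocess(shard):
    # illustrative stage: transform one data shard
    return [x * 2 for x in shard]

def train(shards):
    # illustrative stage: consume all preprocessed shards
    return sum(sum(s) for s in shards)

shards = [[1, 2], [3, 4], [5, 6]]
with ThreadPoolExecutor(max_workers=3) as pool:
    # independent shards are preprocessed concurrently
    processed = list(pool.map(preprocess, shards))
result = train(processed)  # downstream stage runs once inputs are ready
print(result)  # 42
```

An orchestrator generalizes this idea across machines, whether the executors run on Kubernetes, in the cloud, or on bare metal.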
MLReef Architecture
The MLReef ML components within the ML life cycle:
- Data storage components, currently based on git and git LFS.
- Model development based on working modules (published by the community or your team), with data management, data processing / data visualization / experiment pipelines (hosted or on-premises), and model management.
- ML-Ops orchestration, experiment and workflow reproducibility, and scalability.
Why MLReef?
MLReef is our solution to a problem we share with countless other researchers and developers in the machine learning/deep learning universe: Training production-grade deep learning models is a tangled process. MLReef tracks and controls the process by associating code version control, research projects, performance metrics, and model provenance.
We designed MLReef around data science best practices, combined with knowledge gained from DevOps and a deep focus on collaboration.
- Use it on a daily basis to boost collaboration and visibility in your team
- Create a job in the cloud from any code repository at the click of a button
- Automate processes and create pipelines to collect your experimentation logs, outputs, and data
- Make your ML life cycle transparent by cataloging it all on the MLReef platform