Kedro

A Python library for building robust, production-ready data pipelines


“The centre of your data pipeline.”

Kedro is a workflow development tool that helps you build data pipelines that are robust, scalable, deployable, reproducible and versioned. We provide a standard approach so that you can:

  • spend more time building your data pipeline,
  • worry less about how to write production-ready code,
  • standardise the way that your team collaborates across your project,
  • work more efficiently.

Kedro was originally designed by Aris Valtazanos and Nikolaos Tsaousis to solve challenges they faced in their project work.

How do I install Kedro?

Kedro is a Python package. To install it, simply run:

pip install kedro

For more detailed installation instructions, including how to set up Python virtual environments, please visit our installation guide.

What are the main features of Kedro?

1. Project template and coding standards

  • A standard and easy-to-use project template
  • Configuration for credentials, logging, data loading and Jupyter Notebooks / Lab
  • Test-driven development using pytest
  • Sphinx integration to produce well-documented code

2. Data abstraction and versioning

  • Separation of the compute layer from the data handling layer, including support for different data formats and storage options
  • Versioning for your data sets and machine learning models
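The separation described above can be sketched in plain Python. This is a conceptual illustration only, not Kedro's actual API: the `InMemoryCatalog` class and its methods are invented here to show how a catalog keeps load/save concerns out of compute functions.

```python
# Conceptual sketch (NOT Kedro's actual API): a minimal "catalog" that
# separates data loading/saving from the compute functions that use it.
class InMemoryCatalog:
    """Maps data set names to values, so compute code never touches
    file paths or storage details directly."""

    def __init__(self):
        self._data = {}

    def save(self, name, value):
        self._data[name] = value

    def load(self, name):
        return self._data[name]


def clean(rows):
    # Pure compute: no knowledge of where `rows` came from or where
    # the result will be stored.
    return [r for r in rows if r]


catalog = InMemoryCatalog()
catalog.save("raw_rows", [["a"], [], ["b"]])
catalog.save("clean_rows", clean(catalog.load("raw_rows")))
print(catalog.load("clean_rows"))  # [['a'], ['b']]
```

Because `clean` only sees plain Python objects, swapping the storage backend (local CSV, S3, a database) would not require touching the compute code.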

3. Modularity and pipeline abstraction

  • Support for pure Python functions (nodes) to break large chunks of code into small, independent sections
  • Automatic resolution of dependencies between nodes
  • (coming soon) Visualise your data pipeline with Kedro-Viz, a tool that shows the pipeline structure of Kedro projects
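The idea of automatic dependency resolution can be sketched as follows. Again, this is a hedged illustration, not Kedro's real `node`/`Pipeline` API: `Node` and `run_pipeline` are hypothetical names used only to show how execution order can be derived from named inputs and outputs.

```python
# Conceptual sketch (NOT Kedro's actual API): nodes are pure functions
# with named inputs and outputs; execution order is resolved
# automatically from those names.
from collections import namedtuple

Node = namedtuple("Node", ["func", "inputs", "outputs"])


def run_pipeline(nodes, data):
    """Repeatedly run any node whose inputs are all available."""
    remaining = list(nodes)
    while remaining:
        ready = [n for n in remaining if all(i in data for i in n.inputs)]
        if not ready:
            raise ValueError("Unresolvable dependencies between nodes")
        for n in ready:
            data[n.outputs] = n.func(*(data[i] for i in n.inputs))
            remaining.remove(n)
    return data


# Nodes are deliberately listed out of order; the runner still executes
# "doubled" before "quadrupled" because of the declared dependencies.
nodes = [
    Node(lambda x: x * 2, inputs=["doubled"], outputs="quadrupled"),
    Node(lambda x: x * 2, inputs=["raw"], outputs="doubled"),
]
result = run_pipeline(nodes, {"raw": 1})
print(result["quadrupled"])  # 4
```

Declaring data dependencies by name, rather than calling functions directly, is what lets a tool reorder, parallelise, or visualise the pipeline.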

Note: Read our FAQs to learn how we differ from workflow managers like Airflow and Luigi.

4. Feature extensibility

  • A plugin system that injects commands into the Kedro command line interface (CLI)
  • (coming soon) List of officially supported plugins:
    • Kedro-Airflow, making it easy to prototype your data pipeline in Kedro before deploying to Airflow, a workflow scheduler
    • Kedro-Docker, a tool for packaging and shipping Kedro projects within containers
  • Kedro can be deployed locally, on-premise, or on cloud (AWS, Azure and GCP) servers and clusters (EMR, Azure HDInsight, GCP and Databricks)

[Image: random pipeline visualisation using Kedro-Viz (coming soon)]

How do I use Kedro?

Our documentation explains:

  • A typical Kedro workflow
  • How to set up the project configuration
  • Building your first pipeline
  • How to use the CLI offered by kedro_cli.py (kedro new, kedro run, ...)

Note: The CLI is a convenient tool for running Kedro commands, but you can also invoke the Kedro CLI as a Python module with python -m kedro.

How do I find Kedro documentation?

This CLI command will open the documentation for your current version of Kedro in a browser:

kedro docs

Documentation for the latest stable release can be found here.

Can I contribute?

Yes! Want to help build Kedro? Check out our guide to contributing.

How do I upgrade Kedro?

We use Semantic Versioning. The best way to safely upgrade is to check our release notes for any notable breaking changes.

Once Kedro is installed, you can check your version as follows:

kedro --version

To later upgrade Kedro to a different version, simply run:

pip install kedro -U