Transform Application Lifecycle Data and Design an ETL Pipeline Architecture for Ingesting Data from Multiple Sources into Redshift
This project is composed of two parts: Part 1 and Part 2.


This part involves ingesting raw application lifecycle data in .csv format ("CC Application Lifecycle.csv").
The data is transformed in Python so that each application stage becomes a column name, with the time of stage completion as the value against each customer ID.
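The reshaping described above can be sketched with a pandas pivot. This is a minimal illustration, assuming hypothetical column names (`customer_id`, `stage`, `completed_at`) and sample rows; the real columns in "CC Application Lifecycle.csv" may differ:

```python
import io

import pandas as pd

# Hypothetical sample mirroring the expected shape of the raw lifecycle CSV.
raw = io.StringIO(
    "customer_id,stage,completed_at\n"
    "C001,Application Started,2021-01-01 09:00\n"
    "C001,Application Submitted,2021-01-01 09:30\n"
    "C002,Application Started,2021-01-02 10:00\n"
)

df = pd.read_csv(raw, parse_dates=["completed_at"])

# Pivot: one row per customer ID, one column per lifecycle stage,
# cell value = time that stage was completed (NaT if never reached).
wide = df.pivot(index="customer_id", columns="stage", values="completed_at")
print(wide)
```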

Files included in this section:

  • Solution Directory:
    • (Contains transformation class for application lifecycle raw data)
    • (Ingest and executes transformations for application lifecycle raw data)
  • Test Directory:
    • (Runs a series of tests for objects in the transformation class)
    • Input Directory (Contains all the input test files)
    • Output Directory (Contains all the output test files)


  1. Execute to obtain the output file for the transformed application lifecycle data.


  1. Extra transformations, bug fixes and other modifications can be added in as objects.
  2. For new transformations (new functions), add a test for the function in and execute it with pytest -vv.
  3. Call the object in after the test passes to return the desired output.
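The add-a-function, add-a-test workflow above can be sketched as follows. Both the transformation (`dedupe_latest`) and its test are hypothetical examples, not part of the existing transformation class:

```python
import pandas as pd

# Hypothetical new transformation: keep only the latest completion
# time per (customer_id, stage) pair before pivoting.
def dedupe_latest(df: pd.DataFrame) -> pd.DataFrame:
    return (
        df.sort_values("completed_at")
          .drop_duplicates(subset=["customer_id", "stage"], keep="last")
    )

# A matching pytest-style test; discovered and run with `pytest -vv`.
def test_dedupe_latest_keeps_most_recent():
    df = pd.DataFrame({
        "customer_id": ["C001", "C001"],
        "stage": ["Application Started", "Application Started"],
        "completed_at": pd.to_datetime(["2021-01-01", "2021-01-03"]),
    })
    out = dedupe_latest(df)
    assert len(out) == 1
    assert out["completed_at"].iloc[0] == pd.Timestamp("2021-01-03")
```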


This part presents an architectural design for ingesting data from a MongoDB database into a Redshift data platform. The solution accommodates the addition of more data sources in the near future. The DDL scripts that form part of the solution are reusable for ingesting and loading data into Redshift.

The files included in this section establish the creation of target tables for the data ingestion process:

  • dwh.cfg (Infrastructure parameters and configuration)
  • (DDL queries to drop, create, and copy/insert data into Redshift)
  • (Class to manage the database connection and the setup and teardown of tables in Redshift)
  • (Script to execute processes in the table_setup_load class)
  • (Script to test the setup and teardown of resources)
  • requirement.txt (Key libraries needed to execute the .py scripts)
  • makefile (Automates installing libraries and testing the .py scripts)
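One way the reusable DDL and COPY queries might be parameterised from dwh.cfg is sketched below. The section names, keys, bucket path and IAM role ARN are all placeholders for illustration; match them to the real dwh.cfg:

```python
import configparser

# Hypothetical dwh.cfg contents; the real file may use different keys.
config = configparser.ConfigParser()
config.read_string("""
[S3]
APPLICATIONS_DATA = s3://example-bucket/applications/
[IAM_ROLE]
ARN = arn:aws:iam::123456789012:role/redshift-copy-role
""")

# Reusable DDL, plus a Redshift COPY statement built from the config.
create_applications = """
CREATE TABLE IF NOT EXISTS applications (
    customer_id  VARCHAR,
    stage        VARCHAR,
    completed_at TIMESTAMP
);
"""

copy_applications = f"""
COPY applications FROM '{config["S3"]["APPLICATIONS_DATA"]}'
IAM_ROLE '{config["IAM_ROLE"]["ARN"]}'
CSV IGNOREHEADER 1;
"""
```

Keeping the queries as plain strings driven by config values is what makes them reusable when new buckets or sources are added.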


  1. Execute to create and load data into target tables from S3.


  1. Bucket file sources and other config parameters can be added in dwh.cfg.
  2. New DDL queries, including ones that ingest data from multiple tables via aggregations/joins, can be added in
  3. For other functions not captured in this section, custom functions can be added in
  4. Before executing scripts in production environments, test the modifications by executing
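The setup/teardown responsibilities of the table_setup_load class described above might look like the sketch below. The class name, method names and DB-API injection are assumptions for illustration, not the project's actual implementation:

```python
class TableSetupLoad:
    """Run setup (create/copy) and teardown (drop) queries over a DB connection."""

    def __init__(self, conn):
        # `conn` is any DB-API 2.0 connection, e.g. one returned by
        # psycopg2.connect(...) using the cluster parameters in dwh.cfg.
        self.conn = conn

    def setup(self, create_queries):
        # Create target tables (and optionally run COPY/INSERT statements).
        cur = self.conn.cursor()
        for query in create_queries:
            cur.execute(query)
        self.conn.commit()

    def teardown(self, drop_queries):
        # Drop the tables and release the connection.
        cur = self.conn.cursor()
        for query in drop_queries:
            cur.execute(query)
        self.conn.commit()
        self.conn.close()
```

Injecting the connection keeps the class testable: the test script can pass a throwaway local connection, while production passes a Redshift connection built from dwh.cfg.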

The architecture below highlights the processes involved in ingesting data from various data sources into Redshift.

  • Architecture

Data Architecture (diagram)


View on GitHub