cookiecutter-pyspark-cloud

Zero to PySpark on AWS EMR - quickly, sensibly.


Run PySpark code in the cloud on the Amazon Web Services (AWS) Elastic MapReduce (EMR) service in a few simple steps with this cookiecutter project template!

Quickstart

pip install -U "cookiecutter>=1.7"
cookiecutter --no-input https://github.com/daniel-cortez-stevenson/cookiecutter-pyspark-cloud.git
cd pyspark-cloud
make install
pyspark_cloud

Your console will look something like:

pyspark_cloud command-line banner

Features

Infrastructure Overview

The infrastructure is defined in an AWS CloudFormation template (visualized in the repo with an InfViz.io diagram).
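A stack defined this way is typically stood up with the AWS CLI. A minimal sketch, not taken from this template: the template path `cloudformation/stack.yaml` and the stack name `pyspark-cloud-stack` are hypothetical placeholders you would adjust to your generated project's layout:

```shell
# Validate the template first - catches syntax errors
# before any resources are created
aws cloudformation validate-template \
    --template-body file://cloudformation/stack.yaml

# Create or update the stack; CAPABILITY_NAMED_IAM is required
# if the template creates named IAM roles for the EMR cluster
aws cloudformation deploy \
    --template-file cloudformation/stack.yaml \
    --stack-name pyspark-cloud-stack \
    --capabilities CAPABILITY_NAMED_IAM
```

See `aws cloudformation deploy help` for parameter overrides and tagging options.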

Usage

  1. Clone this repo:
git clone https://github.com/daniel-cortez-stevenson/cookiecutter-pyspark-cloud.git
cd cookiecutter-pyspark-cloud
  2. Create a Python environment with dependencies installed:
conda create -n cookiecutter -y "python=3.7"
conda activate cookiecutter
pip install -r requirements.txt
  3. Make any changes to the template, as you wish.

  4. Create your project from the template:

cd ..
cookiecutter ./cookiecutter-pyspark-cloud
  5. Initialize git:
cd *your-repo_name*
git init
git add .
git commit -m "Initial Commit"
  6. Create a new Conda environment for your new project & install project development dependencies:
conda deactivate
conda create -n *your-repo_name* -y "python=3.6"
conda activate *your-repo_name*
make install-dev
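With the project installed and a cluster running, PySpark work is usually submitted to EMR as a Spark step. A hedged sketch using the AWS CLI: the cluster ID `j-XXXXXXXXXXXXX` and the S3 script path are placeholders, not values defined by this template:

```shell
# Submit a PySpark script to a running EMR cluster as a Spark step.
# Replace the cluster ID and the s3:// path with your own values.
aws emr add-steps \
    --cluster-id j-XXXXXXXXXXXXX \
    --steps Type=Spark,Name="pyspark-cloud job",ActionOnFailure=CONTINUE,Args=[--deploy-mode,cluster,s3://your-bucket/jobs/main.py]
```

Check step status with `aws emr describe-step --cluster-id <id> --step-id <step-id>`.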

Contribute

Contributions are welcome! Thanks!

Submit a Bug or Feature Request

Submit a Pull Request

Acknowledgements

Most of the ideas in this repo are not new; they are simply expressed in a new way. Thanks, folks! :raised_hands: