Easily deploying microservices to AWS Elastic Beanstalk

Every time I read about microservices deployment, it seems undoable without a DevOps genius who writes 500 lines of Chef scripts in Ruby to orchestrate 50 Linux virtual machines. It doesn’t have to be this way – deploying autonomous components can be done with less effort on a typical PaaS environment (e.g. AWS Beanstalk, Azure App Services), using Docker to greatly simplify the developer/DevOps work. That said, deploying microservices still poses some distinct challenges, like exposing all the services under a single domain, allowing services to find each other, and more – the following four-part tutorial demonstrates how to overcome these challenges with uncomplicated deployment tools.

 

Parts

Part 1 – Storing microservices as Docker images in a repository (ECR – Elastic Container Registry)

Part 2 – Deploying the Docker images as Beanstalk applications – soon

Part 3 – Connecting and discovering Microservices – soon

Part 4 – Exposing Microservices under a single gateway and domain – soon

 

Part 1 – Storing microservices as Docker images in ECR (Elastic Container Registry)

There’s nothing too fancy here: as with any common monolith application, we’re about to create a typical AWS deployment stack with familiar services (Elastic Beanstalk, ECS, API Gateway), only this time we’re deploying microservices – a set of independent software components that live in different processes and collaborate over HTTP or message queues.

Why is microservices deployment any different compared to monolith deployment? Most of the platforms we depend on were built with a monolithic state of mind – running all components in a single process. Consider the following examples:

  1. Service discovery – in order to call a REST API, how can service A (e.g. orders) discover the address of service B (e.g. members)?
  2. Splitting the code base – each PaaS environment (an app platform like Beanstalk) allows uploading a single code base per application; how can we structure our solution using multiple code bases (one per service)?
  3. Exposing services to the world – we’d like to avoid exposing each service under a different domain name, as this reflects the inner architecture to the outside world; consequently, internal changes might break the contract with external callers. Moreover, web pages might not be able to call different domains due to cross-origin HTTP request restrictions.

Our goal is to overcome these challenges without spending our budget on deployment tools. I assume that you’re familiar with AWS basics. Ready?

A: preparing the containers for shipment

At first, let’s create a Docker image for each of our microservices – this allows treating all services, whether written in Java or Node.JS, as ‘the same thing’ that can be deployed with the same set of tools:

1. Create a Dockerfile for each μService

A Dockerfile describes the desired container: the code and environment artifacts packed together for deployment (that’s the beauty of Docker…). Writing Dockerfiles is language-related and spans beyond the scope of this article (read here about Dockerfiles). The following example illustrates a typical Dockerfile for a Node.JS app:
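A minimal sketch – the base image version, port and entry file name are illustrative assumptions, so adapt them to your own app:

```dockerfile
# Start from an official Node.JS base image (version is illustrative)
FROM node:8-alpine

WORKDIR /usr/src/app

# Copy the dependency manifests first, so the 'npm install' layer
# stays cached as long as package.json doesn't change
COPY package.json package-lock.json ./
RUN npm install --production

# Copy the rest of the service's code
COPY . .

# Port and entry file are assumptions – match your own app
EXPOSE 3000
CMD ["node", "server.js"]
```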

 

 

2. Test locally with docker compose (or swarm)

Before sending the container images across the narrow sea, you’ll want to gauge their behaviour locally using an orchestration/composition tool that provides a virtual environment for containers. Your best bet is docker-compose, which allows instantiation of multiple containers in a virtual network with one command. Following is a typical compose file:
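A minimal sketch, reusing the orders and members services from the examples earlier in the article (service names, build paths and ports are assumptions):

```yaml
version: '2'
services:
  orders:
    build: ./orders        # folder containing the orders Dockerfile
    ports:
      - "3000:3000"
  members:
    build: ./members       # folder containing the members Dockerfile
    ports:
      - "3001:3001"
```

A nice side effect of the virtual network: inside it, containers can reach each other by service name, e.g. orders can call members at http://members:3001.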

 

 

 

Now we can instantiate all the containers together with a single command:
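For example, from the folder containing docker-compose.yml:

```shell
# Build the images if needed and start every service in the compose file
docker-compose up --build
```

Add -d to run the containers in the background instead of streaming their logs.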

 

 

3. Build the Docker image

Run the docker build command and tag the image with a meaningful name:
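For example, assuming each service’s Dockerfile sits in its own folder (the names are illustrative):

```shell
# -t tags the resulting image with a readable name
docker build -t orders ./orders
docker build -t members ./members
```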

 

 

 

B: preparing the ship

Our Docker images are ready; let’s now create a repository, AWS ECR (Elastic Container Registry), to host these images.

1. Create an ECR repository

ECS (Elastic Container Service) is the AWS service for managing a cluster by hosting, deploying, scheduling and monitoring containers. We’re currently interested in its image-hosting component, ECR:
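If you prefer the CLI over the console, the same repository can be created with one command (the repository name and region are illustrative):

```shell
# Creates a private image repository named 'orders' in us-east-1
aws ecr create-repository --repository-name orders --region us-east-1
```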

 



 

2. Tag the images before pushing

Using the AWS CLI and the Docker CLI we can log in to AWS, tag each image and push it to the repository that was created in the previous step. First, run the following command with the AWS CLI to get back a login string:
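For example (the region is an assumption; note that newer versions of the AWS CLI replace this command with aws ecr get-login-password):

```shell
# Prints a ready-made 'docker login' command containing a temporary token
aws ecr get-login --region us-east-1
```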

 

 

 

Following is a typical result – copy this text:
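The output has roughly the following shape – the account id and token below are placeholders, not real values:

```shell
docker login -u AWS -p <very-long-temporary-token> -e none https://<aws-account-id>.dkr.ecr.us-east-1.amazonaws.com
```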

 

 

Now paste the login string into the Docker CLI:

Time to give each of our images a meaningful name so we can identify it in the repository:
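The target name is the repository URI followed by a tag; the account id, region and service names below are placeholders:

```shell
# docker tag <local-image> <repository-uri>:<tag>
docker tag orders:latest <aws-account-id>.dkr.ecr.us-east-1.amazonaws.com/orders:latest
docker tag members:latest <aws-account-id>.dkr.ecr.us-east-1.amazonaws.com/members:latest
```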

 

 

 

3. Push the images to AWS Elastic Container Registry (ECR)

We’re ready for shipment! Let’s now push the images using the Docker CLI:
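One push per tagged image (the account id and region are placeholders):

```shell
docker push <aws-account-id>.dkr.ecr.us-east-1.amazonaws.com/orders:latest
docker push <aws-account-id>.dkr.ecr.us-east-1.amazonaws.com/members:latest
```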

 

 

 

To ensure that the images were indeed uploaded, navigate again to the repository and check that each image is in place:
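This can also be checked from the CLI (the repository name is illustrative):

```shell
# Lists the image tags stored in the 'orders' repository
aws ecr list-images --repository-name orders
```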

 

 

We got it – our code is now ready for deployment.

Now that we have our ship loaded with containers, it’s time to sail away – in part 2 (to be published very soon) we’ll walk through creating Beanstalk applications that run these containers. Stay tuned!

 

  • doublemarked

    A gripe – your article is not dated and part 2 is “coming soon”. Because there is no date, the reader cannot determine whether this information is up to date or not. And because there is no date, there is no way for a reader to determine whether part 2 is indeed “coming soon” or whether your blog is dead.

    • Yoni Goldberg

      Thanks for the gripe. I’ll definitely add dates to posts and part 2 really comes very soon

      • doublemarked

        Ok great! 🙂

  • Anil Bhanushali

    waiting for part 2 – 3 – 4 🙂

  • Abdi Darmawan

    part 2 ?, find other website how to deploy container from images ECR to Beanstalk lol

© 2017 Yoni Goldberg. All rights reserved.