Easily deploying microservices to AWS Elastic Beanstalk

Every time I read about microservices deployment, it seems undoable without a DevOps genius who writes 500 lines of Chef scripts in Ruby to orchestrate 50 Linux virtual machines. It doesn’t have to be this way – deploying autonomous components can be done with less effort on a typical PaaS environment (e.g. AWS Beanstalk, Azure App Services), using Docker to greatly simplify the developer/DevOps work. That said, deploying microservices still raises some unique questions – exposing all the services under a single domain, allowing services to find each other, and more. The following four-part tutorial demonstrates how to answer these questions with uncomplicated deployment tools.

 

Parts

Part 1 – Storing microservices as Docker images in a repository (ECR – Elastic Container Registry)

Part 2 – Deploying the Docker images as Beanstalk applications – soon

Part 3 – Connecting and discovering Microservices – soon

Part 4 – Exposing Microservices under a single gateway and domain – soon

 

Part 1 – Storing microservices as Docker images in ECR (Elastic Container Registry)

There’s nothing too fancy here: like any common monolith application, we’re about to create a typical AWS deployment stack with familiar services (Elastic Beanstalk, ECS, API Gateway), only this time we’re deploying microservices – a set of independent software components that live in different processes and collaborate using HTTP or message queues.

Why is microservices deployment any different compared to monolith deployment? Most of the platforms we depend on were built with a monolithic state of mind – running all components in a single process. Consider the following examples:

  1. Service discovery – when calling a REST API, how can service A (e.g. orders) discover the address of service B (e.g. members)?
  2. Splitting the code base – each PaaS environment (an app platform like Beanstalk) allows uploading a single code base per application; how can we structure our solution using multiple code bases (one per service)?
  3. Exposing services to the world – we’d like to avoid exposing each service under a different domain name, as that reflects the inner architecture to the outside world; consequently, internal changes might break the contract with external callers. Moreover, web pages might not be able to call different domains due to cross-origin HTTP request restrictions.

Our goal is to overcome these challenges without spending our budget on deployment tools. I assume that you’re familiar with AWS basics. Ready?

A: preparing the containers for shipment

First, let’s create a Docker image for each of our microservices – this allows treating all services, whether written in Java or Node.JS, as ‘the same thing’ that can be deployed with the same set of tools:

1. Create a Dockerfile for each μService

A Dockerfile describes the desired container: the code and environment artifacts packed together for deployment (that’s the beauty of Docker…). Writing Dockerfiles is language-related and spans beyond the scope of this article (read here about Dockerfiles). The following example illustrates a typical Dockerfile for a Node.JS app:

 

FROM node:argon

# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install app dependencies
ADD package.json /usr/src/app/package.json
RUN npm install

# Bundle app source
ADD . /usr/src/app
EXPOSE 3000

# Execute a command to start the application (assumes forever is
# available, e.g. as a package.json dependency; app.js is illustrative –
# use your service's actual entry point)
CMD forever -f -w --minUptime 3000 --spinSleepTime 1000 app.js

 

2. Test locally with Docker Compose (or Swarm)

Before sending the container images across the narrow sea, you want to gauge their behaviour locally using any orchestration/composition tool that provides a virtual environment for containers. Your best bet is docker-compose, which allows instantiating multiple containers in a virtual network with one command. Following is a typical compose file:

 

#definition of a database container we need at run-time
db:
  image: mongo
  ports:
    - "27017:27017"
  command: "--smallfiles --logpath=/dev/null"
#we also need rabbitmq for messaging between services
amqp:
  image: rabbitmq:3-management
  ports:
    - "5672:5672"
    - "15672:15672"
#stating that we need the members microservice
members:
  build: ./services/members
  command: npm start
  ports:
    - "2999:3000"
  links:
    - db
    - amqp
#we also need the order microservice
orders:
  build: ./services/orders
  command: npm start
  ports:
    - "3000:3000"
  links:
    - db
    - amqp
    - members

 

 

Now we can instantiate all the containers together with a single command:

docker-compose up

#this by itself will download MongoDB and RabbitMQ images, build images
#for our microservices and start them all in a virtual environment!
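Once the stack is up, a quick smoke test can confirm that the services answer on the ports mapped in the compose file (the endpoint paths below are hypothetical – use your services’ actual routes):

```shell
# Start the whole stack in the background
docker-compose up -d

# List the running containers and their port mappings
docker-compose ps

# Hit each service through its host-mapped port
# (the /members and /orders paths are illustrative examples)
curl http://localhost:2999/members
curl http://localhost:3000/orders

# Tear everything down when done
docker-compose down
```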

 

 

3. Build the Docker image

Run the docker build command and tag the image with a meaningful name:

 

docker build -t {my microservice name} .
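Since each service lives in its own folder (as in the compose file above), a small loop can build and tag all the images in one go – a sketch assuming the `./services/<name>` layout used earlier:

```shell
# Build one tagged image per microservice; assumes each directory
# under ./services contains that service's Dockerfile
for svc in members orders; do
  docker build -t "$svc" "./services/$svc"
done
```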

 

 

B: preparing the ship

Our Docker images are ready; let’s now create a repository in AWS ECR to host these images.

1. Create an ECR repository

ECS (Elastic Container Service) is the AWS service for managing a cluster by hosting, deploying, scheduling and monitoring containers. ECR (Elastic Container Registry) is its image-hosting component, and that’s the part we’re currently interested in:

 



 

2. Tag the images before pushing

Using the AWS CLI and the Docker CLI, we can log in to AWS, tag each image and push it to the repository that was created in the previous step. First, run the following command with the AWS CLI to get back a login string:

 

aws ecr get-login --region eu-west-1

 

 

The result is a long, ready-made `docker login` command – copy this text and paste it into the Docker CLI to authenticate against the registry.
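Instead of copy-pasting, you can also execute the printed login command directly from your shell in one step (same region flag as above):

```shell
# get-login prints a full `docker login` command; eval runs it in place
eval "$(aws ecr get-login --region eu-west-1)"
```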

Time to give each of our images a meaningful name so we can identify it in the repository:

 

docker tag members 083382950718.dkr.ecr.eu-west-1.amazonaws.com/maindockerrepository:members

 

 

3. Push the images to AWS elastic container repository (ECR)

We’re ready for shipment! Let’s now push the images using the Docker CLI:

 

docker push 083382950718.dkr.ecr.eu-west-1.amazonaws.com/maindockerrepository:members
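With more than a couple of services, the tag-and-push steps are worth scripting. A minimal sketch, assuming the account ID (083382950718) and repository name (maindockerrepository) shown above – replace them with your own:

```shell
# Compose the full ECR image URI for a given service tag
ecr_image_name() {
  local account="$1" region="$2" repo="$3" service="$4"
  echo "${account}.dkr.ecr.${region}.amazonaws.com/${repo}:${service}"
}

# Tag and push every microservice image (the docker calls are skipped
# when Docker is not installed, so the naming logic runs on its own)
for svc in members orders; do
  img="$(ecr_image_name 083382950718 eu-west-1 maindockerrepository "$svc")"
  if command -v docker >/dev/null 2>&1; then
    docker tag "$svc" "$img"
    docker push "$img"
  fi
done
```

You can also verify the upload from the CLI with `aws ecr list-images --repository-name maindockerrepository`.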

 

 

To ensure that the images were indeed uploaded, navigate again to the repository and verify that each image is in place:

 

 

We got it – our code is now ready for deployment.

Now that we have our ship loaded with containers, it’s time to sail away – in part 2 (to be published very soon) we’ll walk through creating Beanstalk applications that run these containers. Stay tuned!

 

  • doublemarked

    A gripe – your article is not dated and part 2 is “coming soon”. Because there is no date, the reader cannot determine whether this information is up to date or not. And because there is no date, there is no way for a reader to determine whether part 2 is indeed “coming soon” or whether your blog is dead.

    • Yoni Goldberg

      Thanks for the gripe. I’ll definitely add dates to posts and part 2 really comes very soon

      • doublemarked

        Ok great! 🙂

      • John Stephen Soriao

        I really hope that very soon is tomorrow since I’ve been trying those next steps for almost 3 days already but nothing’s working. My two images comprise of a golang rest api and a react + express frontend app. The react exposes 3000 while api exposes 8080, frontend calls to api via localhost:8080 since they’re in same machine. But I don’t know how to create the correct Dockerrun.aws.json file for this. Please part 2 please

  • Anil Bhanushali

    waiting for part 2 – 3 – 4 🙂

  • Abdi Darmawan

    part 2 ?, find other website how to deploy container from images ECR to Beanstalk lol

  • Yury Makarchuk

    Thanks for the post.
    Hope part 2 will e released 🙂

© 2018 Yoni Goldberg. All rights reserved.