Kubernetes is a container orchestration system originally developed at Google. It has grown steadily in popularity and is now widely used. In this article, I demonstrate how to get up and running with Kubernetes on AWS using the official installation script.

Edit: For a more up-to-date post on setting up a Kubernetes cluster in AWS, please see my new Creating a Kubernetes cluster using KOPS on AWS post.

Pre-requisites for Kubernetes on AWS

To get started with AWS you will need to install the AWS CLI tools which are available at https://aws.amazon.com/cli/.

Once installed, you will need to configure the CLI with an IAM user that has sufficient AWS access to create the necessary AWS resources as required.

$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: (press Enter to accept the default)

Installing Kubernetes on AWS is easy thanks to a bash install script provided with the Kubernetes release. This script can be customised via environment variables, which we must set up first.

Firstly we need to specify that we want to use AWS as the provider:

$ export KUBERNETES_PROVIDER=aws
Next we need to make sure EC2 instances are created in the correct region, as by default they will be created in us-west-2a (Oregon). Being in the UK, I prefer eu-west-1 (Ireland):

$ export AWS_S3_REGION=eu-west-1
$ export KUBE_AWS_ZONE=eu-west-1a
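One thing worth sanity-checking before running the installer is that the zone actually belongs to the region (eu-west-1a is a zone within eu-west-1). A small sketch, assuming the variable names and values above:

```shell
# Check that KUBE_AWS_ZONE sits inside AWS_S3_REGION before installing.
# Values mirror the exports above; adjust them to your own region/zone.
AWS_S3_REGION=eu-west-1
KUBE_AWS_ZONE=eu-west-1a

# A zone name is the region name followed by a single letter.
case "$KUBE_AWS_ZONE" in
  "$AWS_S3_REGION"?) result="zone/region consistent" ;;
  *)                 result="KUBE_AWS_ZONE is not in AWS_S3_REGION" ;;
esac
echo "$result"
```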

We can also override the master and node instance type and the number of nodes to create like so:

$ export MASTER_SIZE=m3.medium
$ export NODE_SIZE=m3.large
$ export NUM_NODES=3

By default, t2.micro instances will be used for the nodes and an m3.medium instance for the master. If you are deploying anything substantial, you will require bigger instance types than t2.micro.

In order to reduce costs during dev/testing you can request spot instances by setting the maximum spot price you are prepared to pay per instance hour (in USD):

$ export NODE_SPOT_PRICE=0.030

This will result in an auto-scaling group being created that uses a launch configuration that will automatically request spot instances for you as and when required.

Should you go down the route of spot instances, be aware that if the spot price rises above your maximum bid price (3 cents an hour in the example above), your spot instances will be terminated without warning!
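It helps to know the worst case before bidding: with the example values above (3 nodes at a maximum of $0.030 per instance hour), the ceiling on node spend is easy to work out:

```shell
# Worst-case monthly node cost if all spot instances run continuously
# at the full bid price: nodes x price/hour x 24 hours x 30 days.
max_monthly=$(awk 'BEGIN { printf "%.2f", 3 * 0.030 * 24 * 30 }')
echo "maximum spot spend: \$${max_monthly}/month"
```

In practice spot instances usually run well below the bid price, so this is an upper bound rather than an estimate.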

Installing Kubernetes on AWS

With the customisation via environment variables done we can begin:

$ curl https://get.k8s.io > kubernetes_install.sh
$ bash kubernetes_install.sh

This will start by downloading the Kubernetes release tarball. This file is quite sizeable – at the time of writing the v1.3.5 release is 1.4GB! Once downloaded it will be extracted into a directory named kubernetes, inside the directory you executed the bash script from.

KUBE_MANIFESTS_TAR_URL: unbound variable

At the time of writing there is a bug in release v1.3.5 which causes a failure. The full error message is:

./cluster/../cluster/../cluster/aws/../../cluster/common.sh: line 518: KUBE_MANIFESTS_TAR_URL: unbound variable
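The root cause is bash's nounset option: the installer runs with set -u, under which referencing an unset variable aborts the script, whereas the ${VAR:-} form used in the fix below substitutes an empty default. A minimal demonstration:

```shell
set -u                              # the same strict mode the installer uses
unset KUBE_MANIFESTS_TAR_URL 2>/dev/null || true

# Fatal under set -u (this is what line 518 of common.sh was doing):
#   echo "$KUBE_MANIFESTS_TAR_URL"  # aborts with "unbound variable"

# Safe: expands to an empty string when the variable is unset.
kube_manifests_tar_url="${KUBE_MANIFESTS_TAR_URL:-}"
echo "expanded to: '${kube_manifests_tar_url}'"
```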

Until the fix has been released you can apply a workaround yourself by editing the ./kubernetes/cluster/common.sh file and updating the build-kube-env function so that the kube_manifests_tar_url assignment uses an empty default, as shown below:

function build-kube-env {
  local master=$1
  local file=$2

  local server_binary_tar_url=$SERVER_BINARY_TAR_URL
  local salt_tar_url=$SALT_TAR_URL
  local kube_manifests_tar_url="${KUBE_MANIFESTS_TAR_URL:-}"
  if [[ "${master}" == "true" && "${MASTER_OS_DISTRIBUTION}" == "coreos" ]] || \
     [[ "${master}" == "false" && "${NODE_OS_DISTRIBUTION}" == "coreos" ]] ; then
    # TODO: Support fallback .tar.gz settings on CoreOS
    server_binary_tar_url=$(split_csv "${SERVER_BINARY_TAR_URL}")
    salt_tar_url=$(split_csv "${SALT_TAR_URL}")
    kube_manifests_tar_url=$(split_csv "${KUBE_MANIFESTS_TAR_URL}")
  fi
  # ...the rest of the function is unchanged

Once you have updated the ./kubernetes/cluster/common.sh file you can execute the kubernetes_install.sh script again. However, by default this will download the release again and overwrite your script changes. To prevent that, simply set the following environment variable before re-running the script:

$ export KUBERNETES_SKIP_DOWNLOAD=true

Whilst the installation is in progress you can install the command line tool, kubectl, so that you can interact with your Kubernetes cluster from the command line. Point your browser at https://kubernetes.io/docs/user-guide/prereqs/ and follow the instructions for your OS.

Once the installation is complete you will see the IP address of the master node. You can get the cluster configuration at any time using the following command:

$ kubectl config view

This will produce some YAML such as that below. Note the server address and the username and password:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: REDACTED
  name: aws_k8s
contexts:
- context:
    cluster: aws_k8s
    user: aws_k8s
  name: aws_k8s
current-context: aws_k8s
kind: Config
preferences: {}
users:
- name: aws_k8s
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: GsYnDhdf3Dwg4hgsd8wWFgHXS7Srknmq
- name: aws_k8s-basic-auth
  user:
    password: buwjcEkgcenEVq2g
    username: admin

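If you want the basic-auth credentials available to scripts rather than copying them by hand, they can be grepped out of a saved copy of the config. A rough sketch against a sample file mirroring the structure above (the file path and values are illustrative):

```shell
# Write a sample kubeconfig fragment mirroring the users section above.
# In practice you would save the real one: kubectl config view > /tmp/kubeconfig
cat > /tmp/kubeconfig <<'EOF'
- name: aws_k8s-basic-auth
  user:
    password: buwjcEkgcenEVq2g
    username: admin
EOF

# Pull out the basic-auth username and password fields.
username=$(awk '/username:/ {print $2}' /tmp/kubeconfig)
password=$(awk '/password:/ {print $2}' /tmp/kubeconfig)
echo "logging in as ${username}"
```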
You can then point your browser at the master address and, when prompted by the browser, enter the credentials. You should then see the web interface below:

Kubernetes UI

Deploying to Kubernetes on AWS

The simplest way to get started is to deploy an NGINX container as a single Kubernetes Pod. Similar to a Docker run command we can create an NGINX container and expose it on port 80.

$ kubectl run hello-nginx --image=nginx --port=80
$ kubectl get deployment hello-nginx

NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-nginx   1         1         1            1           10s

We can also easily scale the service to more than one container:

$ kubectl scale deployment hello-nginx --replicas=2
$ kubectl get deployment hello-nginx

NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-nginx   2         2         2            2           47s

If you reload the Kubernetes UI in the browser you should see that there are now two NGINX pods running:

Kubernetes UI

The NGINX containers are not yet exposed on the internet. We can expose them to the public internet by creating a service of type LoadBalancer for the deployment.

$ kubectl expose deployment hello-nginx --type="LoadBalancer"
$ kubectl get services hello-nginx

NAME          EXTERNAL-IP        PORT(S)   AGE
hello-nginx   a511d59a96f9c...   80/TCP    5s

This will create an Elastic Load Balancer (ELB) on AWS providing you with a public DNS name to access the service.

EC2 Management Console

Once the health check has passed enough times to bring the instances into service, you will be able to access the exposed NGINX container:

Welcome to NGINX

You can, of course, use this ELB DNS name to add a CNAME record to your DNS such that your domain name can point to the load balanced NGINX containers within the Kubernetes cluster.

This confirms that the Kubernetes cluster works, so we can delete the NGINX deployment (and the ELB on AWS) with one simple command:

$ kubectl delete service,deployment hello-nginx