Hello readers!

Sorry for the pretty long wait for this final post in the series. I finally found the time to write it and to bring closure to our three-part journey on deploying microservices to the cloud.

In this post, I'll show you how to deploy a scalable WordPress instance to Google Container Engine (GKE). The idea here is the same as it would be for migrating the ownCloud service on my old laptop server. If you're interested, check out my take on a Nextcloud Docker image on GitHub. I'll be ditching ownCloud for Nextcloud when I get around to doing the actual migration.

Gee... I thought you said in the beginning that you would use Azure as your cloud provider? Yep, I did say that, but as I wanted to cut corners, I went with Google Container Engine (GKE), as deploying Kubernetes (K8S) on GKE is really just a one-liner on the new Google Cloud Shell - so ridiculously easy.

Let's begin...

## 1. Account stuff

First of all, create a Google account if you haven't got one already. Sign in and activate a cloud instance at the Google Cloud Console. You'll have to hand over your credit card information, which always sucks. But hey! You'll at least get some free credit and months to tinker around, and when I say some, I mean heaps!

Before moving on to the next part, create the project which you'll be working on. The project is an abstraction to hold services inside logical boundaries. It's probably easiest just to select "create project" from the GUI. Finally, pop open the cloud shell and move on to the next part.
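
If you'd rather stay in the shell, recent versions of the gcloud tool can create the project from the command line too. A quick sketch, with wp-k8s-demo as a placeholder project ID:

$ gcloud projects create wp-k8s-demo
$ gcloud config set project wp-k8s-demo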

## 2. Launching the container cluster (K8S)

Let's set your default zone for this project. All resources created afterwards will use the default zone unless otherwise specified. You don't necessarily need to do this, but then you'll have to specify the zone with the --zone flag every time you touch a resource with the gcloud tool. I chose the europe-west1-b zone:

$ gcloud config set compute/zone europe-west1-b

You can list all available zones with $ gcloud compute zones list and pick the one you prefer.

Now, it's time for magic!

* Spin up the cluster:
$ gcloud container clusters create k8sio-magic \
--num-nodes 3 --machine-type f1-micro --disk-size 20

The number of nodes is the number of worker VMs (or minions in the K8S world) in the cluster; on GKE the master is managed by Google, so it doesn't count against this number. The disk size and machine type flags are good places to save on monthly fees, as the default machine type is n1-standard-1 and the default disk size is 100GB. If you want node autoscaling, you can enable it afterwards like so:

$ gcloud container clusters update k8sio-magic \
--enable-autoscaling --min-nodes 3 --max-nodes 10

I didn't enable autoscaling for this demo, but for production it is definitely something to consider.

* Configure the Kubernetes tool (**kubectl**) to use the newly created cluster:

$ gcloud container clusters get-credentials k8sio-magic


* Enable the Google Compute Engine and Container Engine APIs from the console GUI. You'll also need to create credentials for the Compute Engine API.
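
In newer versions of the gcloud tool you should be able to enable the APIs from the shell as well. A sketch, assuming the current service names:

$ gcloud services enable compute.googleapis.com container.googleapis.com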

#### 2.1 Now, just what is Kubernetes?
K8S is an open source container orchestration system developed at Google. Figure 1 is a simplified representation of the K8S architecture.

![Figure1. Kubernetes overview](/content/images/2016/08/Selection_007.png)
A typical workflow goes like this: 

1. A user generates a deployment specification template and gives it to the Kubernetes API via the **kubectl** tool. Typically, this is a YAML or JSON file, but you can spin up pods "on the fly" with the `$ kubectl run` command as well (a minimal template is sketched right after this list).
2. The master creates the pods and services defined in the deployment template.
3. The scheduler is responsible for scheduling pods to nodes.
4. The kubelet on each node drives pods and Docker. The proxy provides a stable endpoint through which services can talk to each other.
5. Etcd is used as master storage and it holds information on the persistent state of the cluster. 
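
To make step 1 concrete, here is a minimal deployment template of the kind you would hand to `$ kubectl create -f`. This is just an illustrative sketch - the hello-web name and the nginx image are placeholders, not part of our WordPress setup:

apiVersion: extensions/v1beta1   # deployments live under extensions/v1beta1 in K8S 1.3
kind: Deployment
metadata:
  name: hello-web                # placeholder name
spec:
  replicas: 1                    # one pod is enough for a smoke test
  template:
    metadata:
      labels:
        app: hello-web           # the deployment selects pods by these labels
    spec:
      containers:
      - name: hello-web
        image: nginx             # placeholder image
        ports:
        - containerPort: 80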

This barely scratches the surface of what K8S is and does. I highly recommend diving deeper and reading the official documentation at [kubernetes.io](http://kubernetes.io/docs/).

## 3. Let's deploy
Phew, now that we've got the boring technical part done, let's head straight to creating the pods and services needed for our WordPress instance.

Our demo will consist of a MariaDB back-end, a WordPress app tier and a Traefik load balancer. The load balancer will also be used to terminate TLS.

I'll set up the MariaDB back-end pod and service using the `$ kubectl run` command.

$ kubectl run mariadb --image=mariadb \
--expose --env=MYSQL_ROOT_PASSWORD=Password \
--port=3306 --labels=tier=backend,app=wordpress

Watch the pod start up:

juhaniatula@k8s-magic:~$ kubectl get pods -w
NAME                       READY     STATUS              RESTARTS   AGE
mariadb-1900861925-n00r1   0/1       ContainerCreating   0          24s
mariadb-1900861925-n00r1   1/1       Running             0          29s

Now, the clever reader that you are, you probably noticed that I didn't set up a disk or a volume for the database. So if I remove the pod now, the database will be erased as well.

Persistent data is the Achilles heel of running stateful apps as containers. Luckily, there are solutions available to us. In this demo, I'll circumvent the problem by using the new dynamic persistent volume claim (note that this is still an alpha resource, so probably better not to use it in production). It asks the Google Cloud API for a persistent disk and then mounts it as a volume for the pod being created. Using Google persistent disks has a downside, though: a disk can be mounted read-write by only one pod at a time, and to any other pods the data is read-only. This means that in production you'll probably want to set up NFS or some other persistent data provider like Flocker or GlusterFS, as you can't really scale stateful applications with GCE disks.

For now though, let's create the persistent disk and the associated persistent volume and volume claim by running `$ kubectl create -f <filename>.yaml`. My YAML file looks like this: 

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: slow-pd1
  annotations:
    volume.alpha.kubernetes.io/storage-class: slow
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi

Note that:
> The value of the storage-class annotation does not matter in the alpha version of this feature. There is a single implied provisioner per cloud (which creates 1 kind of volume in the provider). The full version of the feature will require that this value matches what is configured by the administrator. 

[Read more here](https://github.com/kubernetes/kubernetes/blob/release-1.3/examples/experimental/persistent-volume-provisioning/README.md).
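
To verify that the claim was bound and that a GCE disk was actually provisioned behind the scenes, these two should do:

$ kubectl get pvc slow-pd1
$ gcloud compute disks list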

Now, let's edit the MariaDB deployment and add the volume claim as a volume mount. Use `kubectl edit deployments/mariadb` to open the deployment in an editor (vim by default) and change this:

...
    ports:
    - containerPort: 3306
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
status:
  availableReplicas: 1
...

to this:
    ports:
    - containerPort: 3306
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /var/lib/mysql    # this is where MariaDB stores its data
      name: db-claim
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  securityContext: {}
  terminationGracePeriodSeconds: 30
  volumes:
  - name: db-claim
    persistentVolumeClaim:
      claimName: slow-pd1

status:
  availableReplicas: 1

Let's check that the volume mounted successfully into the pod with `kubectl describe pods/<podname>`.

1m   1m   1   {default-scheduler }                                   Normal   Scheduled   Successfully assigned mariadb-1983077234-ljkpr to gke-k8sio-magic-default-pool-1e37ceae-kbvr
1m   1m   1   {kubelet gke-k8sio-magic-default-pool-1e37ceae-kbvr}   spec.containers{mariadb}   Normal   Pulling   pulling image "mariadb"
1m   1m   1   {kubelet gke-k8sio-magic-default-pool-1e37ceae-kbvr}   spec.containers{mariadb}   Normal   Pulled    Successfully pulled image "mariadb"
1m   1m   1   {kubelet gke-k8sio-magic-default-pool-1e37ceae-kbvr}   spec.containers{mariadb}   Normal   Created   Created container with docker id 6e7363ff0efd
1m   1m   1   {kubelet gke-k8sio-magic-default-pool-1e37ceae-kbvr}   spec.containers{mariadb}   Normal   Started   Started container with docker id 6e7363ff0efd

Yay! We have persistent data, and the MariaDB pod and service are up and running.
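
As a quick smoke test, you can run a query inside the pod (substitute your own generated pod name; this relies on the mysql client that ships in the mariadb image):

$ kubectl exec -it <mariadb-pod> -- mysql -uroot -pPassword -e "SHOW DATABASES;"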

Let's move on and set up our WordPress app next. Run and expose the WordPress deployment like so:

$ kubectl run wordpress --env=WORDPRESS_DB_HOST=mariadb:3306 \
--env=WORDPRESS_DB_PASSWORD=Password --image=wordpress \
--expose --port=80 --labels=app=wordpress,tier=app

After the pod has been scheduled successfully, edit the deployment to add the persistent volume. This is the same ordeal as with the MariaDB deployment. First, create the persistent volume and claim - just use the same YAML file as before and change the names. After creating the volume claim, running `$ kubectl get pv` and `$ kubectl get pvc` should show something similar to this:

$ kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   STATUS    CLAIM              REASON    AGE
pvc-6d6f1287-6646-11e6-8da7-42010a84005a   20Gi       RWO           Bound     default/slow-pd2             12s
pvc-dc11345b-661f-11e6-8da7-42010a84005a   20Gi       RWO           Bound     default/slow-pd1             4h

$ kubectl get pvc
NAME       STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
slow-pd1   Bound     pvc-dc11345b-661f-11e6-8da7-42010a84005a   0                        4h
slow-pd2   Bound     pvc-6d6f1287-6646-11e6-8da7-42010a84005a   0                        22s

Then, edit the WordPress deployment:
    name: wordpress
    resources: {}
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /var/www/html
      name: slow-pd2
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  securityContext: {}
  terminationGracePeriodSeconds: 30
  volumes:
  - name: slow-pd2
    persistentVolumeClaim:
      claimName: slow-pd2
Then wait for the pod to be scheduled successfully.
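
To confirm that the new pod came up, you can filter by the labels we set earlier:

$ kubectl get pods -l app=wordpress,tier=app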

The last piece of our microservices scenario is the load balancer. Most of you are probably familiar with either Nginx or HAProxy as a load balancer. They are both excellent choices, and usually you're going to use products that you're familiar with (that would be HAProxy for me). Today though, I'm going with Traefik as the load balancer. It's a relatively new, open source load balancer and HTTP reverse proxy written in Go especially for microservice purposes. It supports a wide variety of back-ends out-of-the-box, like Swarm, Mesos/Marathon, Kubernetes, Consul, Etcd and Zookeeper. Read more about it [here](http://traefik.io/).

I'll substitute Traefik for the GKE L7 load balancer in this demo, as in my experience the default load balancer does weird and funky stuff when you ask it to do TLS termination.

1. Let's generate the TLS certificate needed for HTTP to HTTPS redirection. I'll create a self-signed cert, but using free and valid certificates from [Let's Encrypt](https://letsencrypt.org/) is definitely a valid choice - I use a Let's Encrypt certificate for this blog, for example. :)

$ openssl req -newkey rsa:2048 -nodes -keyout tls.key \
-x509 -days 365 -out tls.crt
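
openssl will prompt you for the certificate subject fields interactively. If you'd rather have a non-interactive one-liner, you can pass the subject on the command line - the CN below is just a placeholder:

$ openssl req -newkey rsa:2048 -nodes -keyout tls.key \
-x509 -days 365 -out tls.crt -subj "/CN=wordpress.example.com"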

Now use the key and the associated certificate to create a K8S secret that can be consumed as a volume:

kubectl create secret tls tls-secret --cert=tls.crt --key=tls.key


2. I'm going to give my HTTP to HTTPS configuration to Traefik as a configmap. In short, configmaps are key=value maps that you can hand to deployments. In our case, the key will default to the filename and the value will hold the content of the file. Read more about configmaps [here](http://kubernetes.io/docs/user-guide/kubectl/kubectl_create_configmap/).
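
As a tiny illustration of the key=value idea, here's a throwaway configmap built from a literal value instead of a file (the names are made up for this example):

$ kubectl create configmap demo-conf --from-literal=color=blue
$ kubectl get configmap demo-conf -o yaml
$ kubectl delete configmap demo-conf

My Traefik configuration looks like this: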

traefik.toml

defaultEntryPoints = ["http","https"]
[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
      CertFile = "/ssl/tls.crt"
      KeyFile = "/ssl/tls.key"

Let's generate the configmap next:

kubectl create configmap traefik-conf --from-file=traefik.toml

Quite a lot of steps before we even get to deploy anything. Now though, run `$ kubectl create -f traefik-depl.yaml` and wait for the pods to come alive. My traefik-depl.yaml looks like this:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  generation: 2
  labels:
    app: traefik-lb
  name: traefik-controller
spec:
  replicas: 2
  selector:
    matchLabels:
      app: traefik-lb
      name: traefik-lb
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: traefik-lb
        name: traefik-lb
    spec:
      containers:
      - args:
        - --configfile=/config/traefik.toml
        - --kubernetes
        - --logLevel=DEBUG
        image: traefik
        imagePullPolicy: Always
        name: traefik-lb
        ports:
        - containerPort: 80
          protocol: TCP
        - containerPort: 443
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        volumeMounts:
        - mountPath: /ssl
          name: ssl
        - mountPath: /config
          name: config
      volumes:
      - name: ssl
        secret:
          secretName: tls-secret
      - configMap:
          name: traefik-conf
        name: config

Take note of the volumes and volume mounts: that's all it takes to hand our certificates and configurations to pods in the K8S world. Just so cool!
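
If you want to double check the wiring, you can inspect the pods by label (the Traefik image is minimal and has no shell, so `kubectl exec` isn't much use here); the Volumes section of the output should show the secret and the configmap:

$ kubectl describe pods -l app=traefik-lb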

Next, set up a service endpoint for our Traefik load balancer. Use the `kubectl create -f` command to create the Traefik service; the YAML file looks like this:

apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: traefik-lb
  sessionAffinity: None
  type: LoadBalancer

Finally, publish WordPress to the world with the following ingress configuration - same create command as before, YAML file below. As we are using GKE, we have to set the **ingress.class** annotation to stop the GKE L7 load balancer from picking the rule up: Google's controller claims any ingress that doesn't carry a class annotation (its own class is **gce**), so tagging ours as **traefik** keeps it out of Google's hands.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wp-ing
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  tls:
  - secretName: tls-secret
  rules:
  - host:
    http:
      paths:
      - path: /
        backend:
          serviceName: wordpress
          servicePort: 80

If everything is okay and you followed along step-by-step, you should be able to curl the WordPress install page now. Check and copy the external IP of the Traefik service by running `$ kubectl get services`, then curl the install page of WordPress: `$ curl -k https://<traefik service public ip>/wp-admin/install.php`.

## Cleaning up
To take it all down, I'll just remove the whole cluster.

$ gcloud container clusters delete k8sio-magic --zone europe-west1-b

If you want to continue playing around and only take down some pods and services (let's say, the WordPress tier), you can use the power of K8S labeling to target only the WordPress deployment and service.

$ kubectl delete deployments,services -l app=wordpress
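
One gotcha worth knowing: deleting the cluster may leave the dynamically provisioned disks behind, so it pays to check for stragglers that would otherwise keep billing:

$ gcloud compute disks list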


So sad to see it go... :( 
## Wrap up
Congratulations! You've learned about microservices, Docker and Kubernetes - maybe even a little about the Google Cloud Platform on the side. A good place to continue this Kubernetes learning journey is the short but information-packed mini course on Udacity: [Scalable Microservices with Kubernetes](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615).

Hopefully you've had fun reading and following my instructions. I'll definitely write more on microservices and containers in the future, as they are really **hot hot hot** in the industry right now.