In this post I dive into the mysterious world of the HashiCorp stack, namely Terraform and Vault.

Beginnings...

Just a few months ago, I took a nose dive straight into the ad-hoc world of a multi-team software development project as an ops / devops dude.

The project has been going on for some time now, so for better or worse, the teams have already established a set of tools and practices for the whole lifecycle - and those tools and processes were handed to me to use. Now don't get me wrong, I'm not complaining here! I'm just explaining my thought process, so you can better understand why I felt like writing this post. Although, who wouldn't want to influence the technologies and tools used for a project?

So, having a slight case of OCD, I thought: 'Hey! What if I tried to build a production environment similar to the one we have at work, but using tools that aren't familiar to me?' What better learning experience is there?

Being mostly self-taught, as most tech people are, I think that in order to keep yourself motivated through a process of a hundred and one pitfalls, you really need to set yourself clear goals, and those goals should be incremental. Small steps towards a bigger picture. Try to remember in moments of frustration that all technical problems are just technical problems - in the end, they always get solved. That's where I feel Slack comes into its own: you can talk directly to devs and get help from the community in real time. The flipside is when you're the one getting bombarded by people waiting for something only you can solve. The stress factor rises fast.. :S

Phew, that was a long 'rantish' warm-up for what in essence will be a learning journey of maybe 2-3 posts??

But without further ado, let me tell you what it is I'm trying to achieve.

End goal

[Image: hypeisreal-1]

Terraforming

So I want to have my infrastructure-as-code in version control, and preferably use one product that handles state and another that deals with template credentials, plus any other secrets I might need. Enter Terraform and Vault from HashiCorp. The former is a pretty straightforward resource provisioning tool, and the latter a secrets management "vault". Both use the HashiCorp Configuration Language, or HCL for short, which looks awfully similar to normal JSON syntax, just a little simpler.
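
To give you a feel for the syntax, here's a minimal, made-up HCL resource block (the AMI id and names are just placeholders):

resource "aws_instance" "web" {
  ami           = "ami-12345678"  # placeholder AMI id
  instance_type = "t2.micro"

  tags = {
    Name = "hello-hcl"
  }
}

Blocks, key = value assignments and string interpolation - that's most of what you need to read the examples later in this post.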

The really cool thing about Terraform is how it handles state. Whenever you run $ terraform apply, Terraform writes a .tfstate file that describes the whole stack in its current state (and, with a backend that supports it, locks the state while it works). This state file can then be shared among teams via remote storage, for example Amazon S3. Pretty cool stuff, huh? The state file is plain JSON, so you could edit your resources by writing directly to the state file, but I wouldn't recommend it. And why would you? You've got a pretty cool tool for that already. One other major upside to using Terraform is that it's provider-agnostic, so you're not locking yourself into any vendor-specific provisioning tool, like CloudFormation.
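
For example, to share state via S3 you'd configure a backend along these lines (the bucket name and region here are made up):

terraform {
  backend "s3" {
    bucket = "my-terraform-state"  # placeholder bucket name
    key    = "prod/terraform.tfstate"
    region = "eu-west-1"
    # a DynamoDB table can additionally be configured for state locking
  }
}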

So Terraform dynamically provisions resources, is cloud-agnostic and handles state. Check, check and check. So what is it missing? Well, you might have guessed it: secrets. The one thing you never want to do is store credentials in version control. That's where Vault comes in handy. We can store Terraform's secrets in Vault, which encrypts everything at rest, so we can sleep more peacefully. :) Reading a secret inside Terraform is as simple as:

data "vault_generic_secret" "rundeck_auth" {
  path = "secret/rundeck_auth"
}

# Rundeck Provider, for example
provider "rundeck" {
  url        = "http://rundeck.example.com/"
  auth_token = "${data.vault_generic_secret.rundeck_auth.data["auth_token"]}"
}
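
For the data source above to work, the secret of course has to exist in Vault first, and Terraform's Vault provider needs an address and a token (it reads the standard environment variables). A rough CLI sketch, with a placeholder address and token, writing the same path and key as referenced above:

$ export VAULT_ADDR="https://vault.example.com:8200"
$ export VAULT_TOKEN="your-token-here"
$ vault write secret/rundeck_auth auth_token="the-actual-rundeck-token"

Vault keeps the value encrypted at rest; Terraform only reads it at plan/apply time.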

There's much, much more to know and learn about Terraform and Vault, so I highly recommend that you check out their docs.

CI/CD

In 2017, it's almost heresy not to have some kind of automated code integration and deployment in place. Preferably you've got a test environment that you deploy to automatically, a staging environment where deployment may or may not be automatic, and a production environment with some kind of sanity check in place. Usually this means a human being pushing a button to deploy, once it's been decided that a version is ready to be published to production.
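
In GitLab CI terms, that button is just a manual job. A quick sketch (the stage, job and script names are made up):

deploy-prod:
  stage: deploy
  script:
    - ./deploy.sh production  # hypothetical deploy script
  when: manual

With when: manual, the job sits in the pipeline until someone presses play in the UI.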

Personally, I've really grown fond of GitLab and GitLab CI. As a product, GitLab is a near-perfect one-stop shop for version control, issue and project management, CI/CD and a wiki. I can't imagine anything else I'd need. The CI configuration in particular is easy, and the syntax is lovely YAML. Below is a small pipeline example.

# Following variables need to be configured in Project "CI/CD Pipelines" settings:
# REGISTRY_USERNAME - docker registry username
# REGISTRY_PASSWORD - docker registry password

variables:
  IMAGE_PREFIX: "jatula"
  IMAGE_TAG: $CI_PIPELINE_ID
  IMAGE_NAME: "landingapp"

stages:
  - test-build
  - build-docker
  - trigger_deploy_prod

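# Serve the site from a throwaway Nginx container and run the Robot Framework UI tests against it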
test-build:
  stage: test-build
  script:
    - docker run --name server -d -v "$(pwd)/public:/usr/share/nginx/html" -p 8080:80 nginx
    - robot --outputdir ui-tests  ./ui-tests/headless.robot
  artifacts:
    paths:
      - ui-tests
    expire_in: 1 hour
    when: always
  after_script:
    - docker stop server && docker rm server
  tags:
    - k8s

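# Build the Docker image, tag it with the pipeline ID and push it to the registry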
build-docker-image:
  stage: build-docker
  script:
    - docker login -u $REGISTRY_USERNAME -p $REGISTRY_PASSWORD 
    - docker build --pull -t $IMAGE_PREFIX/$IMAGE_NAME:$IMAGE_TAG .
    - docker push $IMAGE_PREFIX/$IMAGE_NAME:$IMAGE_TAG
  after_script:
    - docker rmi $IMAGE_PREFIX/$IMAGE_NAME:$IMAGE_TAG
  tags:
    - k8s

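# Kick off the production deploy and run a simple health check afterwards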
trigger_deploy_to_prod:
  stage: trigger_deploy_prod
  script:
    - source update.sh
    - sleep 15
    - source checkup.sh
  tags:
    - k8s

Looking at the .gitlab-ci.yml file, you'll probably agree that it's fairly easy to understand the order of steps in the pipeline and what's going to happen at each step. Another interesting CI tool that people give credit to is Concourse CI. I've read through Concourse's introduction and basic usage, and I feel it probably deserves a blog post of its own at some point. ;)

But let's give the whole shebang a spin now that we're ready to deploy.

Giving it a go!

Provisioning the full stack is as simple as:

$ terraform init
$ terraform get
$ terraform plan

It's always important to plan your resources before applying. Among other things, planning is a great way to check that there are no syntax errors, and it shows exactly what Terraform is about to change. Finally:

$ terraform apply

After waiting for a while, GitLab spins up and its runner registers. GKE gets provisioned and voilà, you're ready to start developing!

[Image: collage]