
Kaholo: a new DevOps tool – Part 1

It’s been a while since I’ve written a technical post. We’re in the midst of the SARS-CoV-2 pandemic, and being confined to my home has given me a lot of time to revisit some older tech (such as Chef, Puppet, and Ansible) and to try some newer things such as Kubernetes, Terraform, and my latest find: Kaholo!

About these posts

While Kaholo itself is simple to use, evaluating it properly is more involved, because it touches several platforms and tools that are typical of modern development environments.

As a result, I decided to create a series of blog posts instead of one long one. Today I will introduce the basic concepts and principles, list the “ingredients”, and outline my evaluation strategy:

  1. Introduction to Kaholo, my evaluation goals, and the tools I chose
  2. Installing & Configuring Kaholo, and Installing plugins
  3. Creating a sample project in Gitlab, triggering a build on commit, and triggering Kaholo on build success
  4. Designing & Testing our Kaholo pipeline + Conclusions

Part 1 – Introduction to Kaholo

The promise behind Kaholo is “DevOps Automation, with less coding/scripting”. The platform comes with a bunch of plugins for existing platforms such as Amazon AWS/EC2, Google Compute Engine / G Suite, Kubernetes, Terraform, Slack, Gitlab, Github, Git, NewRelic, and others.

You can either host an instance on your own infrastructure, or have the Kaholo team run one for you; I chose to self-host for this evaluation. I’m going to design some useful pipelines using the visual editor, run some of them manually, and have others triggered from Gitlab (source control with CI/CD) and Observium (a monitoring tool).

This is the main pipeline I plan to evaluate with:

  1. Announce on some DevOps Slack channel that a successful build was detected and a deployment is being attempted
  2. Email the same message to the product’s release manager (me!)
  3. Fetch the built packages and put them on some bucket (maybe in a Google Cloud Storage bucket?), or push them to a private or public package repository server
  4. Execute a script on one or more remote servers to install this latest package from the repository
  5. Potentially spin up a Google Cloud SQL instance of PostgreSQL or MySQL
  6. Take the details of that DB instance and store them in a CSV file (Kaholo has a CSV Writer plugin)
  7. Load a test SQL dump into this new database instance
  8. Configure the software on the remote servers to connect to that new database instance
  9. Apply some firewall policy to those instances based on the type of software we installed (maybe we’re opening port 443?)
  10. Create a DNS record with the name of that branch or build in CloudFlare via API (no CloudFlare plugin yet, but I’m writing one; see the sketch right after this list)
  11. Run the software on all of those remote servers
  12. Perform some basic sanity test (run a script, check output)
  13. If the test fails, roll back everything we did so far, then email the release manager and Slack the DevOps team that the build didn’t deploy successfully
  14. At this point we’re successful, so email the release manager with the details on how to connect to this new instance + Slack the DevOps team the same details
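
For step 10, until my CloudFlare plugin is ready, a pipeline step can call CloudFlare’s v4 REST API directly. Here’s a rough sketch with curl; the zone ID, API token, hostname, and IP address are all placeholders for my real values:

    # Create an A record via the CloudFlare v4 API (values are placeholders)
    curl -X POST "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records" \
         -H "Authorization: Bearer ${CF_API_TOKEN}" \
         -H "Content-Type: application/json" \
         --data '{"type":"A","name":"build-1234.example.com","content":"203.0.113.10","ttl":120,"proxied":false}'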

The scenario above is not complicated to write for anyone with basic scripting skills. However, there are some advantages to doing it with Kaholo.

Here’s a list of the ones I can think of (that matter to me):

  • Lots of useful, easy-to-use plugins to talk to everything on the internet
  • A repository of all of your pipelines, everything in one place
  • Proper logging of every step, every time a pipeline runs
  • New engineer on the team? They can run pipelines from day one!
  • Revision history on pipelines (code, configuration, design, etc)
  • Easy to duplicate and modify pipelines for new scenarios (even for people who are less comfortable with scripting)
  • Visual pipelines are easy to read and understand by the entire DevOps team, as well as the engineering team (and maybe even by leadership!)
  • Kaholo implements an encrypted “vault” concept where important credentials and API tokens are stored safely
  • Kaholo also has a scheduler / calendar, so you can schedule pipelines! (cleanups, reporting, renewing LetsEncrypt certificates, etc. It can all be fully automated!)
  • IAM lets you create users & groups, so you can share the right projects/pipelines with the right team members
  • You can still write code within Kaholo (in JS), if you need to implement complex scenarios
  • A clear and simple dashboard showing pipeline statuses
  • I can host it myself, on my own internal infrastructure (security, customization & control)

Ingredients

For this evaluation we’re going to play with the following tools:

  1. Kaholo Server – DevOps Automation
  2. Slack – Team Chat
  3. Observium – Monitoring Tool
  4. Vagrant – Virtualization wrapper (I run it on OSX)
  5. VirtualBox – Virtualization Software by Oracle
  6. Gitlab – Source Control, Issue Tracker & CI/CD
  7. Bash scripting, SSH, etc.
  8. CloudFlare – DNS & DDoS Mitigation
  9. Hetzner Cloud API – To create / destroy VMs on the fly (see the sketch below)
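
To give you a feel for that last ingredient, here’s roughly what creating a VM via the Hetzner Cloud API looks like with curl; the token, server name, type, and image are placeholders for my own values:

    # Create a new cloud server via the Hetzner Cloud API (values are placeholders)
    curl -X POST "https://api.hetzner.cloud/v1/servers" \
         -H "Authorization: Bearer ${HCLOUD_TOKEN}" \
         -H "Content-Type: application/json" \
         --data '{"name":"kaholo-test-1","server_type":"cx21","image":"ubuntu-18.04"}'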

Preparation

For the Kaholo server I created a 4.90 EUR/month VM with Hetzner, running Ubuntu Server 18.04 Bionic (2 vCPU, 4 GB RAM, 40 GB SSD).

I set it up with nginx-full, certbot (for the LetsEncrypt SSL cert), mongodb, and of course Kaholo Server and Kaholo Agent.
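
For reference, the base setup boils down to something like this (a rough sketch assuming the stock Ubuntu 18.04 repositories; the hostname is a placeholder, and the Kaholo-specific steps are exactly what Part 2 covers):

    # Base packages for the Kaholo host (Ubuntu 18.04)
    sudo apt-get update
    sudo apt-get install -y nginx-full mongodb
    sudo apt-get install -y certbot python-certbot-nginx
    # Request the LetsEncrypt certificate for the server's hostname (placeholder)
    sudo certbot --nginx -d kaholo.example.com
    # Kaholo Server & Agent installation: see Part 2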

Stay tuned for Part 2, where I’ll explain how to install Kaholo on your own instance, complete with copy-and-paste commands so you can be up and running in minutes.
