Hello DevOps — A Mobile Engineer’s Learning Experience
Talking about my experience as a mobile engineer while learning some DevOps
I recently completed the Cloud DevOps Engineer Nanodegree program offered by Udacity. It was fun, exciting and scary. I was excited to venture into an aspect of software development that had mostly been a black box to me.
As soon as I completed my capstone project, I was left with a familiar feeling of doubt. “Is this all there is?”, “Do you feel like a Cloud DevOps Engineer now?”, “What happens to your Android development career now?”. I would be able to answer these questions as the weeks went by.
Reminiscing over the activities of the Cloud DevOps program, I can say that they felt quite similar to what I did some years back when I worked on a machine learning problem. Even though they cannot be mapped one-to-one, the resemblance is there.
You see, in ML one needs to analyse and prepare the input data, decide on the architecture of the model, train the model with the data, then tweak some parameters until they get the best model they can produce from that input data. At least that is what I did then.
In the Cloud DevOps program, one needs to understand the project being deployed, decide on the architecture of the systems where the deployment will run, provision those systems, then tweak some parameters until the architecture they had in mind is created.
The biggest similarity between them is that there is a lot of waiting: staring at your screen, watching the log output while hoping for the best outcome. I had a discussion with a DevOps engineer who switched to being a backend engineer. He admitted that this waiting might have contributed to him getting bored with DevOps engineering.
I had to make use of some AWS services throughout this program. AWS seems to be the favourite cloud provider for most companies. Their Free Tier was useful to me because I did not have to pay for the simple compute instances that I spun up.
Below, I will discuss some thoughts about the tech and concepts I learned in this program. Some of them were not new to me when I started, but the program gave me some fresh insights into them.
Infrastructure as Code (IaC)
This just makes sense. When you have the blueprint or template for your infrastructure written as code and checked into version control, it is easier to see everything at a glance and track changes to your architecture.
AWS CloudFormation is a good example of this. Assuming I would like to provision some AWS resources for my app, I could do so by defining those resources in a template file. I can then use this template file to create an AWS CloudFormation stack. A stack in this case is a collection of AWS resources, and the advantage of having a stack is that all of its resources can be managed as one unit.
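For illustration, a minimal CloudFormation template might look like this. This is a sketch, not a template from my project — the resource names are mine and the AMI id is a placeholder:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal sketch - one EC2 instance behind a security group

Parameters:
  InstanceTypeParam:
    Type: String
    Default: t2.micro

Resources:
  AppSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow inbound HTTP
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0

  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceTypeParam
      ImageId: ami-0123456789abcdef0   # placeholder AMI id
      SecurityGroups:
        - !Ref AppSecurityGroup
```

Everything the stack owns is declared in one file, so adding, changing or deleting a resource is just an edit to this template followed by a stack update.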
It is bad practice to edit the deployed resources directly rather than editing the template file that defines the stack. The goal is to ensure that there is no configuration drift between the template and the deployed resources. That is why it is advised to make changes in the code and apply them using the CLI.
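That workflow could look roughly like this — the stack name, file path and parameter are made up for illustration:

```shell
# Apply template changes from version control instead of clicking
# around the console; stack and file names are placeholders.
aws cloudformation deploy \
  --stack-name my-app-stack \
  --template-file infra/app.yml \
  --parameter-overrides InstanceTypeParam=t2.micro
```

Because the change goes through the template, the file in version control and the running stack stay in agreement.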
I also used Ansible to automate a lot of tasks on my newly created EC2 instances. For example, I had to install software such as Java, Node, Jenkins and a Prometheus exporter on the instances. I could have SSHed into the instances and done this manually, but that could lead to human error if I had to do it again on another instance.
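A playbook for this kind of setup could be sketched as follows — the host group and package names here are my assumptions, not the exact ones from my project:

```yaml
# Illustrative Ansible playbook: install Java and Node on a group of hosts.
- name: Prepare app servers
  hosts: webservers
  become: true
  tasks:
    - name: Install OpenJDK
      apt:
        name: openjdk-11-jdk
        state: present
        update_cache: yes

    - name: Install Node.js
      apt:
        name: nodejs
        state: present
```

Running the same playbook against a new instance repeats the setup exactly, which is the whole point: no step is forgotten and no typo sneaks in.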
Containers

This might not seem too important to an Android engineer because the packaged app is ready to be installed on any supported Android OS. In most cases, there is no need to worry about wrapping the app in a container.
Nevertheless, I have some experience working on a project where we used GitLab CI to handle our CI/CD needs. There, I had to use Docker images to provide a virtual environment containing my Android build tools. My build jobs ran in this environment.
I could specify all the dependencies I wanted available in my Docker image whenever it ran: I could install the Android build tools and the specific Android SDK API level required for my projects to build. I did all of this in my Dockerfile, which contains the instructions for how the Docker image should be built.
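A Dockerfile for such a build image could be sketched like this. The base image, command-line tools version and API level below are illustrative, not the exact values I used:

```dockerfile
# Illustrative Android build image; version numbers are assumptions.
FROM openjdk:11-jdk-slim

ENV ANDROID_SDK_ROOT=/opt/android-sdk
ENV SDK_TOOLS_ZIP=commandlinetools-linux-8512546_latest.zip

# Fetch the Android command-line tools into the SDK directory
RUN apt-get update && apt-get install -y curl unzip \
    && mkdir -p ${ANDROID_SDK_ROOT}/cmdline-tools \
    && curl -o /tmp/tools.zip https://dl.google.com/android/repository/${SDK_TOOLS_ZIP} \
    && unzip /tmp/tools.zip -d ${ANDROID_SDK_ROOT}/cmdline-tools \
    && mv ${ANDROID_SDK_ROOT}/cmdline-tools/cmdline-tools ${ANDROID_SDK_ROOT}/cmdline-tools/latest

# Accept licences and install one platform/build-tools level (API 33 as an example)
RUN yes | ${ANDROID_SDK_ROOT}/cmdline-tools/latest/bin/sdkmanager --licenses \
    && ${ANDROID_SDK_ROOT}/cmdline-tools/latest/bin/sdkmanager \
       "platform-tools" "platforms;android-33" "build-tools;33.0.2"
```

Every CI job that uses this image gets the same JDK and the same SDK level, so a build that passes on one runner passes on all of them.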
Containerising an app is also an important prerequisite if you want to manage that app using Kubernetes.
Kubernetes

I created my K8s cluster on Amazon EKS using eksctl. The eksctl tool eliminates most of the cumbersome steps required to get a K8s cluster up and running on AWS EKS. In fact, I created my cluster with a single command:
eksctl create cluster --name=udacity-project-capstone --nodes=2 --node-type=t2.micro
Following the IaC principle, I defined a K8s config file where I specified things such as the Docker image to manage, the load balancer service, the ports I wanted to expose to the public and the number of replicas I wanted for my deployment, among other things.
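A minimal sketch of such a config file might look like this — the image name, labels and ports are placeholders, not the ones from my capstone:

```yaml
# Illustrative Kubernetes manifest: a Deployment plus a LoadBalancer Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: capstone-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: capstone-app
  template:
    metadata:
      labels:
        app: capstone-app
    spec:
      containers:
        - name: capstone-app
          image: mydockerhubuser/capstone-app:v1   # placeholder image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: capstone-app-lb
spec:
  type: LoadBalancer
  selector:
    app: capstone-app
  ports:
    - port: 80
      targetPort: 80
```

Applying this file with `kubectl apply -f` makes the cluster converge on the declared state, which is the same drift-avoidance idea as with CloudFormation templates.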
One thing I find cool about K8s is the ability to deploy an update to an app without any downtime. I am impressed by how seamless it feels.
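With a deployment in place, an update can be rolled out with standard kubectl commands like these (the names and image tag are placeholders). Kubernetes replaces pods gradually, so the service stays available throughout:

```shell
# Point the deployment at a new image version and watch the rollout.
kubectl set image deployment/capstone-app capstone-app=mydockerhubuser/capstone-app:v2
kubectl rollout status deployment/capstone-app
```

Old pods are only terminated once enough new pods are ready to serve traffic, which is why there is no visible downtime.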
Continuous Integration and Continuous Delivery (CI/CD)
Continuous Integration is the practice of merging all developers’ working copies into a shared mainline several times a day to avoid conflicts in the code later on. It is the first step towards ensuring that we have a high-quality, deployable artifact. Some of the steps in this stage include: compiling, testing, running static analysis, checking for vulnerabilities in our dependencies and storing the build artifacts.
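As a sketch, a pipeline covering some of these steps in GitLab CI (which I mentioned earlier) could look like this — the image name and Gradle tasks are assumptions for an Android project:

```yaml
# Illustrative .gitlab-ci.yml: compile, test and lint an Android app.
image: mydockerhubuser/android-build-env:latest   # placeholder build image

stages:
  - build
  - test

build_app:
  stage: build
  script:
    - ./gradlew assembleDebug
  artifacts:
    paths:
      - app/build/outputs/apk/

unit_tests:
  stage: test
  script:
    - ./gradlew testDebugUnitTest

lint:
  stage: test
  script:
    - ./gradlew lintDebug
```

Each merge to the mainline triggers the whole pipeline, so a broken build or failing test is caught within minutes of the merge.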
Continuous Deployment is the process by which verified changes to the codebase or system architecture are deployed to production as soon as they are ready, without human input. Some steps in this stage include: setting up infrastructure, provisioning servers, copying files, smoke testing, promoting to production and even rolling back a change if something does not look right.
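For example, with a Kubernetes deployment like the one above, a change that fails a smoke test can be reverted with a one-line rollback (the deployment name is a placeholder):

```shell
# Revert to the previous revision of the deployment and watch it settle.
kubectl rollout undo deployment/capstone-app
kubectl rollout status deployment/capstone-app
```

Because rollback is this cheap, deploying without human input becomes far less scary.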
Monitoring

Either you do your own monitoring, or you let your customers do it for you. Ideally, one would not want the second scenario.
I used Prometheus to handle the monitoring of my AWS EC2 instances. I was impressed with the number of metrics that Prometheus tracks. I was able to do some visualisation by writing queries for the metrics I was interested in.
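A minimal `prometheus.yml` fragment for scraping instances could look like this — the job name and target addresses are placeholders, and I am assuming the node_exporter (which serves metrics on port 9100 by default) is installed on each instance:

```yaml
# Illustrative scrape configuration for EC2 instances running node_exporter.
scrape_configs:
  - job_name: "ec2-nodes"
    static_configs:
      - targets:
          - "10.0.1.10:9100"   # placeholder private IPs
          - "10.0.1.11:9100"
```

Once targets are being scraped, queries such as `node_cpu_seconds_total` can be explored and graphed in the Prometheus expression browser.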
Another important part of monitoring is getting notified whenever things are not running normally. I was able to configure alerts using Prometheus too. There are many ways to get notified when the condition for an alert is met.
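As an example, the widely used “InstanceDown” pattern fires when a scrape target has been unreachable for five minutes; the group name and severity label below are my choices:

```yaml
# Illustrative Prometheus alerting rule.
groups:
  - name: instance-health
    rules:
      - alert: InstanceDown
        expr: up == 0          # the "up" metric is 0 when a scrape fails
        for: 5m                # only fire after 5 minutes of failures
        labels:
          severity: critical
        annotations:
          summary: "Instance {{ $labels.instance }} is down"
```

Firing alerts are then handed to Alertmanager, which can route them to channels such as email, Slack or a paging service.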
To wrap up, I will say that venturing into DevOps has been a rewarding investment of time and money. I feel like I can now follow basic DevOps conversations. It would be naive — laughable, even — to claim that I have covered all aspects of DevOps engineering. I would rather say that I have gained some experience that will be the foundation of my DevOps journey.