Backend & DevOps

My name is Noah McClelland and I graduated from James Madison University in 2016 with a Bachelor of Science in Computer Science and Mathematics. Since then I have been learning and honing my skills primarily as a backend developer, building RESTful APIs in Java.

As I became more deeply involved in the software development lifecycle, I discovered a curiosity for IT operations. The intersection of development and operations is commonly known as DevOps. I’m particularly interested in how it enables a smooth development and testing process while minimizing operational distractions. This results in a better development experience and ultimately produces production-ready applications more efficiently. For this reason, in 2021 I shifted my focus toward improving my skill set in this area.

During the first few years of my career, I worked on a team where IT operations were treated as a concern separate from the software development lifecycle. A separate middleware team was responsible for creating and managing the application servers. The process was largely manual, and it introduced me to the pain points that surface when IT operations are not owned by the development team. It was a valuable learning experience that gave me perspective on the pros and cons of that paradigm.

The following year I took part in an organizational shift to fully realize the process improvements offered by DevOps, which provided a stark contrast between the two approaches. I began contributing to pipeline improvements, application configuration, performance troubleshooting, and log analysis; wherever I could be useful while gaining familiarity with the tooling and best practices. I found that my backend experience blended quite naturally with this work: I was already intimately familiar with the development lifecycle, common build tools and deployment artifacts, and application and secrets configuration, among other pieces of the puzzle. Although none of these are strict prerequisites, I was able to put these skills to immediate use. I highly encourage other engineers in similar roles to become more actively involved in DevOps.

Although I speak from the perspective of a developer, developers are not the only stakeholders in DevOps; it has wide-reaching ramifications for a project. Quality assurance testers who own the testing environment also have requirements of the automated deployment system: it is imperative that the test environment remain stable and predictably receive the features and fixes that are ready for testing. The product owner likewise depends on a reliable DevOps system to create and manage the live environment without service interruptions. It is crucial to take the requirements of each of these groups into account when designing and building a DevOps pipeline.

After working with multiple DevOps systems, I noticed some principles shared with most software engineering disciplines. Even if a DevOps pipeline is currently functioning as intended, it can suffer from underlying technical debt that makes it difficult to modify safely. A pipeline that has changed over time to accommodate additional scope may cease to seamlessly solve the problem it was initially intended to address. At times, consolidation and refactoring based on the current need and observed patterns are required. It can feel like a balancing act; however, the acronym KISS (Keep It Simple, Stupid) is a sound guiding ethic. Features should not be present unless they are needed now or anticipated in the immediate future. This works well in many DevOps setups because pipelines are created for specific tasks under given assumptions; if those assumptions change, the pipeline should be modified. Complexity should not be added until it is required.
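As a sketch of what keeping a pipeline scoped to its current assumptions might look like (a hypothetical GitLab CI configuration; the project name, registry, and branch policy are illustrative, not from any real project), the pipeline does only what is needed today: build, test, and publish an image.

```yaml
# .gitlab-ci.yml — minimal pipeline: build, test, publish; nothing speculative.
# Assumes a Maven project and a container registry (both hypothetical).
stages:
  - build
  - publish

build:
  stage: build
  script:
    # Compile and run the test suite in one step.
    - mvn --batch-mode verify

publish:
  stage: publish
  script:
    # Tag the image with the short commit SHA for traceability.
    - docker build -t registry.example.com/foo:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/foo:$CI_COMMIT_SHORT_SHA
  only:
    - main   # publish only from the main branch; relax this only when needed
```

If a new assumption arrives later (say, a second deployment target), the pipeline is modified then, rather than carrying speculative stages from day one.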

Other useful guiding principles that reduce wasted time over the course of a project are consistency and convention over configuration. In a perfect world, DevOps fades into the background; it should never slow down or block the development cycle. The team interacts with the environment setup constantly: deploying updated code, modifying configuration and secrets, and performing environment testing. Even a small inconsistency that costs a fraction of someone’s time quickly adds up. The pipeline should have sane defaults and behave reliably to avoid this waste.

One example I’ve seen of this principle being violated involved Maven and Spring Boot Java applications built as Docker images and orchestrated with Kubernetes. Each application was referenced by name in many places:

  • Source control repository name
  • Built artifact name
  • Application’s internal name, as registered with a microservice registry
  • Spring Cloud Config name
  • Kubernetes ingress, service, deployment, and pod names

Because the same application is referenced in so many contexts, a small inconsistency in naming is multiplied across dozens of applications, and more time is wasted mentally mapping between names. If an application is called foo, it should be referenced as foo in all of these contexts unless there is a valid reason otherwise, in which case that reasoning should be applied consistently across all of the microservices.
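To illustrate the consistent case (all names here are hypothetical), a service called foo can carry that exact name through every layer listed above, from the Spring configuration down to the Kubernetes objects:

```yaml
# application.yml — the name registered with the service registry,
# which is also the name Spring Cloud Config looks up.
spring:
  application:
    name: foo
---
# Kubernetes manifest — the same name reused for the Deployment,
# its labels, and the image (Service and Ingress would follow suit).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
spec:
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
        - name: foo
          image: registry.example.com/foo:1.0.0
```

With one canonical name, anyone reading logs, manifests, or the registry can map between contexts without a lookup table.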

I am still in the process of learning and improving my DevOps skill set, but I am drawn to the process improvements it can offer. Done correctly, subsequent application setups and deployments become a breeze: developers are empowered to build and deploy applications with less mental overhead. DevOps affords developers fewer distractions and allows for a smooth development cycle that sets a project up for success.
