How to accelerate consistently with pipelines

Sharpening the axe before chopping down the tree is an essential part of software engineering. All too often, organizations miss the opportunity to optimize their processes simply because they are unfamiliar with automation. Yet optimizing your CI/CD pipelines, for example with automated testing, automated change management and automated compliance checks, is exactly what speeds up the delivery of new features.

Many organizations, across industries, are working to shorten their release cycles using CI/CD pipelines. The idea, of course, is that the lead time from idea to delivery should be as short as possible. You achieve that by organizing the development of features so that they fit within the sprints that lead to production. The added advantage is that by developing software in sprints, you ultimately work faster towards a good end product.

Minimize manual work
This is the idea behind a Minimum Viable Product (MVP): something that is good enough for production and immediately delivers business value. CI/CD pipelines support this with short processes that let you deliver value faster and reach the end result in multiple iterations. Those processes already include the necessary steps, such as version and quality checks, validation with the customer and testing.

The problem is that too much manual work still delays the delivery of software. Think of the manual change process to bring something to production, but also of technical, functional and security testing. This work is often manual and highly repetitive, even though pipelines make it possible to automate validation and show where problems in the code may arise. Meanwhile, the pressure to accelerate development keeps growing, and tasks such as security checks (for example on malicious code and libraries) become increasingly important to do well and consistently.
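
As a minimal sketch of what such an automated check can look like, the Python snippet below compares declared dependencies against a denylist of known-bad packages and fails the pipeline step on a match. The file names requirements.txt and denylist.txt, and the denylist format, are assumptions for the example, not a prescribed setup.

```python
# Minimal sketch of an automated dependency check as a pipeline step.
# Assumes a requirements.txt and a denylist.txt with one "name==version" per line;
# both file names and formats are illustrative, not a prescribed layout.
import sys
from pathlib import Path

def load_lines(path: str) -> set[str]:
    return {
        line.strip()
        for line in Path(path).read_text().splitlines()
        if line.strip() and not line.startswith("#")
    }

def main() -> int:
    dependencies = load_lines("requirements.txt")
    denylist = load_lines("denylist.txt")   # known-malicious or vulnerable pins
    flagged = sorted(dependencies & denylist)
    if flagged:
        print("Blocked dependencies found:", ", ".join(flagged))
        return 1  # a non-zero exit code fails the pipeline step
    print("Dependency check passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```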

Don’t lose valuable engineering time
In short, we use pipelines to accelerate, simply and safely. I recently did an inventory and noticed that an average two-week sprint contains 60 hours of waiting time caused by testing. In practice that means you might spend two hours writing code and then another week validating how it works. You lose valuable time that you would much rather spend building new features.

Our goal: remove the traffic jams from the sprints, examine the processes and use tools for process mining. With the resulting data you can find delays, solve problems and speed up even more. You can use your own pipelines for this, or choose an integrated solution such as GitHub or GitLab. At de Volksbank we initially opted for a self-built setup, but we now choose a standard (SaaS) solution because we want to integrate process mining more closely into the development process. Self-building demands a lot of engineering attention; you must think carefully about process steps and how they fit together. With a standard solution, you can focus more on the primary process.
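
As an illustration of what such process mining can look like at its simplest, the sketch below aggregates waiting and running time per pipeline stage from exported run records. The pipeline_runs.json file and its field names are assumptions for the example, not the export format of any specific tool.

```python
# Minimal process-mining sketch: find which pipeline stages cause the longest waits.
# Assumes an exported pipeline_runs.json with records like
# {"stage": "functional-tests", "queued_at": "...", "started_at": "...", "finished_at": "..."};
# the file name and field names are illustrative.
import json
from collections import defaultdict
from datetime import datetime

def parse(ts: str) -> datetime:
    return datetime.fromisoformat(ts)

def main() -> None:
    runs = json.loads(Path := open("pipeline_runs.json").read())
    waiting = defaultdict(float)   # hours spent waiting for a runner, per stage
    running = defaultdict(float)   # hours spent executing, per stage
    for run in runs:
        stage = run["stage"]
        waiting[stage] += (parse(run["started_at"]) - parse(run["queued_at"])).total_seconds() / 3600
        running[stage] += (parse(run["finished_at"]) - parse(run["started_at"])).total_seconds() / 3600

    # Report the stages with the most waiting time first: these are the traffic jams.
    for stage in sorted(waiting, key=waiting.get, reverse=True):
        print(f"{stage:<20} waiting {waiting[stage]:6.1f} h, running {running[stage]:6.1f} h")

if __name__ == "__main__":
    main()
```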

It shouldn’t be a big bang
Does this still sound like a lot of work? You don’t have to do everything at once. In our team, for example, we initially added a single step that scans for unsafe libraries. From there you add further steps: building and compiling the code, then testing, first technically, then functionally and finally for security. And then, preferably, automated penetration and security testing wherever possible. In this way you gradually expand the number of steps that improve and accelerate the process. It doesn’t have to be a big bang.
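
A sketch of how such a pipeline can grow stage by stage, assuming illustrative stage names and commands rather than the configuration of any particular CI system:

```python
# Minimal sketch of a pipeline that grows one stage at a time.
# Stage names and commands are illustrative assumptions,
# not tied to any particular CI system.
import subprocess
import sys

# We started with only the library scan; the later stages were added iteration by iteration.
STAGES = [
    ("scan-libraries", ["python", "check_dependencies.py"]),   # first iteration
    ("build", ["make", "build"]),                               # added later
    ("technical-tests", ["pytest", "tests/unit"]),              # added later
    ("functional-tests", ["pytest", "tests/functional"]),       # added later
    ("security-tests", ["python", "run_security_scan.py"]),     # added last
]

def run_pipeline() -> None:
    for name, command in STAGES:
        print(f"--- {name} ---")
        result = subprocess.run(command)
        if result.returncode != 0:
            sys.exit(f"Stage '{name}' failed; stopping the pipeline.")
    print("All stages passed.")

if __name__ == "__main__":
    run_pipeline()
```

The point is the ordering: each new stage is appended once the previous one runs reliably, so the pipeline improves sprint by sprint instead of in one big bang.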

Automation in pipelines is an opportunity that organizations quickly pass up if they are not used to it. All too often it is a cultural issue: “We don’t have time to improve the process.” But wasn’t it Abraham Lincoln who once said: “Give me six hours to chop down a tree and I will spend the first four sharpening the axe”? From a technical point of view, you of course need to get to know and understand new tooling, and that learning curve plays a role in the adoption of automation tools. Another aspect is how you organize your processes. Do you stick very strictly to fixed pipelines, or do you make processes ‘pluggable’, so your team can add new components where it sees fit?
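
To make ‘pluggable’ concrete, here is one possible shape: a small registry through which a team can drop an extra check into the pipeline without touching the core runner. The decorator and the names are assumptions for illustration, not the API of GitHub, GitLab or any other tool.

```python
# One possible shape of a 'pluggable' pipeline: teams register extra checks
# without modifying the core runner. Names and structure are illustrative.
from typing import Callable

PLUGGABLE_STEPS: list[tuple[str, Callable[[], bool]]] = []

def pipeline_step(name: str):
    """Register a function as an extra pipeline step."""
    def register(func: Callable[[], bool]) -> Callable[[], bool]:
        PLUGGABLE_STEPS.append((name, func))
        return func
    return register

# A team plugs in its own check wherever it sees fit.
@pipeline_step("license-check")
def check_licenses() -> bool:
    # Placeholder logic: a real check would inspect dependency licenses.
    return True

def run_pluggable_steps() -> bool:
    all_passed = True
    for name, step in PLUGGABLE_STEPS:
        passed = step()
        print(f"{name}: {'passed' if passed else 'failed'}")
        all_passed = all_passed and passed
    return all_passed

if __name__ == "__main__":
    raise SystemExit(0 if run_pluggable_steps() else 1)
```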

Don’t get stuck in the process
So there is no need to get stuck in the process. There should always be room to optimize. We choose to do that, and it has already brought us a lot: in principle we can do more work, or the same amount with fewer people involved. We choose to do more, because we can translate that acceleration into features and functionality that we deliver to our customers. We work smarter and make more impact. So again: not having time to optimize is simply not a good argument.

Also published on Medium:

https://medium.com/@sebastiaankalshoven/how-to-accelerate-well-and-consistently-with-pipelines-87ad19f4e5dc