What is a CI/CD pipeline?
Before we understand a CI/CD pipeline, let us take a step back and understand software delivery. A software delivery pipeline is a series of dependent stages through which a code/package/artifact flows from the developer’s system to a production server. The journey from the developer’s system is not simple, and the code has to pass through multiple stages before we deploy it into production.
The code will progress from code check-in through the test, build, deploy, and production stages. Over the years, engineers have automated the steps in this process, and that automation gave rise to two primary practices known as Continuous Integration and Continuous Delivery.
Standard Definition: A CI/CD pipeline is defined as a series of interconnected steps that include stages from code commit, testing, staging, and deployment testing, to, finally, deployment into the production servers. We automate most of these stages to create a seamless software delivery process.
Why is CI/CD Important?
Within a CI/CD pipeline, a software release artifact can move and progress through the pipeline right from the code check-in stage through the test, build, deploy, and production stages. This concept is powerful because once a pipeline has been specified, parts or all of it can be automated, speeding the process and reducing errors. In other words, the goal of a pipeline workflow is to make it easier for enterprises to deliver software multiple times a day.
DevOps engineers often confuse the CI/CD pipeline with the automation of its individual stages. Different CI/CD tools can automate each complicated stage of the pipeline, but manual intervention between those stages can still break the end-to-end software supply chain. Before proceeding further, let us first understand the various stages in a CI/CD process and why a CI/CD pipeline is essential for your organization to deliver code at speed and scale.
How is Continuous Delivery different from Continuous Deployment?
It is easy to confuse Continuous Delivery with Continuous Deployment. Continuous Delivery (CD) begins at the end of the continuous integration process. In CD, recent code changes flow automatically from testing to staging. The DevOps engineers then choose when to deploy the changes waiting in the queue to the production server; this last step is on-demand.
When we enable Continuous Deployment, the last stage of the pipeline, deployment into the production servers, is also automated. Although continuous deployment is a great way to update applications and gives developers feedback on their code at the earliest, it poses a challenge at the customer's end: customers do not want to update their software too frequently, as updates can cause downtime. This is why Windows or Android applications receive feature updates at periodic intervals.
Components of a CI/CD Pipeline
For a CI/CD pipeline to work, we require a series of sub-processes or stages that continuously check and verify the code updates. These sub-stages are as follows:
- Code Commit
- Static Code Analysis
- Build
- Test Stages/Scenarios
- Deployment Testing and Verification
- Monitoring
An enterprise application development team typically consists of developers, testers/QA engineers, operations engineers, and SREs (Site Reliability Engineers) or IT Operations teams. They work together closely to deliver quality software into customers' hands. CI/CD itself is a combination of two separate processes: Continuous Integration and Continuous Deployment.
Let’s explore the major steps of each of the processes.
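Before diving in, the core idea of a pipeline as a series of dependent, gating stages can be sketched in a few lines of code. This is a toy illustration only: the stage names and checks below are placeholders, not any real CI/CD tool's API.

```python
# A minimal, hypothetical sketch of a delivery pipeline: an ordered chain of
# stages where each stage must pass before the artifact moves to the next.
def run_pipeline(artifact, stages):
    """Run `artifact` through each stage in order; stop at the first failure."""
    for name, stage in stages:
        if not stage(artifact):
            return f"failed at {name}"
    return "deployed"

# Toy stand-ins for commit, static analysis, build, and test checks.
stages = [
    ("code commit",     lambda a: bool(a)),            # something was committed
    ("static analysis", lambda a: "eval(" not in a),   # crude policy check
    ("build",           lambda a: True),               # assume the build passes
    ("test",            lambda a: "bug" not in a),     # crude test check
]

print(run_pipeline("print('hello')", stages))    # deployed
print(run_pipeline("eval(user_input)", stages))  # failed at static analysis
```

The point is only that a pipeline is a chain: a failure in any stage stops the flow, and automating the chain removes the manual handoffs between stages.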
What is Continuous Integration?
Continuous Integration (CI) is the process in which code updates and changes from a developer or a group of developers are collected and merged into the main source branch. This continual merging of code into the main branch is what the name continuous integration refers to.
In simple terms, the code is packaged into an executable form, at which point it becomes immutable. This maintains the sanctity of the code by removing any chance of changes after it has been committed, ensuring that the code reaches the end user exactly as built, without being tampered with.
1. CI - Code Commit: People, Process, and Technology
- People: Developers and Engineers, Database Administrator (DBA), Infrastructure team
- Technology: GitHub, GitLab, SVN, Bitbucket
The code commit stage is also known as version control. A commit is an operation that sends a developer's latest changes to the repository, and every version of the code a developer commits is stored indefinitely. After discussing and reviewing the changes with collaborators, developers write and commit code once the software requirements, feature enhancements, bug fixes, or change requests are complete. The repository that manages these edits and commits is called a Source Code Management (SCM) tool. After the developer pushes the code (typically via a pull request), the changes are merged into the base branch stored in a central repository such as GitHub.
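The "every version is stored indefinitely" property can be illustrated with a toy model of a commit history. This is not how Git actually stores data internally; it is a hypothetical sketch of the idea that commits are immutable, hash-addressed, and chained to their parent.

```python
import hashlib

# A toy version-control sketch: each commit is an immutable snapshot,
# addressed by a hash and linked to its parent, so no version is ever lost.
history = []  # the "repository": an append-only list of commits

def commit(message: str, content: str) -> str:
    """Record a new snapshot and return its (shortened) commit id."""
    parent = history[-1]["id"] if history else None
    commit_id = hashlib.sha1(f"{parent}{message}{content}".encode()).hexdigest()[:8]
    history.append({"id": commit_id, "parent": parent,
                    "message": message, "content": content})
    return commit_id

first = commit("initial commit", "print('v1')")
second = commit("fix greeting", "print('v2')")

print(len(history))                   # 2 -- both versions are retained
print(history[1]["parent"] == first)  # True -- commits form a chain
```

Because each commit id depends on its parent and content, rewriting an old snapshot would change every id after it, which is what makes the history tamper-evident.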
2. CI - Static Code Analysis: People, Process, and Technology
- People: Developers and Engineers, Database Administrators (DBA), Infrastructure team, testers
- Technology: GitHub, GitLab, SVN, Bitbucket
Once the developer writes code and pushes it to the repository, the system automatically triggers the next process: code analysis. Imagine if committed code went straight to the build stage and failed during the build or deployment; that would be a slow and costly process in terms of both machine and human resources. Instead, the code must first be checked against static policies. SAST (Static Application Security Testing) is a white-box testing method that examines the code from the inside using tools like SonarQube, Veracode, or AppScan to find software flaws, vulnerabilities, and weaknesses (such as SQL injection). This is a fast check in which the code is examined for syntactic and policy errors; however, this stage cannot catch runtime errors, which are checked at a later stage.
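To make the "fast syntactic and policy check" concrete, here is a deliberately tiny stand-in for a SAST pass, written with Python's standard `ast` module. Real tools like SonarQube apply hundreds of rules; this sketch applies just two (parseability and one naive SQL-injection heuristic) and is illustrative only.

```python
import ast

def static_check(source: str):
    """A toy SAST pass: reject code that does not parse, and flag
    string-concatenated SQL (a classic injection smell)."""
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return [f"syntax error: line {exc.lineno}"]
    findings = []
    for node in ast.walk(tree):
        # Flag "SELECT ..." + something -- a naive injection heuristic.
        if (isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add)
                and isinstance(node.left, ast.Constant)
                and isinstance(node.left.value, str)
                and node.left.value.lstrip().upper().startswith("SELECT")):
            findings.append(f"possible SQL injection: line {node.lineno}")
    return findings

print(static_check("x = 'SELECT * FROM users WHERE id=' + user_id"))
print(static_check("a = 1"))  # []
```

Note what this kind of check cannot do: it never runs the program, so a bug that only appears at runtime (a null reference, a failing network call) passes straight through, which is exactly why the later test stages exist.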
Placing an additional policy check into an automated pipeline can dramatically reduce the number of errors found later. This is where the OpsMx Intelligent Software Delivery or ISD platform can help. The OpsMx ISD offers a Delivery Intelligence module that helps to ensure that the processes are followed correctly and policies are automatically enforced.
3. CI - Build: People, Process, and Technology
- People: Developers and Engineers
- Technology: Jenkins, Bamboo, CircleCI, Travis CI, Maven, Azure DevOps
One of the goals of Continuous Integration is to merge the regular code commits and continuously build binary artifacts. Developers benefit from this process by finding bugs quickly and verifying that a newly added module plays well with the existing modules, which reduces the overall time required to verify a new code change. Build tools compile the source and create executable files or packages (.exe, .dll, .jar, etc.), depending on the programming language used. During the build, SQL scripts are also generated and then tested along with infrastructure configuration files. In a nutshell, the build stage is where your applications are compiled.
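A build step in miniature, using Python's standard `py_compile` module: turn a source file into a bytecode artifact and fail fast if the source is broken. For a compiled language the tool would be `javac`, `go build`, or similar; the shape of the step (source in, verified artifact out) is the same. The file names here are hypothetical.

```python
import pathlib
import py_compile
import tempfile

# A tiny "build" step: compile a source file into a bytecode artifact.
src_dir = pathlib.Path(tempfile.mkdtemp())
src = src_dir / "app.py"
src.write_text("def main():\n    return 'ok'\n")

artifact = src_dir / "app.pyc"
# doraise=True makes the build fail loudly on a broken source file,
# which is exactly what a CI build stage should do.
py_compile.compile(str(src), cfile=str(artifact), doraise=True)

print(artifact.exists())  # True -- the build produced an artifact
```

In a real pipeline this artifact, not the raw source, is what flows to the later stages, which is what makes the release immutable.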
Other sub-activities that are part of the build process include build verification tests, unit tests, and artifact storage, among others.
3.1 Build Verification Test (BVT)/Smoke Tests and Unit Tests:
Smoke testing, or BVT, is performed immediately after the build is created. BVT verifies that all the modules are integrated properly and that the program's critical functionalities work correctly. The aim is to reject a badly broken build so that the QA team does not waste time installing and testing the software application.
After these checks, unit tests (UT) are added to the pipeline to further reduce failures in production. Unit testing validates that the individual units or components of the code written by the developer perform as expected.
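A minimal unit-test example using Python's standard `unittest` framework shows what "validating individual units" looks like in practice. The `discount` function is a made-up unit under test; in a pipeline, the CI runner executes the suite and fails the build if any test fails.

```python
import unittest

def discount(price: float, percent: float) -> float:
    """The unit under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_typical_case(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_zero_discount(self):
        self.assertEqual(discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(100.0, 120)

# Run the suite in-process; a CI runner would do this and gate the build.
suite = unittest.TestLoader().loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Note that each test exercises one unit in isolation, including the failure path; checking how units behave together is deferred to the integration tests discussed later.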
3.2 Artifact Storage:
Once a build is prepared, the packages are stored in a centralized location or database called an artifact repository. Many builds can be generated per day, and keeping track of all of them can be difficult, so as soon as a build is generated and verified, it is sent to the repository for storage. Repository tools such as JFrog Artifactory are used to store binary files such as .rar, .war, .exe, and .msi. From here, testers can manually pick an artifact and deploy it to a test environment.
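The essential property of an artifact repository (publish once, fetch the exact same bytes later) can be sketched with content hashing. This is a hypothetical toy, not JFrog Artifactory's actual storage layout or API; real repositories add metadata, retention, and access control on top of the same idea.

```python
import hashlib
import pathlib
import tempfile

# A toy artifact repository: each build is stored under a content hash,
# so an artifact, once published, can never change silently.
repo = pathlib.Path(tempfile.mkdtemp())

def publish(artifact_bytes: bytes, name: str) -> str:
    """Store an artifact and return its content digest (the 'version')."""
    digest = hashlib.sha256(artifact_bytes).hexdigest()
    path = repo / digest[:12] / name
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(artifact_bytes)
    return digest

def fetch(digest: str, name: str) -> bytes:
    """Retrieve an artifact and verify it still matches its digest."""
    data = (repo / digest[:12] / name).read_bytes()
    assert hashlib.sha256(data).hexdigest() == digest, "artifact tampered with"
    return data

tag = publish(b"binary build #42", "app.jar")
print(fetch(tag, "app.jar") == b"binary build #42")  # True
```

Because the digest is derived from the content, publishing the same bytes twice yields the same tag, and any tampering is caught at fetch time.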
4. CI - Test Stages: People, Process, and Technology
- People: Testers, QA Engineers
- Technologies: Selenium, Appium, JMeter, SoapUI, Tarantula
After the build process, a series of automated tests validate the correctness of the code. This stage prevents errors from reaching production. Depending on the size of the build, these checks can last from a few seconds to several hours. In large organizations where multiple teams are involved, the checks run in parallel environments, which saves precious time and notifies developers of bugs early.
These automated tests are set up by testers (QA engineers), who write test cases and scenarios based on user stories. They perform regression analysis and stress tests to check for deviations from the expected output. Activities associated with this stage include sanity tests, integration tests, and stress tests; this is an advanced level of testing. To sum up, the testing process helps reveal issues that were unknown to the developer while writing the code.
4.1 Integration Tests:
Tools such as Cucumber, Selenium, and many more enable QA engineers to perform integration tests by combining individual application modules and testing them as a group, evaluating their compliance against specified functional requirements. Eventually, someone needs to approve the set of updates and move them to the next stage, which is performance testing. Even though this verification process can be cumbersome, it is an important part of the overall process. Thankfully, emerging solutions can take care of the verification process; the Delivery Intelligence module of the OpsMx ISD platform is one such solution.
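The difference from a unit test is that an integration test exercises modules as a group. The two "modules" below are invented for illustration; each could pass its own unit tests, while the integration test checks that the output of one is a valid input to the other.

```python
# Module A: turn a raw order line into a structured order.
def parse_order(raw: str) -> dict:
    item, qty = raw.split(",")
    return {"item": item.strip(), "qty": int(qty)}

# Module B: price a structured order.
def price_order(order: dict, unit_price: float) -> float:
    return order["qty"] * unit_price

def test_order_flow():
    """Integration test: modules A and B working together, end to end."""
    order = parse_order("widget, 3")
    assert order == {"item": "widget", "qty": 3}
    assert price_order(order, 2.5) == 7.5

test_order_flow()
print("integration test passed")
```

A classic integration failure would be module A renaming the `qty` key: both modules' unit tests could still pass, but the flow above would break, which is exactly what this stage is there to catch.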
4.2 Load and Stress Testing:
One of the primary responsibilities of QA engineers is to ensure that an application remains stable and performs well when exposed to high traffic. To verify this, they perform load and stress testing using automated testing tools such as JMeter and Selenium. However, this test is not run on every single update, because full stress testing is time-consuming. So, whenever teams need to release a set of new capabilities, they usually group multiple updates together and run full performance testing. When only a single update has to move to the next stage, the pipeline may include canary testing as an alternative.
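A load test ultimately reduces to a question like "does the 95th-percentile latency stay under budget at N requests?" The sketch below simulates that check with fake latencies; a real tool such as JMeter would fire actual requests, but the pass/fail logic is the same. The latency distribution and budget here are invented numbers.

```python
import random

# A toy load test: fire N simulated requests and check that the
# 95th-percentile latency stays under the agreed budget.
random.seed(7)  # fixed seed so the toy run is reproducible

def handle_request() -> float:
    """Stand-in for a real request; returns a simulated latency in ms."""
    return random.gauss(120, 20)

latencies = sorted(handle_request() for _ in range(1000))
p95 = latencies[int(0.95 * len(latencies))]
budget_ms = 200

print(f"p95 = {p95:.1f} ms, budget = {budget_ms} ms")
assert p95 < budget_ms, "stress test failed: latency budget exceeded"
```

Percentiles rather than averages are the usual gate here, because an average can look healthy while the slowest 5% of users have an unacceptable experience.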
What is Continuous Delivery?
Continuous Delivery (CD) is a DevOps methodology that enables development teams to deploy changes, such as new features, configuration updates, bug fixes, and experiments, into production safely, quickly, and sustainably. The continuous delivery process requires a collaborative approach from the different stakeholders in software delivery, such as the development, operations, and testing teams. It simplifies software updates by eliminating manual scripting and enabling real-time monitoring.
For this blog, we have taken Spinnaker as the standard CD tool, so we need to introduce a new concept in the deployment process called Bake. Note that Bake is specific to Spinnaker; other CD tools do not have this step.
1. CD - Bake and Deploy: People, Process, and Technology
- People: Infrastructure Engineers, Site Reliability Engineers (SRE), Operation Engineers
- Technology: Spinnaker, Argo CD, Tekton CD
Once the code has completed its journey through the testing stage, it is safe to assume that it is now qualified to be deployed into the servers, where it will merge with the main application. But before getting deployed into production, it will be deployed into the test/staging or a beta environment that is internally tested by the product team.
Finally, before the builds move to these environments, they pass through two sub-stages, Bake and Deploy, both of which are native to Spinnaker.
“Baking” refers to creating an immutable image instance from the source code together with the current production configuration. These configurations can include database changes and other infrastructure updates. Spinnaker can trigger Jenkins to perform this task, while some organizations prefer to use Packer.
Spinnaker automatically passes the baked image to the deploy stage, where the server group is deployed to a cluster. The deployment stage then runs a process functionally identical to the testing described above: deployments move first to the test and staging environments and then, after approvals and checks, to production. This entire process is handled by tools like Spinnaker.
1.1 Deployment Testing and Verification
This is also a key phase for teams to optimize in the overall CI/CD process. By now, the code has undergone a rigorous testing phase, so it is rare for it to fail at this point. Even so, if it does fail, teams must be ready to resolve the failure as quickly as possible to minimize the impact on end customers. Teams should also consider automating this phase. Deployment to production is carried out using strategies like Blue-Green, Canary Analysis, and Rolling Update. During the deployment stage, the running application is monitored to validate whether the current deployment is healthy or needs to be rolled back.
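The promote-or-rollback decision behind a canary deployment can be expressed as a simple comparison of error rates between the stable baseline and the canary. This is a hypothetical, deliberately simplified rule; real canary analysis (for example, in Spinnaker's automated canary analysis) compares many metrics statistically. The 1% tolerance below is an invented threshold.

```python
# A toy automated canary check: compare the canary's error rate against
# the stable baseline and decide whether to promote or roll back.
def canary_verdict(baseline_errors: int, baseline_total: int,
                   canary_errors: int, canary_total: int,
                   tolerance: float = 0.01) -> str:
    """Promote the canary only if its error rate is within `tolerance`
    of the baseline's error rate."""
    baseline_rate = baseline_errors / baseline_total
    canary_rate = canary_errors / canary_total
    return "promote" if canary_rate <= baseline_rate + tolerance else "rollback"

print(canary_verdict(50, 10_000, 55, 10_000))   # promote  (0.55% vs 0.50%)
print(canary_verdict(50, 10_000, 300, 10_000))  # rollback (3.00% vs 0.50%)
```

Automating this judgment is what lets a single update ship safely without a full performance-test cycle: a bad release is rolled back while it is still serving only a small slice of traffic.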
2. CD - Monitoring: People, Process, and Technology
- People: SREs, Ops Team
- Technology: Zabbix, Nagios, Prometheus, Elasticsearch, Splunk, AppDynamics, Tivoli
To make a software release failsafe and robust, it is essential to track the release's health in the production environment. Application monitoring tools trace performance metrics, such as CPU utilization and the latency of releases. Similarly, log analyzers scan the torrents of logs produced by the underlying middleware and OS to identify behavior and track the source of problems. In case of any issue in production, stakeholders are notified to ensure the production environment's safety and reliability. The monitoring stage helps businesses gather intelligence about how their new software changes contribute to revenue, and helps the infrastructure team track system behavior trends and carry out capacity planning.
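At its core, an alerting rule of the kind Prometheus or Zabbix evaluates looks like the sketch below: compare recent metric samples against a threshold and alert only on a sustained breach, so a single noisy spike does not page anyone. The threshold and streak length are invented example values, not any tool's defaults.

```python
# A toy monitoring rule: alert when CPU utilization exceeds a threshold
# for several consecutive samples (a sustained breach, not a blip).
def check_cpu(samples, threshold: float = 90.0, breaches_to_alert: int = 3) -> str:
    streak = 0
    for value in samples:
        streak = streak + 1 if value > threshold else 0
        if streak >= breaches_to_alert:
            return "ALERT: sustained high CPU"
    return "ok"

print(check_cpu([55, 62, 71, 68]))        # ok
print(check_cpu([85, 93, 95, 97, 60]))    # ALERT: sustained high CPU
print(check_cpu([55, 95, 60, 95, 60]))    # ok -- isolated spikes are ignored
```

Requiring consecutive breaches is the same idea as a "for" duration on an alerting rule: it trades a little detection latency for far fewer false alarms.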
3. CD - Continuous Deployment: People, Process, and Technology
- People: SREs, Ops, and Maintenance Team
- Technology: Jira, ServiceNow, Slack, email, HipChat
One of the primary goals of a DevOps team is to release faster and continuously, and then to continually reduce errors and performance issues. This is achieved through frequent feedback to developers and project managers about the new version's quality and performance, delivered over Slack or email and by promptly raising tickets in ITSM tools. Feedback systems are usually part of the overall software delivery process, so any change in the delivery is logged into the system and the delivery team can act on it.
An enterprise must evaluate a holistic continuous delivery solution that can automate, or facilitate the automation of, all the stages described above. If you are considering implementing a CI/CD pipeline or automating your CI/CD pipeline workflow, OpsMx can help.
The OpsMx ISD platform leverages a cloud architecture that can get you started by implementing a CI/CD workflow on a cloud server of your choice. The pipeline-as-code feature simplifies shifting any on-premise CI/CD pipeline to the cloud within minutes.