The State of Security and Compliance Automation
The significant increase in malicious attacks in recent years has forced organizations to shift from reactive, diagnostic approaches to proactive, preventive ones. Teams that integrate security practices throughout their entire software supply chain deliver software quickly, safely, and reliably, which means they can successfully achieve continuous security and compliance.
But this can be a challenging task for many organizations that still rely on manual, outdated compliance and security practices that impede software delivery performance. As a result, they discover compliance breaches late in the software delivery pipeline, resulting in costly mistakes that are difficult to correct.
At OpsMx, we recommend integrating security into your continuous delivery pipeline. This improves software delivery, hardens software, enhances compliance, and boosts operational performance by leveraging the following practices:
- Security testing
- Integrating information security reviews into every part of the software delivery lifecycle
- Building with pre-approved code
Imagine for a moment that there were a continuous compliance and security toolchain. OpsMx delivers that automation by integrating security and compliance as Compliance-as-Code into the continuous delivery pipeline.
Challenges of Integrating Security and Compliance in the Software Delivery Pipeline
When trying to integrate security and policy governance into the software delivery pipeline, companies often face some common challenges. Some of the challenges that OpsMx customers face are:
- Integrating app security
- Securing the delivery process
- Managing security at scale
Integrating app security
Application security does not only mean securing the application code or monitoring its risks; it also includes addressing challenges related to compliance and policies. Teams must ensure that every artifact deployed into the production environment has been certified by security scans, and that no changes are made to an artifact after it passes the testing phase. Real-time assessment of any security findings then speeds triage, accelerates approvals, and maintains your delivery flow.
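One practical way to check that an artifact is unchanged after testing is to compare its content digest at deploy time against the digest recorded when security scans certified it. The sketch below is a minimal illustration of that idea, not the OpsMx implementation; all names here are hypothetical.

```python
import hashlib

# Hypothetical registry of digests certified by security scans after testing.
certified_digests = set()

def sha256_digest(artifact_bytes: bytes) -> str:
    """Content digest used to identify an immutable artifact."""
    return hashlib.sha256(artifact_bytes).hexdigest()

def certify(artifact_bytes: bytes) -> str:
    """Record an artifact's digest once it passes scans and tests."""
    digest = sha256_digest(artifact_bytes)
    certified_digests.add(digest)
    return digest

def allow_deploy(artifact_bytes: bytes) -> bool:
    """Deploy only artifacts whose digest matches a certified one."""
    return sha256_digest(artifact_bytes) in certified_digests
```

In practice the same comparison is usually done on container image digests, so any post-test modification changes the digest and blocks the deployment.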
Securing the delivery process
Next, the challenge is to ensure the security and compliance of the delivery workflow itself. Companies must focus on security from code check-in to code complete, across multi-cloud, on-prem, and hybrid deployments. Automated policies should enforce separation of duties between application administration and application deployment, and maintain access control over the cluster and the secrets used by an application.
Managing security at scale
When integrating security and compliance into the delivery pipeline, the challenge is to provide visibility into the artifacts running across the supply chain. Rectifying vulnerabilities found in the libraries or application components of artifacts running in production as quickly as possible is key to achieving continuous security and compliance.
OpsMx supports integrating supply chain security into your delivery pipeline and helps improve resilience. Within the delivery pipeline itself, OpsMx lets you manage your keys and secrets and empowers you to harden the security of your CI/CD pipeline. By leveraging least-privilege access controls within the operational environment, OpsMx ISD ensures that only the right people connect to the production environment.
When trying to automate their pipeline with security and compliance, companies often take a manual approach. But whether you are transforming or simply improving your software delivery process through value stream mapping, implementing the OpsMx ISD platform can give you significant benefits.
Here’s how OpsMx has helped transform several companies to achieve some of these goals.
Positive Business Outcomes Experienced by Our Customers with OpsMx ISD Platform
A cohort of our customers has experienced major benefits, including the ability to manage substantial increases in pipelines, thereby scaling their systems and increasing the volume and velocity of releases. By reducing human toil and eliminating manual errors through intelligent automation, these companies have significantly reduced verification and approval times. Automating compliance and security increases software delivery performance further, which in turn results in faster discovery, detection, and remediation of security violations and vulnerabilities.
Success Case 1- Interswitch
A leading digital commerce player in Africa, Interswitch, needed to innovate frequently to provide sustainable payment solutions. Interswitch has been instrumental in transforming Africa’s payment landscape, but it was struggling with a number of problems:
1. Script-based deployment was time-consuming
Interswitch migrated from monolithic software to microservices and containerized applications. Initially, their teams wrote numerous scripts using Jenkins and SSH plugins to automate deployments into multiple environments. As their pace of innovation increased, the non-standard deployment process and the effort required to maintain these scripts made software deployments slow and complex.
2. Complying with Financial Regulations
Being in the financial industry, Interswitch had to strictly comply with many regulations and standards. It was particularly challenging for the IT team to manually address policy checks and adhere to the SDLC regulations set by the compliance team. Moreover, auditing and investigating non-compliance issues was a cumbersome task for their compliance managers.
3. Lack of Secured Delivery
Interswitch stores and processes personal information in its own data centers, so it needed to reduce the risk of loss, unauthorized access, and leakage of personal information. Their IT team uses firewalls and data encryption mechanisms to control access to its data centers, and for security reasons their production Kubernetes clusters are configured in on-prem data centers. However, there was no proper mechanism to deliver applications automatically into these secured clusters. That’s when they realized they needed a streamlined, modern continuous delivery process that eliminated excessive dependence on tribal knowledge, adhered to compliance requirements, and handled their scale.
Interswitch adopted OpsMx ISD, a scalable, modern continuous delivery platform, to quickly and safely deploy apps into Kubernetes. With OpsMx, Interswitch has successfully automated its end-to-end software delivery pipeline, seamlessly integrating its DevOps toolchain, including a code management system (Bitbucket), a CI system (Jenkins), a vulnerability scanning tool (SonarQube), and an artifact repository (JFrog). Now, Interswitch’s developers and DevOps engineers use OpsMx ISD to securely deploy applications into the cloud and into on-prem Kubernetes behind the firewall. As a result, Interswitch is able to deliver applications at scale, with nearly 100 developers, 10 DevOps engineers, and project managers using OpsMx ISD to orchestrate the deployment and delivery of their payment applications. Most importantly, their DevOps team is automating canary analysis seamlessly and easily estimating the risk of new releases.
OpsMx Autopilot automatically gathers metrics from canary and baseline pods, applies AI/ML to perform a risk assessment, and enables automated decisions to roll out or roll back based on the risk score. Additionally, Autopilot helps the DevSecOps teams address regulatory and compliance concerns by allowing them to define, automate, and enforce policies within their delivery pipeline. All software passes through these automated policy gates, ensuring security and compliance before reaching production. As a result, the time from code check-in to production has dropped from days to hours, as everything is streamlined under one umbrella, reducing lead time by at least 70%. With OpsMx, Interswitch has gained end-to-end visibility to identify deployment failures in the software delivery process. Because progress is tracked at all times from Dev through UAT to production, their compliance stakeholders can get an audit report for any time period, giving them insights into policy violations, pipeline failures, and bad deployments.
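The roll-out-or-rollback decision described above can be pictured as a threshold on a risk score computed from canary versus baseline metrics. The sketch below is a simplified illustration under assumed metric names and thresholds, not the actual Autopilot algorithm.

```python
def risk_score(baseline: dict, canary: dict) -> float:
    """Score 0-100: average relative degradation of canary vs. baseline.

    Assumes higher metric values are worse (e.g. error rate, latency).
    """
    degradations = []
    for metric, base_value in baseline.items():
        canary_value = canary.get(metric, base_value)
        if base_value > 0:
            # Only penalize regressions; improvements contribute zero.
            degradations.append(max(0.0, (canary_value - base_value) / base_value))
    return min(100.0, 100.0 * sum(degradations) / len(degradations))

def decide(baseline: dict, canary: dict, threshold: float = 20.0) -> str:
    """Automated decision: promote the canary or roll it back."""
    return "rollback" if risk_score(baseline, canary) > threshold else "promote"
```

For example, a canary whose p99 latency grows from 200 ms to 210 ms scores low and is promoted, while one whose error rate triples is rolled back; the real system weighs many more signals than this sketch.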
Success Case 2- Symphony Communication Services
Symphony is a large force in the financial services industry, acting as the collaboration hub for large (and small) firms across banking, brokerage, and wealth management.
As the collaboration platform is highly regulated and serves millions of customers, meeting these rigorous standards means maintaining the utmost level of security and regulatory compliance. Symphony needed to reduce development costs and speed innovation, and they turned to us for help.
Moreover, they were looking to deploy a major update to the platform that would enable a multi-tenant system, allowing them to allocate dedicated resources to each client for maximum performance. One particular pain point was a CI value stream that created a bottleneck: their distributed SaaS platform, with an ever-growing number of pipelines and software delivery environments, was increasing the burden on human resources and skyrocketing infrastructure costs. Maintaining compliance standards at this scale of operation was becoming quite difficult, and most importantly, any potential security vulnerability introduced during delivery could cripple the business. To avoid this, they looked for a modern continuous delivery solution for both infrastructure and software that could retrofit into their existing platform, and they wanted to accomplish all this within 90 days. Their challenges fell into three areas:
- Lack of an agile infrastructure
- The requirement to meet business demand
- Lack of a hardened continuous delivery platform with security features
1. Lack of an agile infrastructure
Previously, Symphony applied changes to their production system using manual steps and complex Terraform scripts. Idle systems were hard to manage and inflated costs. Finally, the process for deploying new infrastructure to onboard new customers was taking too long; they needed a significantly shorter and simpler deployment process.
2. The requirement to meet business demand
In order to attain a higher velocity of deployments, Symphony needed to secure software delivery. They needed a platform that could run verification checks throughout the entire delivery process and provide useful insights into risk, informing approvals while maintaining velocity.
3. Lack of Secured Delivery
The process of moving changes from development to test and finally to production was complex. Tenant infrastructure deployment failed to keep pace with their needs and clients had to wait for their product. With expanding pipelines, their SREs were burdened with troubleshooting which increased the triage time as well. So, they needed a platform that could guide the SREs to the problem area, rather than wasting their time troubleshooting the source of the problem.
Symphony automated the onboarding of new tenants by automating the entire CI/CD process with the OpsMx ISD platform. They increased deployment frequency from 2-3 updates per week to 50 per day, which allowed Symphony to reduce the time to onboard new clients by 98%: it now takes barely 10 minutes to get the infrastructure and platform ready for a client. Finally, as Symphony continuously increases the number of services provided to customers, these automated pipelines with granular access control ensure safety and security.
Additionally, OpsMx addressed cost optimization by integrating with Terraform. Thanks to the deep integration of OpsMx ISD and Terraform, they now combine database, infrastructure, and software updates in an automated pipeline. This slashed their infrastructure costs because they can automatically provision and de-provision their dev, test, and staging environments. It was also a big enabler of speed, since developers no longer needed to wait for a DevOps engineer to build test environments. The development, test, and SRE teams also use the OpsMx security framework and the delivery intelligence layer, Autopilot, which relieves the burden the SREs previously had to manage. OpsMx also lets them define and enforce policies throughout their pipeline, with AI algorithms managing compliance checks at every stage. Erroneous code or configurations are red-flagged and the SREs notified, freeing them from the daily toil of troubleshooting. Thus, teams have seen a significant improvement in delivery velocity without compromising security.
Automating Reliable Enforcement of Policy through Compliance-as-Code
What is Compliance-as-Code, and how does it let a delivery system evaluate and enforce the behavior we want as changes flow through?
Staying compliant in a cloud-first world is a ubiquitous challenge, not just for startups but also for enterprises. Many organizations fail to achieve continuous security and compliance and, as a result, fall back on ad hoc, time-consuming, and error-prone processes. Traditionally, they relied on a quick checklist of scripts to achieve compliance in software delivery, which exposes the organization to business risk. Most organizations agree that they need to improve their speed of delivery, which is why it is important to eliminate manual steps and hard-coded checks, and to provide better visibility into the compliance checks in the delivery process.
This is where Compliance-as-Code comes into the picture. Compliance-as-Code refers to the practices that allow DevSecOps teams to embed the three core activities at the heart of compliance: prevent, detect, and remediate. It enables automation and reliably enforces governance and security policies across the delivery pipeline; for instance, enforcing Sarbanes-Oxley (SOX) compliance in the software delivery process, or specific role-based access control for approval checks before deploying to production. These approvals could be direct or indirect: for example, a developer checking code into Git triggers the pipeline, and that check-in counts as one person approving the delivery of the code.
Another set of approvals comes from the quality and operations side: the data from the testing or security checks required before a release is approved for deployment to production. Such an approval needs visibility into which checks were performed and their results, and it can be granted through Slack or other ChatOps tools. The security checks in question may be static code analysis, binary scans, or dynamic code analysis; performance and functional test results are also useful inputs to the deployment decision. This visibility requires the delivery platform to gather the results of all of these tests so that the decision can be made, and the ability to evaluate this data against defined policies empowers organizations to deliver faster. The people who define these policies, often in a central governance organization, may be different from the people who configure the delivery pipeline. Therefore, organizations should refrain from capturing policies as text in a document; instead, they should express them as rules in code that can be applied and enforced automatically.
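As a concrete example, a separation-of-duties rule such as "the code author cannot be the sole approver for production" can be expressed as code rather than as prose in a document. The snippet below is a hypothetical illustration in Python; in practice, teams often express such rules in a dedicated policy engine such as Open Policy Agent.

```python
def sox_separation_of_duties(change: dict) -> list:
    """Return policy violations for a proposed production deployment.

    `change` is a hypothetical record: {"author": ..., "approvers": [...]}.
    """
    violations = []
    # At least one approver must be someone other than the author.
    independent = [a for a in change["approvers"] if a != change["author"]]
    if not independent:
        violations.append("no approver independent of the author")
    return violations

def may_deploy(change: dict) -> bool:
    """Gate decision: deploy only when no policy is violated."""
    return not sox_separation_of_duties(change)
```

Because the rule is code, it can be versioned, reviewed, and enforced automatically at a pipeline gate instead of relying on someone remembering a paragraph in a compliance document.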
Enforcing Compliance into the Delivery workflow
While enforcing compliance, certain steps happen within the delivery workflow: a build step, a static code analysis step, an integration step, a dynamic code analysis step, a binary scan, and so on. These are individual steps performed in a certain sequence, though some of them may run in parallel. Conceptually, three steps are key to achieving security and compliance automation:
- Provide the guardrails in the delivery pipeline, ensuring the necessary checks are included in the pipeline.
- Provide a gating mechanism: if these checks fail, stop the delivery from reaching production environments and resolve the findings before moving forward.
- Define and automate specific policies that are set up within these pipelines.
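The steps above can be sketched as a gate that runs the required checks in order and halts delivery on the first failure. This is a minimal illustration; the stage names are hypothetical, and a real pipeline would record results for audit and notify the owning team.

```python
def gated_pipeline(stages):
    """Run pipeline stages in order; stop delivery at the first failing check.

    `stages` is a list of (name, check_fn) pairs, where check_fn
    returns True when the check passes.
    """
    for name, check in stages:
        if not check():
            # Gating mechanism: a failed check blocks promotion.
            return f"blocked at {name}"
    return "deploy to production"
```

The guardrail is that every required check appears in `stages`; the gate is the early return; the policies are whatever each `check_fn` evaluates.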
Additionally, compliance should extend to the runtime environment, for example an AWS EKS environment, with policies applied continuously to the software being deployed, such as load balancer policies, firewall rules, and SSL certificates. Automating everything end-to-end gives the confidence and reliability needed to deliver software faster.
To summarize, continuous compliance and security automation integrates security checks into the continuous delivery pipeline as part of the delivery process itself. It also ensures that the guardrails are in place, that the specific security tests run as part of the delivery pipeline, and that the results of those tests are verified and conform to the defined policies.
If you’d like to learn more, you can watch the webinar Using DevSecOps for Continuous Compliance and Security Automation.