CI/CD

In CI, developers commit code frequently to a shared code repository, and the code is built frequently. Each build is verified using automated unit tests and integration tests. CD goes a step further: builds are deployed to test environments and validated with automated (and possibly manual) tests, and successful builds are then deployed to staging or production environments.

Continuous Monitoring

You can proactively monitor services by creating alerts and performing real-time analysis. You can track various metrics to monitor and improve your DevOps practice. Examples of DevOps-related metrics are as follows: 

Change volume: This is the number of user stories developed, the number of lines of new code, and the number of bugs fixed. 

Deployment frequency: This indicates how often a team is deploying an application. This metric should generally remain stable or show an upward trend. 

Lead time from development to deployment: The time between the beginning of a development cycle and the end of deployment can be used to identify inefficiencies in the intermediate steps of the release cycle. 

Percentage of failed deployments: The percentage of failed deployments, including the number of deployments that resulted in outages, should be low. This metric should be reviewed in conjunction with the change volume. Analyze potential points of failure if the change volume is low but the number of failed deployments is high. 

Availability: Track how many releases caused failures that possibly resulted in violations of service-level agreements (SLAs). What is the average downtime for the application? 

Customer complaint volume: The number of complaint tickets filed by customers indicates the quality of your application. 

Percentage change in user volume: The number of new users signing up to use your application, and the resulting increase in traffic, can help you scale your infrastructure to match the workload.
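
Two of the metrics above, deployment frequency and the percentage of failed deployments, can be derived directly from a deployment log. The sketch below illustrates this with a hypothetical log; the record format and function names are illustrative, not from any specific monitoring tool.

```python
from datetime import date

# Hypothetical deployment log for one team: (date, succeeded) pairs.
deployments = [
    (date(2024, 5, 1), True),
    (date(2024, 5, 3), True),
    (date(2024, 5, 7), False),   # failed deployment (e.g., caused an outage)
    (date(2024, 5, 10), True),
    (date(2024, 5, 14), True),
]

def deployment_frequency(log, period_days):
    """Average deployments per week over the observed period."""
    return len(log) / (period_days / 7)

def failed_deployment_pct(log):
    """Percentage of deployments that failed."""
    failures = sum(1 for _, ok in log if not ok)
    return 100.0 * failures / len(log)

print(deployment_frequency(deployments, 14))  # 2.5 deployments per week
print(failed_deployment_pct(deployments))     # 20.0
```

Tracking these two numbers together supports the analysis described above: a low change volume combined with a high failure percentage points to problems in the release process itself.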

IaC

With IaC, we can define our infrastructure in the form of templates. A single template may describe part or all of an environment. More importantly, this template can be used repeatedly to create the same environment again.

 

In IaC, infrastructure is spun up and managed using code. An IaC model helps you interact with infrastructure programmatically at scale and avoid human error by automating resource configuration. Because the infrastructure is managed through code, the application can be deployed using a standardized method, and patches and version updates can be applied repeatedly without errors.
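
The key property of a template-driven IaC model is idempotency: applying the same template twice yields the same environment. The following is a minimal sketch of that idea under stated assumptions; the template structure, resource names, and `apply` function are illustrative inventions, not the API of any real IaC tool.

```python
# The "template" is plain data describing the desired environment.
template = {
    "web-server": {"type": "vm", "size": "medium"},
    "app-db":     {"type": "database", "engine": "postgres"},
}

def apply(template, environment):
    """Reconcile the live environment with the template (idempotent)."""
    # Create or update any resource that differs from its declared spec.
    for name, spec in template.items():
        if environment.get(name) != spec:
            environment[name] = dict(spec)
    # Remove resources no longer declared in the template.
    for name in list(environment):
        if name not in template:
            del environment[name]
    return environment

env = {}
apply(template, env)   # first run builds the environment
apply(template, env)   # second run is a no-op: the result is identical
print(env == template)  # True
```

Real tools such as Terraform or AWS CloudFormation follow the same reconcile-to-declared-state pattern, with far richer dependency handling.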

Configuration Management

Configuration management (CM) is the process of using automation to standardize resource configurations across your entire infrastructure and applications. CM tools such as Chef, Puppet, and Ansible can help you manage IaC and automate most system administration tasks, including provisioning, configuring, and managing IT resources.

 

In addition to storing configurations, a CM application allows you to keep them under version control. CM is also a way to track and audit configuration changes. If necessary, you can even maintain multiple versions of configuration settings for different software versions.
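
The version-tracking behavior described above can be sketched in a few lines. This is a toy illustration only; the `ConfigStore` class and its methods are invented for this example, and real CM tools keep this history in version control systems.

```python
import copy

class ConfigStore:
    """Toy store that keeps every committed configuration version."""

    def __init__(self):
        self._versions = []  # full audit trail of changes

    def commit(self, settings):
        # Deep-copy so later mutation of the caller's dict can't rewrite history.
        self._versions.append(copy.deepcopy(settings))
        return len(self._versions)  # version number, starting at 1

    def get(self, version):
        return self._versions[version - 1]

store = ConfigStore()
store.commit({"max_connections": 100})
v2 = store.commit({"max_connections": 200})
print(store.get(1)["max_connections"])  # 100 -- old version remains auditable
print(v2)                               # 2
```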

DevSecOps

A DevSecOps practice must be embedded in every step of the CI/CD pipeline. DevSecOps ensures the security of the CI/CD pipeline by managing the access and roles assigned to each server and by making sure that build servers, such as Jenkins, are hardened against security vulnerabilities. In addition, we need to ensure that all artifacts are validated and that code analysis is in place. It’s advisable to be ready for incident response by automating continuous compliance validation and incident response remediation.

For instance, if an organization needs to comply with the Payment Card Industry Data Security Standard (PCI-DSS), continuous compliance validation would involve setting up automated tools and processes to constantly check that the handling, processing, and storage of credit card information meet PCI-DSS requirements.

  • In the Code phase, scan all code to ensure no secrets or access keys are hardcoded in the source. 
  • During the Build phase, include all security artifacts, such as encryption keys and access token management, and tag them for easy identification. 
  • During the Test phase, scan the configuration and run security tests to make sure all security standards are met. 
  • In the Deploy and Provision phases, ensure all security components are registered, and perform a checksum to verify that the build files have not changed. A checksum is a technique used to determine the authenticity of received files; operating systems provide checksum commands to validate that a file was not altered during transfer. 
  • In the Monitor phase, perform continuous audits and validation in an automated way. 
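
The checksum step in the Deploy phase can be sketched with Python's standard `hashlib` module. The artifact contents and variable names below are invented for illustration; the SHA-256 comparison itself is the standard technique.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex-encoded SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

# Digest recorded at build time, published alongside the artifact.
build_artifact = b"app-build-1.4.2 binary contents"
published_checksum = sha256_of(build_artifact)

# After transfer, recompute and compare before deploying.
received = build_artifact
assert sha256_of(received) == published_checksum  # unchanged: deploy proceeds

tampered = received + b"malicious payload"
print(sha256_of(tampered) == published_checksum)  # False -- reject the build
```

The same comparison is what OS-level commands such as `sha256sum` perform on downloaded files.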

You can integrate multiple tools into DevSecOps pipelines to identify security vulnerabilities at various stages and aggregate the vulnerability findings.

Application security testing (AST), which involves using tools to automate the testing, analysis, and reporting of security vulnerabilities, is a critical component of application development. AST can be broken down into the following four categories to scan for security vulnerabilities in software applications: 

  • Software composition analysis (SCA): SCA evaluates the open-source software’s security, license compliance, and code quality in a codebase. SCA attempts to detect publicly disclosed vulnerabilities contained within a project’s dependencies. Popular SCA tools are OWASP Dependency-Check, Synopsys’ Black Duck, WhiteSource, Snyk, and GitLab. 
  • Static application security testing (SAST): SAST involves scanning an application’s code prior to compilation. These tools provide developers with immediate feedback during the coding process, allowing for the early correction of issues before the code build phase. As a white-box testing method, SAST analyzes the source code to identify vulnerabilities that could make applications prone to attacks. Its key advantage is its integration early in the DevOps cycle, during the coding stage, as it doesn’t require a functioning application or code execution. Popular SAST tools include SonarQube, PHPStan, Coverity, Snyk, Appknox, Klocwork, CodeScan, and Checkmarx. 
  • Dynamic application security testing (DAST): DAST identifies security vulnerabilities by mimicking external attacks on an application while it is running. It assesses the application from the outside, probing exposed interfaces for vulnerabilities. Known as black-box security testing or a web application vulnerability scanner, DAST tools include OWASP ZAP, Netsparker, Detectify Deep Scan, StackHawk, Appknox, HCL AppScan, GitLab, and Checkmarx. 
  • Interactive application security testing (IAST): IAST examines code for security vulnerabilities while the application is actively being tested or used, thus reporting issues in real time without causing delays in the CI/CD pipeline. IAST tools are typically implemented in QA environments alongside automated functional tests. Notable IAST tools are GitLab, Veracode, CxSAST, Burp Suite, Acunetix, Netsparker, InsightAppSec, and HCL AppScan. 
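
To make the static-analysis idea concrete, here is a toy check in the spirit of SAST that flags lines resembling hardcoded credentials, as called for in the Code phase above. The patterns and function are deliberately simplistic inventions; real SAST tools use far more sophisticated analysis.

```python
import re

# Naive patterns for things that look like hardcoded secrets.
SECRET_PATTERNS = [
    re.compile(r"(password|secret|access_key)\s*=\s*['\"][^'\"]+['\"]", re.I),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
]

def scan(source: str):
    """Return the line numbers of lines that match a secret pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            findings.append(lineno)
    return findings

code = 'db_password = "hunter2"\nuser = input()\n'
print(scan(code))  # [1]
```

In a real pipeline, a nonempty findings list would fail the Code-phase gate before the build ever starts.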

CD Strategies

CD provides seamless migration from the existing version of an application to the new version. Some of the most popular techniques for implementing CD are as follows: 

  • In-place deployment: Update application on a current server 
  • Rolling deployment: Gradually roll out the new version in the existing fleet of servers 
  • Blue-green deployment: Gradually replace the existing server with the new server 
  • Red-black deployment: Instant cutover to the new server from the existing server 
  • Immutable deployment: Stand up a new set of servers altogether

 

Best practices for choosing the right deployment strategy 

 

In-place deployment: In-place deployment is ideal for scenarios where simplicity is key and the application is relatively small or has a limited user base. For instance, updating a company’s internal tool with a small team fits this approach well. It involves updating the application on the current server, but it’s important to note that it can cause downtime. This strategy is not the best fit for large-scale or high-availability applications. A notable example would be updating a small-scale web service overnight with low user traffic. It’s crucial to have a rollback strategy so that, if the update fails, you can quickly restore the previous version and minimize disruption. 

 

Rolling deployment: Rolling deployment is suitable for applications that need minimal downtime but don’t require additional resources. This approach updates the application gradually across the existing fleet of servers. An example would be deploying an update to an e-commerce website’s servers in stages, ensuring that only a portion of users experience any potential issues at a time. However, this method is unsuitable for applications that cannot simultaneously handle different versions. Continuous monitoring of application performance during the deployment is key to addressing issues as they arise. 
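
The batch-by-batch behavior of a rolling deployment can be sketched as follows. The fleet representation, function names, and health check are illustrative assumptions, not a real deployment tool's API.

```python
def rolling_deploy(servers, new_version, batch_size, health_check):
    """Update the fleet in fixed-size batches, halting on a failed batch."""
    for i in range(0, len(servers), batch_size):
        batch = servers[i:i + batch_size]
        for server in batch:
            server["version"] = new_version
        if not all(health_check(s) for s in batch):
            return False  # stop the rollout; remaining servers keep the old version
    return True

fleet = [{"name": f"web-{n}", "version": "1.0"} for n in range(6)]
ok = rolling_deploy(fleet, "1.1", batch_size=2, health_check=lambda s: True)
print(ok, all(s["version"] == "1.1" for s in fleet))  # True True
```

Because updated and not-yet-updated servers coexist mid-rollout, this sketch also shows why the application must tolerate two versions running at once.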

 

Blue-green deployment: Blue-green deployment is best for critical applications where zero downtime is essential. A financial services company might use this strategy to update its customer-facing application: the new version is deployed to a parallel (green) environment while the existing (blue) environment continues serving traffic. Once the green environment is thoroughly tested and ready, traffic is switched from blue to green. This method requires double the resources but offers a seamless user experience and quick rollback capability. It’s crucial to ensure that load balancing and DNS switching mechanisms are robust and reliable. 
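
The traffic switch at the heart of blue-green deployment can be sketched as a single atomic router update, gated on a smoke test. Everything here (the router dict, `switch_traffic`, the smoke test) is an illustrative assumption standing in for a load balancer or DNS change.

```python
# Two full environments exist side by side; the router decides which is live.
environments = {"blue": "v1.3 (current)", "green": "v1.4 (staged)"}
router = {"live": "blue"}

def switch_traffic(router, target, smoke_test):
    """Flip traffic to `target` only if it passes the smoke test."""
    if smoke_test(target):
        previous, router["live"] = router["live"], target
        return previous  # old environment kept warm for instant rollback
    return None          # test failed: traffic stays where it is

old = switch_traffic(router, "green", smoke_test=lambda env: True)
print(router["live"], old)  # green blue
```

Rollback is the same operation in reverse, which is why blue-green offers such a fast recovery path.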

 

Red-black deployment: Red-black deployment is similar to blue-green but focuses on a faster cutover to the new version. It is particularly effective for quickly releasing new versions and is often used in containerized environments. For example, a media streaming service might deploy a new version of its platform using this strategy, ensuring immediate availability of new features to all users. While it offers rapid release and immediate switching, thorough testing of the new version is crucial as rollback involves reverting to the old environment. 

 

Immutable deployment: Immutable deployment ensures consistency and reliability, especially in cloud environments. Each deployment involves setting up new servers, guaranteeing a predictable and stable state. This approach could benefit an application with complex dependencies, as it avoids the “configuration drift” seen in long-lived environments. This strategy requires efficient management of infrastructure resources, as it involves provisioning new servers and decommissioning old ones with each release.

Code Pipeline

The code pipeline enables you to add actions to stages in your CI/CD pipeline. Each action can be associated with a provider that executes the action. The code pipeline action categories and examples of providers are as follows: 

Source: Your application code needs to be stored in a central, version-controlled repository, called a source code repository. Some popular code repositories are AWS CodeCommit, Bitbucket, GitHub, Concurrent Versions System (CVS), Subversion (SVN), and so on. 

Build: The build tool pulls code from the source code repository and creates an application binary package. Some of the popular build tools are AWS CodeBuild, Jenkins, Solano CI, and so on. Once the build is completed, you can store the binaries in an artifact repository such as JFrog Artifactory. 

Deploy: The deployment tool helps you to deploy application binaries on the server. Some popular deployment tools are AWS Elastic Beanstalk, AWS CodeDeploy, Chef, Puppet, Jenkins, and so on. 

Test: Automated testing tools help you perform post-deployment validation. Some popular test-validation tools are Jenkins, BlazeMeter, Ghost Inspector, etc. 

Invoke: You can use an event-based script to invoke activities such as backups and alerts. Any scripting language, such as shell script, PowerShell, or Python, can be used to invoke various customized activities.

Approval: Approval is an essential step in CD. You can either request manual approval via an automated email trigger, or automate the approval through tools.
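
The stage/action/provider model described above can be sketched as plain data plus a small runner that fails fast. The pipeline contents and the `run` function are illustrative assumptions, not the API of AWS CodePipeline or any other tool.

```python
# Each stage is a list of (action, provider) pairs, executed in order.
pipeline = [
    ("Source", [("checkout", "GitHub")]),
    ("Build",  [("compile", "Jenkins")]),
    ("Test",   [("smoke-test", "BlazeMeter")]),
    ("Deploy", [("rollout", "AWS CodeDeploy")]),
]

def run(pipeline, execute):
    """Run every action; stop at the first failure (fail fast)."""
    for stage, actions in pipeline:
        for action, provider in actions:
            if not execute(stage, action, provider):
                return f"failed at {stage}:{action}"
    return "succeeded"

print(run(pipeline, execute=lambda s, a, p: True))  # succeeded
```

A manual-approval step would simply be another action whose `execute` blocks until a human (or an automated policy) responds.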

Best Practices

Consider the following points while designing the pipeline: 

  • The number of stages: Stages could be development, integration, system, user acceptance, and production. Some organizations also include dev, alpha, beta, and release stages. 
  • Types of tests in each stage: Each stage can have multiple types of tests, such as unit tests, integration tests, system tests, UATs, smoke tests, load tests, and A/B tests at the production stage. 
  • The sequence of a test: Test cases can be run in parallel or need to be in sequence. 
  • Monitoring and reporting: Monitor system defects and failures and send notifications as failures occur.
  • Infrastructure provisioning: Methods to provision infrastructure for each stage. 
  • Rollback: Define the rollback strategy to fall back to the previous version if required.

It is better to store build configurations outside of code. Externalizing these configurations to tools that keep them consistent between builds enables better automation and allows your process to scale much more quickly. 

The twelve-factor methodology can be used to apply architecture best practices at each step of application development, as recommended by The Twelve-Factor App (https://12factor.net/).

In DevSecOps, securing the CI/CD pipeline is achieved through AWS Identity and Access Management (IAM) roles, which restrict access strictly to the necessary resources. Encryption and Secure Sockets Layer (SSL) are employed to protect pipeline data both at rest and in transit. Sensitive details like API tokens and passwords are securely stored in the AWS Parameter Store.

Excerpts from Solutions Architect’s Handbook
