There are many DevOps lifecycle tools out there; GitLab, however, is a complete package designed for coordinating CI/CD pipelines.

GitLab is a web-based DevOps lifecycle tool. It offers functionality to automate the entire DevOps life cycle, from planning through creation, build, verification, security testing, deployment, and monitoring, and it provides high availability and replication. It is highly scalable and can be used on-premises or in the cloud. GitLab also includes a wiki, issue tracking, and CI/CD pipeline features.

When DevOps projects are spread across large, geographically dispersed teams, a complete DevOps tool is highly useful for maintaining collaboration, incorporating feedback, avoiding mistakes, and speeding up the development process.

GitLab goes beyond being just a repository manager; it has built-in CI/CD, which saves enormous amounts of time and keeps the workflow smooth. Along with its own CI/CD, GitLab also allows for a range of third-party integrations with external CI tools, so you always have the option of working with the tools that suit your workflow.

Here is a quick run-through on how to start with GitLab.

Project

In GitLab, we can create projects to host a codebase, use them as issue trackers, collaborate on code, and continuously build, test, and deploy apps with the built-in GitLab CI/CD. Projects can be made public, internal, or private, as we choose, and GitLab does not limit the number of private projects we create.

Create a project in GitLab

In the dashboard, click the green “New project” button or use the plus icon in the navigation bar. This opens the New Project page.

On the New Project page:

  • Create a Blank project
  • Fill in the name of your project in the Project name field
  • The Project URL field shows the URL path under which the GitLab instance will host the project
  • The Project slug field will be auto-populated
  • The Project description (optional) field lets you enter a description for the project’s dashboard
  • Changing the Visibility Level modifies the project’s viewing and access rights for users
  • Selecting the Initialize repository with a README option creates a README file so that the Git repository is initialized, has a default branch, and can be cloned
  • Click Create project

Repository

A repository is one part of a project; a project also includes many other features.

Host your codebase in GitLab repositories by pushing files to GitLab. You can either use the user interface (UI) or connect your local computer with GitLab through the command line.

GitLab Basic Commands: https://docs.gitlab.com/ee/gitlab-basics/command-line-commands.html
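
For example, connecting an existing local project to a new GitLab repository from the command line might look like this (the group/project path is a placeholder):

git init                                                            # if the project is not yet a Git repository
git remote add origin git@gitlab.com:your-group/your-project.git    # point the local repository at GitLab
git add .
git commit -m "Initial commit"
git push -u origin master                                           # push and start tracking the default branch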

Branch

When you create a new project, GitLab sets master as the default branch for your project. You can choose another branch to be your project’s default under Settings > Repository.

Commits

When you commit your changes, you are introducing those changes to your branch. From the command line, you can commit multiple times before pushing.

A commit message is important to identify what is being changed and, more importantly, why. In GitLab, you can add keywords to the commit message that will perform one of the actions below:

  • Trigger a GitLab CI/CD pipeline: If your project is configured with GitLab CI/CD, a pipeline is triggered per push, not per commit.
  • Skip pipelines: You can add the keyword [ci skip] to your commit message, and GitLab CI will skip that pipeline.
  • Cross-link issues and merge requests: Cross-linking is great for keeping track of related work in your workflow. If you mention an issue or a merge request in a commit message, the commit will be shown on their respective threads.
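
For instance, a single commit message can combine these behaviors; the issue number here is purely illustrative:

git commit -m "Update install docs, see #42 [ci skip]"   # cross-links issue #42; [ci skip] skips the pipeline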

CI/CD Pipeline

Continuous Integration works by pushing small code chunks to your application’s codebase hosted in a Git repository and, on every push, running a pipeline of scripts to build, test, and validate the code changes before merging them into the main branch.

Continuous Delivery and Deployment go a step further than CI, deploying your application to production on every push to the default branch of the repository.

These methodologies allow you to catch bugs and errors early in the development cycle, ensuring that all the code deployed to production complies with the code standards you established for your app.

The two top-level components are:

  1. .gitlab-ci.yml
  2. GitLab Runner

.gitlab-ci.yml

The .gitlab-ci.yml file is where we configure what CI does with the project. It lives in the root of the repository. On any push to the repository, GitLab will look for the .gitlab-ci.yml file and start jobs on Runners according to the contents of the file, for that commit.

Pipeline configuration begins with jobs. Jobs are the most fundamental element of a .gitlab-ci.yml file.

Jobs are:

  • Defined with constraints stating under what conditions they should be executed
  • Top-level elements with an arbitrary name that must contain at least the script clause
  • Not limited in how many can be defined

For example:

job1:
  script: "execute-script-for-job1"

job2:
  script: "execute-script-for-job2"

GitLab Runner

GitLab Runner is a build instance that is used to run jobs and send the results back to GitLab. It is designed to run on the GNU/Linux, macOS, and Windows operating systems.

Runners run the code defined in .gitlab-ci.yml. They are isolated (virtual) machines that pick up jobs through the coordinator API of GitLab CI. If we want to use Docker, install the latest version; GitLab Runner requires a minimum of Docker v1.13.0.

Types of Runner:
  1. Specific Runner - useful for jobs that have special requirements or for projects with a specific demand (see the tag example after this list).
  2. Shared Runner - useful for jobs that have similar requirements across multiple projects. Rather than having many Runners idling for many projects, you can have a single Runner or a small number of Runners that handle multiple projects.
  3. Group Runner - useful when you have multiple projects under one group and would like all projects to have access to a set of Runners. Group Runners process jobs using a FIFO (First In, First Out) queue.
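
A job opts in to a Specific Runner by listing one of the tags that the Runner was registered with, for example:

deploy-job:
  script: "execute-deploy-script"
  tags:
    - docker    # picked up only by Runners registered with the "docker" tag
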
How to use Runner:
  1. Install Runner  -  https://docs.gitlab.com/runner/#install-gitlab-runner
  2. Register Runner - https://docs.gitlab.com/runner/register/index.html

Sample Docker Project:

  1. Create/upload two files: .gitlab-ci.yml and Dockerfile
File Contents:
.gitlab-ci.yml:

# Official docker image.
image: docker:19.03.0

services:
  - docker:19.03.0-dind

before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY

build-master:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  tags:
    - docker

Dockerfile

FROM python:2.7
RUN pip install howdoi
CMD ["howdoi"]
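
Before pushing, you can sanity-check the Dockerfile locally; the image name below is a placeholder:

docker build -t howdoi-test .                  # build the image from the Dockerfile
docker run --rm howdoi-test howdoi print date  # override CMD with a sample howdoi query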

Once we create the .gitlab-ci.yml file, each push will trigger the pipeline. We haven't created the Runner yet, so while committing these files, add “[ci skip]” to the commit message; this will skip the CI/CD pipeline.

2. Install Runner

For this example, we are using a specific runner and are going to install a runner in Windows. Refer to this link to Install Runner in Windows: https://docs.gitlab.com/runner/install/windows.html

3. Register Runner:

To register a runner, we need a registration token, which can be found under Settings > CI/CD, in the Runners section.

Copy the token from the Specific Runners section shown there.

To register a Runner under Windows, run the following command from the path where GitLab Runner was installed:
./gitlab-runner.exe register

Enter your GitLab instance URL:
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com)
https://gitlab.com

Enter the token you obtained to register the Runner:
Please enter the gitlab-ci token that we copied earlier

Enter a description for the Runner, you can change this later in GitLab’s UI:
Please enter the gitlab-ci description for this Runner
[hostname] my-runner

Enter the tags associated with the Runner, you can change this later in GitLab’s UI:
Please enter the gitlab-ci tags for this Runner (comma separated):
docker

Enter the Runner executor:
Please enter the executor: ssh, docker+machine, docker-ssh+machine, kubernetes, docker, parallels, virtualbox, docker-ssh, shell:
docker

If you chose Docker as your executor, you’ll be asked for the default image to be used for projects that do not define one in .gitlab-ci.yml:
Please enter the Docker image (eg. ruby:2.6):
python:2.7

Once the Runner is created successfully, it will be displayed under Settings > CI/CD > Runner > Specific Runner section.
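
The same registration can also be scripted non-interactively, which is handy when provisioning Runners automatically (the token value is a placeholder):

./gitlab-runner.exe register --non-interactive --url "https://gitlab.com" --registration-token "YOUR_TOKEN" --description "my-runner" --tag-list "docker" --executor "docker" --docker-image "python:2.7"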

4. Now, go to CI/CD > Pipelines and click the Run Pipeline button

It will open a new window. Click the Run Pipeline button again.

5. Once the job completes successfully, it will be displayed in the pipeline view.

Automation testing helps complete the entire software testing life cycle (STLC) in less time and improves the efficiency of the testing process.

Test automation enables teams to verify functionality, test for regression, and run simultaneous tests efficiently. In this article, we will take a detailed look at the automation testing tools available and the standards and best practices to be followed during test automation.

Following the best practices for the software testing life cycle (unit testing, integration testing, and system testing) ensures that the client gets the software as intended, without any bugs. End-to-end testing is the methodology used to test whether the flow of an application performs as designed from start to finish. Carrying out end-to-end tests helps identify system dependencies and ensures that the right information flows across the various system components.

Ultimately Automation Testing increases the speed of test execution and the test coverage.

When to Choose Automation Testing
  • There is a lot of regression work
  • The GUI is stable, but there are frequent functional changes
  • Requirements do not change frequently
  • Load and performance testing with many virtual users
  • Repetitive test cases that lend themselves well to automation and save time
  • Huge projects
  • Projects that need to test the same areas repeatedly

Steps to Implement Automation Testing
  • Identify areas within software to automate
  • Choose the appropriate tool for test automation
  • Write test scripts
  • Develop test suites
  • Execute test scripts
  • Build result reports
  • Find possible bugs or performance issues
Choosing your Automation Testing Tool

The strategy to adopt test automation should clearly define when to opt for automation, its scope, and the selection of the right kind of tools for execution. When it comes to tools, the top ones to go for are:

  • Cypress
  • Selenium
  • Protractor
  • Appium (Mobile)
Why Cypress?

Cypress is a JavaScript-based testing framework built for the modern web. Cypress helps you create end-to-end tests, integration tests, and unit tests. Cypress takes a different approach compared to other testing frameworks, since it is executed in the same run loop as the application. It also leverages a Node.js server to handle any task that needs to happen outside of the browser. With its ability to understand everything happening inside and outside of the browser, it produces more consistent results.

Key Features of Cypress
  • Automatic Waiting – No need for adding wait and sleep.
  • Spies, Stubs, and Clocks – Verify and control the behaviour of functions, server responses, or timers.
  • Network traffic control and monitoring – Easily control, stub, and test edge cases without involving your server. You can stub network traffic however you like.
  • Consistent Results – The Cypress architecture doesn’t use Selenium or WebDriver, so it is fast and delivers consistent, reliable, flake-free tests.
  • Screenshots and Videos – View screenshots taken automatically on failure, or videos of your entire test suite when run from the CLI.
Azure CI/CD Setup with Cypress

Cypress runs on the following CI providers, among others:

  • Azure DevOps / VSTS CI / TeamFoundation
  • BitBucket
  • CircleCI
  • Docker
  • GitLab
  • Jenkins
  • TravisCI

Azure DevOps – Steps to Integrate Cypress Automation Tests

Pre-Build Testing
  • Install the Node module and run the application in test mode
  • Run the tests
  • Publish the test results

Cypress Containerization
  • Build the Docker container of Cypress
  • Push the image to the container registry
  • Publish the build

Before we get started, here are the basic Cypress commands.

Clean up the old results:
$ rm -rf cypress/reports/

Run the Cypress application with the required spec file (adjust the glob to your spec files):
$ cypress run --spec "cypress/integration/**/*.spec.ts"

Configure the mocha reporter path for publishing test results (these flags are appended to the cypress run command):
--reporter junit --reporter-options 'mochaFile=cypress/reports/test-output-[hash].xml,toConsole=true'

Uninstall the packages when done:
$ npm uninstall cypress-multi-reporters; npm uninstall cypress-promise; npm uninstall cypress

Pre-Build Testing

It is critical to test the application before the build, deployment, or release. Essentially, the process involves regression and smoke testing. And don’t forget the sanity checks before the build is deployed to the staging environment.

Cypress comes in handy for testing Angular/JavaScript applications before they are deployed to a staging or production environment.

Install the Node module and run the application in test mode

Install the required Node module of the application, then run the application in test mode.

$ npm install --save-dev start-server-and-test

$ npx start-server-and-test start http://localhost:4200 cy:run

(start-server-and-test takes the npm script that starts the app, the URL to wait for, and the test script to run once the app is up; cy:run here is an assumed script that invokes cypress run.)

Publish the test results

The results of the Cypress test execution are stored in the specified path and are added to the Azure DevOps test results. Cypress supports the JUnit, Mocha, and Mochawesome test result reporter formats, and it provides options to create customized test results and merge all the test results as well.
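
A minimal sketch of the pre-build test and publish steps in Azure Pipelines YAML; the npm script name and the report path are assumptions based on the reporter configuration shown earlier:

steps:
  - script: |
      npm ci
      npx start-server-and-test start http://localhost:4200 cy:run   # cy:run is an assumed npm script wrapping "cypress run"
    displayName: Run app and execute Cypress tests

  - task: PublishTestResults@2
    displayName: Publish Cypress JUnit results
    condition: always()            # publish results even when tests fail
    inputs:
      testResultsFormat: JUnit
      testResultsFiles: 'cypress/reports/test-output-*.xml'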

Cypress Containerization

Cypress supports docker containerization and that makes it easy to set it up in a cluster environment like AKS. The Cypress base images are available at the link below.

https://github.com/cypress-io/cypress-docker-images

Copy the package.json and UI source code to the app folder and run the Cypress tests. The following commands run the Docker container and execute the tests.

steps:
  - script: |
      # container/image names here are the article's placeholders
      docker run -d -it --name cypressName $(cypressImageName):cypressImageTag bash
      docker commit -p cypressName cypressName:cypressImageTag
      docker stop cypressName
      docker rm -f cypressName
    displayName: Build Cypress container

  - script: docker tag cypressName:cypressImageTag $(azureContainerRegistry)/acrImageName:$(Build.BuildId)
    displayName: Tag Cypress image

  - task: Docker@1
    displayName: Push image To Registry
    inputs:
      command: push
      azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
      azureContainerRegistry: $(azureContainerRegistry)
      imageName: acrImageName:$(Build.BuildId)

  - script: sudo rm -rf /test-results/*
    displayName: Removing Previous Results

  - task: ShellScript@2
    displayName: 'Bash Script - cypress base image post-deployment'
    inputs:
      scriptPath: ./cypress-deployment.sh
      args: $(azureRegistry) $(cypressImageName) $(azureContainerValue) $(CYPRESS_OPTIONS)
    continueOnError: true

  - task: PublishTestResults@1
    displayName: 'Publish Test Results ./test-results-*.xml'
    inputs:
      testResultsFiles: ./test-results-*.xml   # path inferred from the display name above

cypress-deployment.sh (the post-deployment script referenced above):

docker run -v $systemSourceDirectory:/app/cypress/reports --name vca-arp-ui \
  $cypress_Latestimage npx cypress run $cypressOptions

Now the container should be set up on your local machine and start running your specs.

Cypress is simple and integrates easily with your CI environment. Apart from its browser support, Cypress reduces the effort of manual testing and is relatively fast compared to other automation testing tools.

There are several open-source tools available to manage infrastructure as code, backed by large communities of contributors, with enterprise offerings and good documentation. So why do we choose Terraform, and what makes it stand out? Terraform is used to provision infrastructure and manage infrastructure changes through versioning. It can manage low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries.
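
The day-to-day workflow is deliberately small; a typical cycle against any provider looks like this:

terraform init      # download the providers declared in the configuration
terraform plan      # preview the changes before touching real infrastructure
terraform apply     # create or update resources to match the configuration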

A Good Fit for a Cloud-Agnostic Strategy

Enterprises want to mitigate the availability risk of mission-critical systems in the cloud by spreading their services across multiple cloud providers, and they are always looking for ways to reduce infrastructure cost by moving away from vendor lock-in. Terraform comes as a savior for these use cases: it is cloud-agnostic, allows a single configuration to be used to manage multiple providers, and can even handle cross-cloud dependencies, simplifying management and orchestration.

An Orchestration Tool

Chef, Puppet, Ansible, and SaltStack are all “configuration management” tools designed to install and manage software on existing servers, whereas Terraform is an “orchestration tool” designed to provision the servers themselves, leaving the configuration job to other tools. While there may be some overlap between orchestration and configuration tools, each tool is a better fit for certain use cases. For example, when an infrastructure is dominated by containers and all you need to do is provision a bunch of servers, an orchestration tool like Terraform is typically going to be a better fit than a configuration management tool.

Combat Configuration Drift

While configuration tools are best known for combating configuration drift in the infrastructure, they are mostly used to manage only a subset of a machine's state, which leaves gaps in the infrastructure state. Teams see diminishing returns in closing those gaps relative to what matters most for daily operations. This set of issues can be mitigated by using Terraform along with containers.

For example, if you tell a configuration tool to install a new version of OpenSSL, it will run the software update on your existing servers, and the changes will happen in place. Over time, as you apply more and more updates, each server builds up a unique history of changes, causing configuration drift. If you are using Docker and an orchestration tool such as Terraform, the Docker image is already built and ready for the new servers: new servers are deployed from it and the old servers are decommissioned, with all server state maintained by Terraform. This approach reduces the likelihood of configuration-drift bugs.

Conclusion

Overall, Terraform is an open source and cloud-agnostic orchestration tool with salient features. While it might be a less mature tool compared to other tools in the market, Terraform is still a good candidate to meet a specific set of requirements.

Requirements:
  1. The first step in getting up and running is to install VirtualBox. You can get the appropriate version from www.virtualbox.org.
  2. Next, install Vagrant. The same procedure applies; grab the installer from www.vagrantup.com.

We can now start the cluster setup; for this we need a Vagrantfile that defines the cluster, and with it we can bring everything up.

Alternatively, clone the Git repository below to get a sample Vagrantfile:

https://github.com/coreos/coreos-vagrant
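
For example:

git clone https://github.com/coreos/coreos-vagrant.git
cd coreos-vagrant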

Now that everything is downloaded, we can look at how to configure Vagrant for your CoreOS development environment.

  1. Make copies of the configuration files and rename them: copy user-data.sample to user-data, and copy config.rb.sample to config.rb (see the commands after this list).
  2. Open config.rb so that you can change a few parameters to get Vagrant up and running properly.
     # Size of the CoreOS cluster created by Vagrant
            $num_instances=2
             
  3. You may also want to tweak some other settings in config.rb; CPU and memory settings can be modified as per your needs.
     # Customize VMs
            $vm_gui = false
            $vm_memory = 1024
            $vm_cpus = 1
            $vb_cpuexecutioncap = 100
             
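Step 1 can be done from the shell, assuming the sample filenames shipped in the coreos-vagrant repository:

cp user-data.sample user-data
cp config.rb.sample config.rb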

Then open Git Bash (or your preferred shell) to interact with Vagrant.

Go to your working directory in your shell and issue this command:

 vagrant up
         

You will see Vagrant bring the machines up.

Once the operation is completed, you can verify that everything is up and running properly by logging in to one of the machines and using fleetctl to check the cluster:

 vagrant ssh core-01
        fleetctl list-machines
         

If you see the list of machines you created, then you are finished; you now have a local cluster of CoreOS machines.

A microservices-based architecture introduces agility and flexibility and supports a sustainable DevOps culture, ensuring closer collaboration within businesses; and the good news is that it is actually happening for those who have embraced it.

True, monolithic app architectures have enabled businesses to benefit from IT all along, as a monolith is a single codebase that is simple to develop, test, and run. As they are also based on logical, modular, hexagonal or layered architectures (a presentation layer responsible for handling HTTP requests and responding with either HTML or JSON/XML, a business logic layer, database access, and application integration), they cover and tie together all processes and functions, and close gaps, to an extent.

Despite these ground-level facts, monolithic software, which was instrumental in helping businesses embrace IT in their initial stages and which still exists today, is running into problems. Increasingly complex business operating conditions are largely to blame.

So, how do businesses today address the new pressures caused by digitization, continuous technology disruptions, increased customer awareness and interactions, and sudden regulatory interventions? The answer lies in the agility, flexibility, and scalability of the underlying IT infrastructure: the pillars of rapid adaptability to change.

A monolithic app, even when based on a well-designed three-tier architecture, loses fluidity and turns rigid in the long run. Irrespective of its modularity, modules are still dependent on each other, and any minimal change in one module requires generating and deploying all artifacts in each server pool touched across the distributed environment.

Besides, whenever there is a critical problem, the blame game starts among the UI developers, business logic experts, backend developers, database programmers, and so on, as they are predominantly experts in their own domains but have little knowledge of the other processes. As the complexity of business operations sets in, the agility, flexibility, and scalability of your software are severely tested in a monolithic environment.

Here is where microservices play a huge role: the underlying architecture helps you break your software applications into independent, loosely coupled services that can be deployed and managed solely at that level and need not depend on other services.

For example, if your project needs you to design and manage inventory, sales, shipping, billing, and UI shopping cart modules, you can break each service down into an independently deployable module. Each has its own database, and the monitoring and maintenance of application servers are done independently, as the architecture allows you to decentralize the database, reducing complexity. Besides, it enables continuous delivery/deployment of large, complex applications, which means the technology also evolves along with the business.

The other important aspect is that microservices promote a culture wherein whoever develops a service is also responsible for managing it. This avoids handovers, and the misunderstandings and conflicts that follow them, whenever there is a crisis.

In line with the DevOps concept, microservices enable easy collaboration between the development and operations teams as they embrace and work on a common toolset that establishes common terminology, as well as processes for requirements, dependencies, and problems. There is no denying the fact that DevOps and microservices work better when applied together.

Perhaps that is the reason companies like Netflix and Amazon are embracing the concept of microservices in their products. And for other businesses embracing it, a new environment, where agility, flexibility, and closer collaboration between business and technology become a reality, provides the much-needed edge in these challenging times.
