


Welcome to Cloud View!

This week we look at some of the technologies of the future and insights on container performance & security, connected vehicles, and chatbots.

Latest in Tech

CES 2020

2020 starts with the biggest tech event of the year – CES 2020. The Las Vegas event showcases the technology of the future! Here are the highlights of some of the enterprise technologies on display.

Technology 2020

Want to know the latest technology trends that will impact businesses in 2020? From the empowered edge to human augmentation and more, here are 15 of them.


Industry Insights

Service Mesh

Containers are dominating the software world, but despite their popularity and orchestration software like Kubernetes, they are still challenging to manage. Service meshes have emerged as the answer for improving container performance and security.


Connected Vehicles

The world of vehicle software is heating up! BlackBerry QNX and AWS are targeting automotive OEMs to bring services, personalization, health monitoring, and advanced driver-assistance systems (ADAS) to vehicles.


Chatbots

In the next couple of years, 70% of white-collar workers will chat with conversational AI platforms daily, predicts Gartner. Here is a case study of improving customer service with an intelligent virtual assistant built on IBM Watson.


From CloudIQ

Azure Database for MySQL and Grafana to monitor Azure services

More and more organizations run their business-critical applications on containers using Azure, and that calls for a more intuitive dashboard to monitor and track Azure services.

How to Create and Run Spark Clusters with Qubole using AWS

Qubole is a platform that puts big data on the cloud to power business decisions based on real-time analytics. Here is how you can create and run Spark Clusters with Qubole using AWS.

As more and more organizations run their business-critical applications on containers using Azure, there are new challenges in monitoring and managing them. Of course, there is the native Azure dashboard, but as set-ups grow more elaborate, IT teams feel the need for a more intuitive dashboard to monitor and track Azure services.

The answer: Grafana.

Grafana is an open-source dashboard and graph editor for Graphite, Elasticsearch, OpenTSDB, Prometheus, and InfluxDB. It is a powerful visualization application that deals effectively with large-scale measurement data and time-series data.

As compared to other dashboards, especially the native Azure dashboard, Grafana offers a wider variety of visualization options (graphs, heatmaps, tables, and more) and can collect and collate data from multiple sources. It is designed for evaluating metrics such as system CPU, memory, disk, and I/O utilization.

A Grafana dashboard will help you understand, analyze, monitor, and explore your data with flexible and fast visualization tools.

In this article, we will look at using Azure Database for MySQL and Grafana to monitor Azure services.

Access Requirements

In your Azure subscription, your account must have “Microsoft.Authorization/*/Write” access to assign an AD app to a role. This action is granted through the “Owner” or “User Access Administrator” role; the “Contributor” role does not have the required permissions.

Virtual machine requirements:
  • VM operating system: Linux (Ubuntu 18.04)
  • VM size: Standard D2s v3 (2 vCPUs, 8 GiB memory) is more than enough
  • SSH access: username and password
  • Default port: 3000
  • NSG rule: open an inbound rule in the network security group with limited access to ports 3000 (Grafana) and 22 (SSH)
  • Assign a static public IP address to the VM (a CLI sketch of these last two steps follows this list)
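If you prefer the CLI, here is a minimal sketch of the NSG rule and static IP steps using the Azure CLI; the resource group, NSG, and IP names (myRG, grafana-vm-nsg, grafana-vm-ip) are placeholders for your own.

# Inbound rule for Grafana (3000) and SSH (22), restricted to a known address range.
az network nsg rule create --resource-group myRG --nsg-name grafana-vm-nsg \
  --name allow-grafana-ssh --priority 1000 --direction Inbound --access Allow \
  --protocol Tcp --source-address-prefixes <your-ip-cidr> \
  --destination-port-ranges 3000 22

# Make the VM's public IP static so the Grafana URL doesn't change.
az network public-ip update --resource-group myRG --name grafana-vm-ip \
  --allocation-method Static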

MySQL Creation and Linking to Grafana

1. Create an Azure Database for MySQL server from the Azure portal.

2. Select the resource group, provide Server name, admin username, password, confirm password. Take a note of the password; it is used several times throughout the set-up.

3. To select compute and storage:

a. There are three pricing tiers (choose Basic):

  • Basic
  • General Purpose
  • Memory Optimized

b. Select the appropriate sizes:

  • Compute generation: Gen 5
  • vCore: 1
  • Storage: 5 GB
  • Auto-growth
  • Backup retention period
  • Locally redundant / geo-redundant backup

For Basic compute and storage, the maximum is 2 vCores and 1024 GB of storage. Choose as per your needs.

4. Then click Review+Create.
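Equivalently, the server can be created with the Azure CLI; here is a sketch with placeholder names (myRG, mygrafanamysql) matching the Basic, Gen 5, 1 vCore, 5 GB selection above (note that --storage-size is in MB):

az mysql server create --resource-group myRG --name mygrafanamysql \
  --location <region> --admin-user myadmin --admin-password <password> \
  --sku-name B_Gen5_1 --storage-size 5120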

5. Once the Azure Database for MySQL server is deployed, go to connection security and make the following changes:

  • Add a client IP.
  • Set “Allow access to Azure services” to ON (a CLI sketch of both rules follows).
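With the Azure CLI, the same two firewall changes look roughly like this; the special 0.0.0.0 rule is how “Allow access to Azure services” is represented:

az mysql server firewall-rule create -g myRG -s mygrafanamysql \
  -n allow-client-ip --start-ip-address <your-client-ip> --end-ip-address <your-client-ip>
az mysql server firewall-rule create -g myRG -s mygrafanamysql \
  -n allow-azure-services --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0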

6. Connect to the MySQL server in MySQL Workbench using the server admin login name and password, and create a new query tab. (You can use any tool to connect to MySQL.)

7. Run the following command in the query tab (a mysql CLI equivalent follows):

  • CREATE DATABASE grafana;
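If you would rather stay on the command line, the same step can be run with the mysql client; note the <user>@<servername> login format that Azure Database for MySQL expects (placeholders throughout):

mysql -h <servername>.mysql.database.azure.com \
      -u <admin-login>@<servername> -p \
      -e "CREATE DATABASE grafana;"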

8. That completes the MySQL server-side configuration. Next, we need to pass the MySQL server details to the Docker container running Grafana.

9. Log in to the VM that will run Grafana using appropriate SSH credentials (password or access keys).

10. Note the following values; they will be saved as environment variables in an environment list file.

Type     = mysql
Host     = <servername>:3306 (the MySQL server name created earlier)
Name     = grafana (the database created and granted access in the earlier steps)
User     = <server admin login name>
Password = <server login password>

11. Save the values above in a file named env.list.
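The grafana/grafana image reads its settings from GF_<SECTION>_<KEY> environment variables, so a plausible env.list for the values above (server name and credentials are placeholders) would be:

GF_DATABASE_TYPE=mysql
GF_DATABASE_HOST=<servername>.mysql.database.azure.com:3306
GF_DATABASE_NAME=grafana
GF_DATABASE_USER=<admin-login>@<servername>
GF_DATABASE_PASSWORD=<server-login-password>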

Installing Grafana as a Docker container and its required plugins

1. Log in to the server using appropriate credentials.

2. Get updates using: sudo apt-get update

3. Install docker using the command: sudo apt install docker.io

4. Enable and start docker,

  • sudo systemctl start docker
  • sudo systemctl enable docker

5. Verify the installation using the command:

  • docker --version

6. Now log in as root using the command:

  • sudo su

7. Pull the Grafana image; this needs an Internet connection, as the image is downloaded from Docker Hub:

  • docker pull grafana/grafana

8. Run the image with the saved environment variables:

  • docker run -d --name=grafana -p 3000:3000 --env-file ./env.list grafana/grafana (mapping host port 3000, which the NSG rule opened, to Grafana's container port 3000)

9. Verify the container is running using the docker ps command.

10. The next step is to install the plugins for Grafana that will be used in setting up the dashboard. We need to log in to the previously created container to install these plugins.

11. Now create a shell inside the container using,

  • docker exec -it grafana /bin/bash

12. The prompt changes, indicating you now have a shell inside the container.

13. By default, Grafana ships with a limited number of panel plugins; to use more visualizations, we can install plugins manually. Copy the plugin installation commands listed below and run them one by one, or use the loop shown after the list to install everything at once.

  • grafana-cli plugins install michaeldmoore-annunciator-panel
  • grafana-cli plugins install grafana-piechart-panel
  • grafana-cli plugins install farski-blendstat-panel
  • grafana-cli plugins install michaeldmoore-multistat-panel
  • grafana-cli plugins install grafana-polystat-panel
  • grafana-cli plugins install flant-statusmap-panel
  • grafana-cli plugins install grafana-clock-panel
  • grafana-cli plugins install neocat-cal-heatmap-panel
  • grafana-cli plugins install briangann-gauge-panel
  • grafana-cli plugins install natel-plotly-panel
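To install them all at once, a small shell loop inside the container does the job:

for p in michaeldmoore-annunciator-panel grafana-piechart-panel farski-blendstat-panel \
         michaeldmoore-multistat-panel grafana-polystat-panel flant-statusmap-panel \
         grafana-clock-panel neocat-cal-heatmap-panel briangann-gauge-panel natel-plotly-panel
do
    grafana-cli plugins install "$p"
done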

14. Once the plugins are installed, verify the installation in the /var/lib/grafana/plugins directory:

  • cd /var/lib/grafana/plugins
  • ls (lists the installed plugins)

15. Now exit the container with the command: exit

16. Now restart the container using:

  • docker container restart grafana (here “grafana” is the container name created earlier, which can be found with the docker ps command)

Linking Azure Monitoring Tools

Service Principal

  • Register an app in Azure Active Directory (AD).
  • Create a client secret in the registered app.
  • Go to Subscription → IAM → search for the app registration and grant the registered app the “Reader” role (a CLI sketch follows).
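All three steps can also be collapsed into a single Azure CLI command; in this sketch the service principal name (grafana-monitoring) is a placeholder:

az ad sp create-for-rbac --name grafana-monitoring \
  --role Reader --scopes /subscriptions/<subscription-id>

The appId, password, and tenant fields in the output map to the client ID, client secret, and tenant ID that Grafana asks for below.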

Applying the service principal to Grafana:

  • Go to the Grafana UI using the public IP address followed by the port number,
    i.e., (IP address):3000, for example: 13.25.49.164:3000
  • Now add a data source: choose the Azure Monitor data source and click Select.
  • Input the tenant ID, client ID, and client secret.
  • Then provide the details for the Log Analytics workspace and Application Insights.
  • If the provided details are correct, a success message is displayed.

Welcome to Cloud View!

We hope you had a joyful and fun holiday! Now, with the festivities behind us, it’s time for business again!

Let’s start with what IT Leaders are planning for 2020.

Leader’s Speak

CIOs’ plans for 2020

What are the CIOs thinking and planning for the coming year? It seems finding talent, dealing with rising security problems, and prioritizing the acquisition of new technologies are some of the topics occupying the C-suite.

IT Industry in 2020

Joel Friedman, CTO of Rackspace, offers his take on what 2020 will hold for the IT industry. According to Joel, hybrid and multi-cloud, SaaS, and security problems will all grow.

A decade since the launch of Azure

2020 marks a decade since the launch of Azure. Ever wondered what the founders think about their creation? Here’s a short interview with Microsoft’s Yousef Khalidi and Hoi Vo, key members of the original Azure ‘dream team’.


Industry insights

AI Solutions

As a digital-first service provider, we at CloudIQ help implement AI solutions across organizations. Impact like this is what we aim for! This is what the future of AI looks like.


Top Big Data companies to watch for in 2020

Big Data is going to dominate many a boardroom in the coming few years. We start the year by tracking some promising companies that will define the coming year with their next-generation data management, data science, and machine learning technology.


From CloudIQ

Deploying a Pod containing three applications using Jenkins CI/CD pipeline and updating them selectively

A Kubernetes pod is a layer of abstraction wrapped around containers to group them together for resource allocation and efficient management. Here is how to deploy a pod containing three applications using a Jenkins CI/CD pipeline and update them selectively.

Provisioning Cloud Infrastructure using AWS CloudFormation Templates

Spend less time managing cloud infrastructure and focus on building your application, thanks to AWS CloudFormation templates. Here is a quick start guide to creating the templates for provisioning cloud infrastructure.

A Kubernetes pod – incidentally, some say it is named after a whale pod because the docker logo is a whale – is the foundational unit of execution in a K8s ecosystem. While docker is the most common container runtime, pods are container agnostic and support other containers as well.

Simply put, a K8s pod is a layer of abstraction wrapped around containers to group them together to allocate resources and to manage them efficiently.

Continuous integration and delivery or CI/CD is a critical part of DevOps ecosystems, especially for cloud-native applications. DevOps teams frequently use Jenkins CI/CD pipelines to increase the speed and quality of collaborated software development ecosystems by adding automation. Thanks to Helm, deploying Jenkins server to K8s is quick and easy. The difficult bit is building the pipeline.

Here is a post that describes how to deploy a pod containing three applications using a Jenkins CI/CD pipeline and update them selectively.

Task at Hand:

Use a Jenkins pipeline to build a spring-boot application and generate a jar file, dockerize the application to create a docker image, push the image to a docker repository, and pull the image into the pod to create containers. The pod should contain three containers, one for each of the three applications. Upon a git commit, only the container whose application changed must be updated (a rolling update).

Steps
  1. Create a pipeline using a Groovy script to clone the respective git repo, build the project using maven, build the docker image, push it to Docker Hub, and pull the image to run a container in the pod.
  2. Repeat the steps for all three applications in separate stages. Make sure to create a separate directory in each stage to prevent conflicts when using similar files; this also clones the different git repos into different folders to avoid confusion.
  3. Here is the Jenkinsfile/Pipeline script to perform the above task:
pipeline {
    agent any
    stages {
        stage('Build1') {
            steps {
                dir('app1') {
                    script {
                        git 'https://github.com/cloud/simple-spring.git'
                        sh 'mvn clean install'
                        app = docker.build("cloud007/simple-spring")
                        docker.withRegistry("https://registry.hub.docker.com", "dockerhub") {
                            // dockerImage.push()
                            app.push("latest")
                        }
                    }
                }
            }
        }
        stage('Build2') {
            steps {
                dir('app2') {
                    script {
                        git 'https://github.com/cloud/simple-spring-2.git'
                        sh 'mvn clean install'
                        app = docker.build("cloud007/simple-spring-2")
                        docker.withRegistry("https://registry.hub.docker.com", "dockerhub") {
                            // dockerImage.push()
                            app.push("latest")
                        }
                    }
                }
            }
        }
        stage('Build3') {
            steps {
                dir('app3') {
                    script {
                        git 'https://github.com/cloud/simple-spring-3.git'
                        sh 'mvn clean install'
                        app = docker.build("cloud007/simple-spring-3")
                        docker.withRegistry("https://registry.hub.docker.com", "dockerhub") {
                            // dockerImage.push()
                            app.push("latest")
                        }
                    }
                }
            }
        }
        stage('Orchestrate') {
            steps {
                script {
                    sh 'kubectl apply -f demo.yaml'
                }
            }
        }
    }
}

4. Make sure Docker is properly configured: expose dockerd on port 4243, and change the permissions on /var/run/docker.sock so Jenkins can run docker commands. One possible configuration is sketched below.
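A sketch of one way to do this on Ubuntu with systemd; the port and socket path come from this article, but adjust the paths for your distro:

# Expose dockerd on TCP port 4243 in addition to the local socket.
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:4243
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker

# Let the jenkins user use the local Docker socket.
sudo usermod -aG docker jenkins
sudo chown root:docker /var/run/docker.sock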

5. Integrating Kubernetes with Jenkins can be done using two plugins:

  • Kubernetes plugin: When using this plugin, we configure the credentials to use our local/Azure cluster and specify the container templates for the containers to be created in the pipeline. But since all the tasks must run in containers, it is a somewhat confusing approach. A better approach is to use the kubernetes-cli plugin.

Refer: https://github.com/jenkinsci/kubernetes-plugin

  • Kubernetes-cli plugin: It provides withKubeConfig() for pipeline support, which uses the configured credentials to connect to our cluster and run kubectl commands. However, when running the pipeline, the kubeconfig wasn’t recognized for some reason and kept giving the error ‘file not found’.

Refer: https://github.com/jenkinsci/kubernetes-cli-plugin/blob/master/README.md

Hence, we installed kubectl on the Jenkins host, configured the cluster manually, and ran shell commands from the Jenkins pipeline; there, Jenkins was recognized as an anonymous user and was only granted get access, so it couldn’t create pods/deployments.

Here are some common problems faced during this process and the troubleshooting procedure.

  • Configuring Jenkins to use the local minikube cluster:
    We had trouble using both plugins to properly configure Jenkins to create deployments as required. Using shell commands to run kubectl was also unsuccessful, since Jenkins was recognized as an anonymous user, and authorization prevented anonymous users from creating deployments.
  • Permission for /var/run/docker.sock is reset to root after every restart, so make sure to change it so Jenkins can continue to use docker commands: sudo chown jenkins /var/run/docker.sock
  • Installing minikube:
    i) Start the minikube cluster using hyperv as the driver, with a virtual switch created beforehand:
    minikube start --vm-driver=hyperv --hyperv-virtual-switch="Primary Virtual Switch"

    ii) Installation takes a lot of time, so wait patiently; eventually, the cluster will get configured and started. If there is a problem with the apiserver, stop the machine after SSHing into the minikube VM:
    minikube ssh
    sudo poweroff

    iii) Then start minikube the same way.

Here are some suggested best practices.

Maintaining the git repo:
  • Branching must be used when updating the source code or adding a particular file to the repository. Suppose you want to add a readme: create a new branch from master, create the readme, commit it, and then merge the branch with the origin.
  • Similarly, for adding test files or changing source code, create a new branch for the testing/modification, update and commit the code, and merge with master when finished. This allows an easy rollback to the original master if you run into errors with the new files and prevents conflicts with the code in master.
  • Commit only after you have tested the code properly; never commit incomplete code.
  • Write good commit messages to keep track of the changes you have made.
Versioning:
  • Follow the versioning convention X.Y.Z, where X is incremented for a major update/feature, Y for minor updates/features, and Z for patches/bug fixes.
  • Avoid version lock, where too many packages depend on an exact version; in that scenario, the package can only be updated after releasing new versions of every dependent package.
Docker repo:
  • Use unique tags for deployment/pushing images to the repository (see the sketch after this list).
  • Always use stable tags for building images, but never deploy/pull images using stable tags. Stable tags are the ones that do not roll over to future versions but stay bound to the current version (tag).
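A minimal illustration of the unique-tag practice, reusing the image name from the pipeline above and the short git commit hash as the tag:

# Build and push an immutable, uniquely tagged image for each commit.
TAG=$(git rev-parse --short HEAD)
docker build -t cloud007/simple-spring:"$TAG" .
docker push cloud007/simple-spring:"$TAG"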

Welcome to Cloud View!

The last two days of 2019 feel a bit like a waiting period – it’s a tad early to start celebrating, but it’s hard to plan anything until the new year celebrations are truly behind us and we are back at work. We think a good way to use this time would be to indulge in a bit of nostalgia and look back at how the technology landscape evolved in 2019.

Recap 2019

Kubernetes Podcast in 2019

If you deal with Kubernetes, then we are sure you follow the Kubernetes Podcast. Here is the roundup of the year’s best! Enjoy!

CRN’s 2019 Year in Review

LOVE ‘Top-10’ listicles? Here is a mammoth list of technology top 10s by CRN. From top 10 cybersecurity stories to the top 10 mobile apps of 2019 – it’s all here!

Top 10 Smarter with Gartner Articles for 2019

No report, survey, or listicle is complete without Gartner. Here are the 10 best “Smarter with Gartner” articles from 2019.


Industry insights

What’s New with AWS

A quick video to recap the latest AWS updates and announcements (there are many!) and a web link, too, in case you want to explore categories in more detail.


IBM Z Open Editor Support for LSP

Any programmer can attest to the power – and the usefulness – of Language Server Protocol (LSP). Now its integration with IBM’s Z Open Editor opens a whole new level for coders across the globe.


From CloudIQ

Creating AWS Security Groups for Kubernetes

We are going to discuss creating security groups in AWS for Kubernetes. The goal is to set up a Kubernetes cluster on AWS EC2 after you have provisioned your virtual machines.

Deploy a Spring-Boot Application in Kubernetes Pod using Jenkins CI/CD Pipeline

Kubernetes has become the platform of choice for container orchestration. Here is a walkthrough of how a Jenkins CI/CD pipeline is used to deploy a spring-boot application in K8s.

Signing off now and will see you all in the new year. Party hard and bring in 2020 in style.

Kubernetes has become the platform of choice for container orchestration, delivering maximum operational efficiency. To understand how K8s works, one must understand its most basic execution unit – a pod.

Kubernetes doesn’t run containers directly, rather through a higher-level structure called a pod. A pod has an application’s container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run.

Pods can hold multiple containers or just one. Every container in a pod shares the same resources and network. A pod is used as the replication unit in Kubernetes; hence, it is advisable not to put too many containers in one pod, as future scaling up would lead to unnecessary and expensive duplication.

To maximize the ease and speed of Kubernetes, DevOps teams like to add automation using Jenkins CI/CD pipelines. Not only does this make the entire process of building, testing, and deploying software go faster, but it also minimizes human error. Here is how a Jenkins CI/CD pipeline is used to deploy a spring-boot application in K8s.

Task at Hand:

Create a Jenkins pipeline to dockerize a spring application: build the docker image, push it to the Docker Hub repo, and then pull the image into an AKS cluster to run it in a pod.

Complete repository:

All the files required for this task are available in this repository:
https://github.com/saiachyuth5/simple-spring

Prerequisites:

A spring-boot application and a Dockerfile to containerize it.

STEPS:

1. Install Jenkins:
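The installation itself is standard; here is a minimal sketch for Ubuntu, assuming the stable Jenkins apt repository that was current at the time:

wget -qO - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
echo "deb https://pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list
sudo apt-get update
sudo apt-get install -y openjdk-8-jdk jenkins
sudo systemctl enable --now jenkins    # UI comes up on http://<host>:8080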

2. Connect the host docker daemon to Jenkins:

  • Run the command sudo chown -R jenkins:docker <file/folder> to allow Jenkins to access docker.
  • Go to Manage Jenkins in the browser > Configure System and scroll to the bottom.
  • Click the ‘Add cloud’ dropdown and add Docker. Add the docker host URI in the format tcp://hostip:4243.
  • Click ‘Verify connection’ to check your connection. If everything was done right, the docker version is displayed.

3. Adding global credentials:

  • Go to Credentials on the Jenkins dashboard, click global credentials, and then Add Credentials.
  • Select the kind as Microsoft Azure Service Principal and enter the required IDs; similarly, save the docker credentials under the kind ‘Username with password’.

4. Create the Jenkinsfile:

  • Refer to the official Jenkins documentation for the pipeline syntax, usage of Jenkinsfile, and simple examples.
  • Below is the Jenkinsfile used for this task.

Jenkinsfile:

NOTE: While this example uses actual IDs to log in to Azure, it is recommended to use Jenkins credentials instead of hard-coding these parameters.

pipeline {
    environment {
        registryCredential = "docker"
    }
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    sh 'mvn clean install'
                }
            }
        }
        stage('Load') {
            steps {
                script {
                    app = docker.build("cloud007/simple-spring")
                }
            }
        }
        stage('Deploy') {
            steps {
                script {
                    docker.withRegistry("https://registry.hub.docker.com", registryCredential) {
                        // dockerImage.push()
                        app.push("latest")
                    }
                }
            }
        }
        stage('Deploy to ACS') {
            steps {
                withCredentials([azureServicePrincipal('dbb6d63b-41ab-4e71-b9ed-32b3be06eeb8')]) {
                    sh 'echo "logging in"'
                    sh 'az login --service-principal -u **************************** -p ********************************* -t **********************************'
                    sh 'az account set -s ****************************'
                    sh 'az aks get-credentials --resource-group ilink --name mycluster'
                    sh 'kubectl apply -f sample.yaml'
                }
            }
        }
    }
}

5. Create the Jenkins project:

  • Select New Item > Pipeline and click OK.
  • Scroll to the bottom and select the definition as ‘Pipeline script from SCM’.
  • Select the SCM as git, enter the git repo to be used, and give the path to the Jenkinsfile in ‘Script Path’.
  • Click Apply, and the Jenkins project has now been created.
  • Go to the project page and click ‘Build Now’ to build the project.

6. Create and connect to an Azure Kubernetes cluster:

  • Create an Azure Kubernetes cluster with 1-3 nodes and add its credentials to the global credentials in Jenkins.
  • Install the Azure CLI on the Jenkins host machine.
  • Use shell commands in the pipeline to log in, get credentials, and then create a pod using the required yaml file (see the sketch below).
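For reference, a rough CLI sketch of this step, reusing the resource group (ilink) and cluster name (mycluster) from the pipeline above:

az aks create --resource-group ilink --name mycluster \
  --node-count 2 --generate-ssh-keys
az aks get-credentials --resource-group ilink --name mycluster
kubectl apply -f sample.yaml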

YAML used:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-helloworld
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-helloworld
  template:
    metadata:
      labels:
        app: spring-helloworld
    spec:
      containers:
      - name: spring-helloworld
        image: cloud007/simple-spring:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80

Here are some common problems faced during this process and the troubleshooting procedure.

  • Corrupt Jenkins exec file:
    Solved by doing an apt purge and then reinstalling Jenkins with apt.
  • Using a 32-bit VM:
    kubectl is not supported on a 32-bit machine, so make sure the system is 64-bit.
  • Installing the Azure CLI manually makes it inaccessible to non-root users:
    A manual install placed the Azure CLI in default directories that non-root users, and hence Jenkins, could not access. So, it is recommended to install the Azure CLI using apt.
  • Installing minikube as a local cluster instead of AKS:
    VirtualBox does not support nested VT-x virtualization and hence cannot run minikube. It is recommended to enable Hyper-V and use it as the driver to run minikube.
  • Naming the stages in the Jenkinsfile:
    For some reason, Jenkins did not accept multi-word stage names such as ‘Build Docker Image’. Use a single word like ‘Build’, ‘Load’, etc.
  • Jenkins stopped building the project when the system ran out of memory:
    Make sure the host has at least 20 GB free on the hard disk before starting the project.
  • Jenkins couldn’t execute docker commands:
    Try the command sudo usermod -aG docker jenkins
  • Spring app not accessible from an external IP:
    Create a new service of type LoadBalancer and assign it to the pod; the application then becomes accessible from the service’s external IP (a sketch follows).
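One quick way to create such a service, assuming the spring-helloworld deployment from the YAML above:

kubectl expose deployment spring-helloworld --type=LoadBalancer \
  --name=spring-helloworld-svc --port=80
kubectl get svc spring-helloworld-svc    # wait for an EXTERNAL-IP to be assigned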

In an upcoming article, we will show you how to deploy a pod containing three applications using a Jenkins CI/CD pipeline and update them selectively.

CloudIQ is a leading Cloud Consulting and Solutions firm that helps businesses solve today’s problems and plan the enterprise of tomorrow by integrating intelligent cloud solutions. We help you leverage the technologies that make your people more productive, your infrastructure more intelligent, and your business more profitable. 

US

626 120th Ave NE, B102, Bellevue,

WA, 98005.

 sales@cloudiqtech.com

INDIA

Chennai One IT SEZ,

Module No:5-C, Phase ll, 2nd Floor, North Block, Pallavaram-Thoraipakkam 200 ft road, Thoraipakkam, Chennai – 600097


© 2019 CloudIQ Technologies. All rights reserved.
