Welcome to Cloud View! This week we look at some of the technologies of the future and insights on container performance & security, connected vehicles, and chatbots.

Latest in Tech Industry Insights

Service Mesh
Containers are dominating the software world, but despite their popularity and orchestration software like Kubernetes, they are still challenging to manage. Service meshes come as the answer to improving container performance and security.
The world of vehicle software is heating up! BlackBerry QNX and AWS are targeting automotive OEMs to bring services, personalization, health monitoring, and advanced driver assistance (ADAS) to vehicles.
Gartner predicts that in the next couple of years, 70% of white-collar workers will chat with conversational AI platforms daily. Here is a case study of improving customer service with an intelligent virtual assistant using IBM Watson.
As more and more organizations run their business-critical applications on containers using Azure, new challenges arise in monitoring and managing them. There is, of course, the Azure dashboard, but with elaborate set-ups to keep track of, IT teams feel the need for a more intuitive dashboard to monitor and track Azure services.
The answer: Grafana.
Grafana is an open-source dashboard and graph editor for Graphite, Elasticsearch, OpenTSDB, Prometheus, and InfluxDB. It is a powerful visualization application that deals effectively with large-scale measurement data and time-series data.
As compared to other dashboards, especially the native Azure dashboard, Grafana offers a wider variety of visualization options (graphs, heatmaps, tables, and more) and can collect and collate data from multiple sources. It is designed for evaluating metrics such as system CPU, memory, disk, and I/O utilization.
A Grafana dashboard will help you understand, analyze, monitor, and explore your data with flexible and fast visualization tools.
In this article, we will look at using Azure Database for MySQL and Grafana to monitor Azure services.
In your Azure subscription, your account must have “Microsoft.Authorization/*/Write” access to assign an AD app to a role. This access is granted through the “Owner” or “User Access Administrator” roles; the “Contributor” role does not have the required permissions.
Virtual machine requirements:
- VM operating system: Linux (Ubuntu 18.04)
- VM size: Standard D2s v3 (2 vCPUs, 8 GiB memory) is more than enough
- SSH access: username and password
- Default port: 3000
- NSG rule: open an inbound rule in the network security group with limited access to port 3000 and to port 22 for SSH
- Assign a static public IP address to the VM
MySQL Creation and Linking to Grafana
1. Create an Azure Database for MySQL server from the Azure portal.
2. Select the resource group and provide the server name, admin username, password, and password confirmation. Take note of the password; it is used several times throughout the set-up.
3. Select the compute and storage options:
a. There are three pricing tiers; choose Basic.
b. Select the appropriate sizes:
- Compute generation: Gen 5
- Backup retention period: as required
- Redundancy: locally redundant or geo-redundant
For Basic compute and storage, the maximum is 2 vCores and 1024 GB of storage. Choose as per your needs.
4. Then click Review+Create
5. Once the Azure Database for MySQL server is deployed, go to Connection security and make the following changes:
- Add a client IP.
- Set “Allow access to Azure services” to ON.
6. Connect to the MySQL server using the server admin login name and password in MySQL Workbench and create a new query tab. (You can use any tool to connect to MySQL.)
7. Run the following command in the query tab:
CREATE DATABASE grafana;
8. The MySQL server-side configuration is now complete. Next, we need to provide the MySQL server configuration details to the Docker container running Grafana.
9. Log in to the VM running Grafana using appropriate SSH credentials (password or access keys).
10. Note the following values and save them as environment variables in an environment list:
Type = mysql
Host = <servername>:3306 (the MySQL server name created earlier)
Name = grafana (the database created and granted access in the earlier steps)
User = <server admin login name>
Password = <server login password>
11. Save the values mentioned above as a list in a file (for example, env.list).
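Grafana can read its database settings from environment variables named GF_DATABASE_*, so the values above translate into an env.list file along these lines (the server name and credentials below are placeholders, not values from the original post):

```shell
# Write the Grafana database settings to env.list.
# <servername>, <admin-user>, and <password> are placeholders --
# substitute the values from your own Azure Database for MySQL server.
cat > env.list <<'EOF'
GF_DATABASE_TYPE=mysql
GF_DATABASE_HOST=<servername>.mysql.database.azure.com:3306
GF_DATABASE_NAME=grafana
GF_DATABASE_USER=<admin-user>@<servername>
GF_DATABASE_PASSWORD=<password>
EOF

# Sanity check: all five database keys are present.
grep -c '^GF_DATABASE_' env.list
```

The file is later passed to the Grafana container via docker run's --env-file option.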
Installing Grafana as a Docker container and its required plugins
1. Login to the server using appropriate credentials,
2. Get updates using, sudo apt-get update
3. Install docker using the command, sudo apt install docker.io
4. Enable and start docker,
sudo systemctl start docker
sudo systemctl enable docker
5. Verify the installation using the command,
docker --version
6. Now log in as root (for example, with sudo su).
7. Pull the Grafana image; this needs an Internet connection, as the image is downloaded from the public Docker Hub:
docker pull grafana/grafana
8. Run the image with the saved environment variables,
docker run -d --name=grafana -p 3000:3000 --env-file ./env.list grafana/grafana
9. Verify the container installation using the “docker ps” command.
10. The next step is to install the plugins for Grafana, which will be used in setting up the dashboard. We need to log in to the container created previously to install these plugins.
11. Now create a shell inside the container using,
docker exec -it grafana /bin/bash
12. By default, the Grafana dashboard has a limited number of panel plugins; to use more visualizations, we can install plugins manually. Copy the plugin installation commands listed below and run them one by one or all at once.
We hope you had a joyful and fun holiday! Now, with the festivities behind us, it’s time for business again!
Let’s start with what IT Leaders are planning
CIOs’ plans for 2020
What are the CIOs thinking and planning for the coming year? It seems finding talent, dealing with rising security problems, and prioritizing the acquisition of new technologies are some of the topics occupying the C-suite.
2020 marks a decade since the launch of Azure. Ever wondered what the founders think about their creation? Here’s a short interview with Microsoft’s Yousef Khalidi and Hoi Vo, key members of the original Azure ‘dream team’.
Big Data is going to dominate many a boardroom in the coming few years. We start the year by tracking some promising companies that will define the coming year with their next-generation data management, data science, and machine learning technology.
Deploying a Pod containing three applications using Jenkins CI/CD pipeline and updating them selectively
A Kubernetes pod is a layer of abstraction wrapped around containers to group them together for resource allocation and efficient management. Here is how to deploy a pod containing three applications using a Jenkins CI/CD pipeline and update them selectively.
Provisioning Cloud Infrastructure using AWS CloudFormation Templates
Spend less time managing cloud infrastructure and focus on building your application, thanks to AWS CloudFormation templates. Here is a quick start guide to creating the templates for provisioning cloud infrastructure.
A Kubernetes pod – incidentally, some say it is named after a whale pod because the Docker logo is a whale – is the foundational unit of execution in a K8s ecosystem. While Docker is the most common container runtime, pods are container agnostic and support other runtimes as well.
Simply put, a K8s pod is a layer of abstraction wrapped around containers to group them together to allocate resources and to manage them efficiently.
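As an illustration of that grouping (image names and the command below are generic examples, not from the original post), a single pod wrapping two containers that share the same network and resources looks like this:

```yaml
# pod.yaml -- a minimal sketch of one pod holding two containers;
# both share the pod's network namespace and can talk over localhost
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: web
    image: nginx:1.21
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox:1.35
    command: ["sh", "-c", "while true; do echo alive; sleep 60; done"]
```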
Continuous integration and delivery, or CI/CD, is a critical part of DevOps ecosystems, especially for cloud-native applications. DevOps teams frequently use Jenkins CI/CD pipelines to add automation, increasing the speed and quality of collaborative software development. Thanks to Helm, deploying a Jenkins server to K8s is quick and easy. The difficult bit is building the pipeline.
Here is a post that describes how to deploy a pod containing three applications using a Jenkins CI/CD pipeline and update them selectively.
Task at Hand:
Use a Jenkins pipeline to build a Spring Boot application to generate a jar file, dockerize the application to create a Docker image, push the image to a Docker repository, and pull the image into the pod to create containers. The pod should contain three containers for the three applications, respectively. Upon a git commit, only the container for which there is a change must be updated (rolling update).
Create a pipeline using a Groovy script to clone the respective Git repo, build the project using Maven, build the Docker images, push them to Docker Hub, and pull these images to run containers in the pod.
Repeat the steps for all three applications in separate stages. Make sure to create a separate directory in each stage to prevent conflicts when using similar files. This also clones the different Git repos into different folders to avoid confusion.
Here is the Jenkinsfile/Pipeline script to perform the above task:
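A minimal declarative-pipeline sketch of those stages follows (this is not the original script; the repo URL, image names, and deployment/container names are placeholders):

```groovy
// Jenkinsfile sketch -- one stage shown for the first application;
// stages for apps 2 and 3 follow the same pattern in their own directories.
pipeline {
    agent any
    stages {
        stage('App1') {
            steps {
                // separate directory per app to avoid file conflicts
                dir('app1') {
                    git url: 'https://github.com/<user>/app1.git'   // placeholder repo
                    sh 'mvn clean package'                          // build the jar
                    // BUILD_NUMBER gives each image a unique tag
                    sh 'docker build -t <dockerhub-user>/app1:${BUILD_NUMBER} .'
                    sh 'docker push <dockerhub-user>/app1:${BUILD_NUMBER}'
                    // rolling update: only this app's container is replaced
                    sh 'kubectl set image deployment/three-apps app1=<dockerhub-user>/app1:${BUILD_NUMBER}'
                }
            }
        }
        // stage('App2') and stage('App3') repeat the pattern
    }
}
```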
4. Make sure to properly configure Docker: expose dockerd on port 4243, and then allow Jenkins to use Docker commands by changing the permissions on /var/run/docker.sock.
5. Coming to integrating Kubernetes with Jenkins, it can be done using two plugins:
Kubernetes plugin: When using this plugin, we configure the credentials to use our local cluster/Azure cluster and specify the container templates for the containers to be created in the pipeline. But since all the tasks must run in containers, it is a slightly confusing approach. A better approach would be to use the Kubernetes CLI plugin.
Kubernetes CLI plugin: It provides withKubeConfig() for pipeline support, which uses the configured credentials to connect to our cluster and perform kubectl commands. However, when running the pipeline, the kubeconfig wasn’t recognized for some reason and kept giving the error ‘file not found’.
Hence, we installed kubectl on the Jenkins host, configured the cluster manually, and ran shell commands from the Jenkins pipeline, where Jenkins was recognized as an anonymous user and was only granted get access but couldn’t create pods/deployments.
Here are some common problems faced during this process and the troubleshooting procedure.
Configuring Jenkins to use local minikube cluster: We had trouble using both the plugins to properly configure Jenkins to create deployments as required. Using shell commands to run kubectl was also not successful since Jenkins was recognized as an anonymous user, and authorization prevented anonymous users from creating deployments.
Permission for /var/run/docker.sock is reset to root after every restart, so make sure to change it to allow Jenkins to continue to use Docker commands: sudo chown jenkins /var/run/docker.sock
Installing Minikube: i) Started the minikube cluster using Hyper-V as the driver and created a virtual switch: minikube start --vm-driver=hyperv --hyperv-virtual-switch="Primary Virtual Switch"
ii) Installation takes a lot of time, so we have to wait patiently; eventually, the cluster will get configured and started. If there is a problem with the apiserver, stop the machine after SSHing into the minikube VM: minikube ssh, then sudo poweroff
iii) Then start minikube the same way.
Here are some suggested best practices.
Maintaining the git repo:
Branching must be used while updating the source code or adding a particular file to the repository. Suppose you want to add a readme, then create a new branch from master, create the readme and commit it and then merge the branch with the origin.
Similarly, for adding some test files/changing source code – create a new branch for testing/modification, update and commit the code and merge with the master when finished. The purpose of this is to allow easy roll back to the original master if you run into some errors when working with the new files and to prevent any conflict in code with the master.
Commit only after you have tested the code properly, never commit incomplete code.
Write good commit messages to keep track of the changes you have made.
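The branching flow described above can be sketched with plain git commands (the repository and file names are illustrative):

```shell
# Illustrative flow: do new work on a branch, merge to master when tested.
git init demo-repo && cd demo-repo
git config user.email "dev@example.com" && git config user.name "Dev"

echo "app v1" > app.txt                      # initial code on master
git add app.txt && git commit -m "Initial commit"
git branch -M master                         # ensure the branch is named master

git checkout -b add-readme                   # new branch for the readme
echo "Project readme" > README.md
git add README.md && git commit -m "Add README"

git checkout master                          # back to master...
git merge add-readme                         # ...and merge once the change is tested
```

If the readme work had gone wrong, master would have remained untouched and the branch could simply be discarded.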
Follow the versioning convention X.Y.Z where X is incremented for a new major update/feature, Y is incremented for minor updates/minor features and Z for minor patches/bug fixes
Avoid version lock, where too many packages depend on a single pinned version; in that scenario, the package can only be updated after releasing new versions for every dependent package.
Use unique tags for deployment/pushing images to the repository.
Always use stable tags for building images, but never deploy/pull images using stable tags. A stable tag is one whose name stays fixed while the image behind it continues to receive updates (for example, :latest), so the content it points to can change over time.
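As a sketch of the X.Y.Z convention, here is a small helper function (hypothetical, not part of the original post) that bumps each part:

```shell
# bump_version <major|minor|patch> <X.Y.Z>  ->  prints the incremented version
bump_version() {
  local part=$1 version=$2
  local major minor patch
  IFS=. read -r major minor patch <<< "$version"
  case "$part" in
    major) echo "$((major + 1)).0.0" ;;            # new major feature: reset Y and Z
    minor) echo "$major.$((minor + 1)).0" ;;       # minor feature: reset Z
    patch) echo "$major.$minor.$((patch + 1))" ;;  # bug fix
  esac
}

bump_version major 1.4.2   # -> 2.0.0
bump_version minor 1.4.2   # -> 1.5.0
bump_version patch 1.4.2   # -> 1.4.3
```

The resulting version string doubles as a unique image tag, e.g. myrepo/app:1.5.0.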
The last two days of 2019 feel a bit like a waiting period – it’s a tad early to start celebrating, but it’s hard to plan anything until the new year celebrations are truly behind us and we are back at work. We think a good way to use this time would be to indulge in a bit of nostalgia and look back at how the technology landscape evolved in 2019.
Kubernetes Podcast in 2019
If you deal with Kubernetes, then we are sure you follow the Kubernetes Podcast. Here is the roundup of the year’s best! Enjoy!
Kubernetes has become the platform of choice for container orchestration, delivering maximum operational efficiency. To understand how K8s works, one must understand its most basic execution unit – the pod.
Kubernetes doesn’t run containers directly, rather through a higher-level structure called a pod. A pod has an application’s container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run.
Pods can hold multiple containers or just one. Every container in a pod will share the same resources and network. A pod is used as a replication unit in Kubernetes; hence, it is advisable to not add too many containers in one pod. Future scaling up would lead to unnecessary and expensive duplication.
To maximize the ease and speed of Kubernetes, DevOps teams like to add automation using Jenkins CI/CD pipelines. Not only does this make the entire process of building, testing, and deploying software go faster, but it also minimizes human error. Here is how Jenkins CI/CD pipeline is used to deploy a spring boot application in K8s.
Task at Hand:
Create a Jenkins pipeline to dockerize a Spring application, build a Docker image, push it to the Docker Hub repo, and then pull the image into an AKS cluster to run it in a pod.
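Assuming the image was pushed as <dockerhub-user>/spring-app:1.0.0 (a placeholder name), the final pull-and-run step can be sketched as a minimal deployment manifest applied with kubectl apply -f deployment.yaml:

```yaml
# deployment.yaml -- minimal sketch; the image name is a placeholder
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-app
  template:
    metadata:
      labels:
        app: spring-app
    spec:
      containers:
      - name: spring-app
        image: <dockerhub-user>/spring-app:1.0.0
        ports:
        - containerPort: 8080
```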
Here are some common problems faced during this process and the troubleshooting procedure.
Corrupt Jenkins exec file: Solved by doing an apt purge and then reinstalling Jenkins with apt install.
Using a 32-bit VM: kubectl is not supported on a 32-bit machine, so make sure the system is 64-bit.
Installing the Azure CLI manually makes it inaccessible to non-root users: manual installation placed it in default directories that were not accessible by non-root users, and hence by Jenkins. So, it is recommended to install the Azure CLI using apt.
Installing minikube using a local cluster instead of AKS: VirtualBox does not support nested VT-x virtualization and hence cannot run minikube. It is recommended to enable Hyper-V and use it as the driver to run minikube.
Naming the stages in the Jenkinsfile: For some reason, Jenkins did not accept a multi-word stage name such as ‘Build Docker Image’. Use a single word like ‘Build’, ‘Load’, etc.
Jenkins stopped building the project when the system ran out of memory: Make sure the host has at least 20 GB of free disk space before starting the project.
CloudIQ is a leading Cloud Consulting and Solutions firm that helps businesses solve today’s problems and plan the enterprise of tomorrow by integrating intelligent cloud solutions. We help you leverage the technologies that make your people more productive, your infrastructure more intelligent, and your business more profitable.