Amazon Web Services (AWS) is a secure cloud services platform, offering compute power, database storage, content delivery, and other functionalities to help businesses scale and grow.

It gives organizations a secure and robust platform to develop their custom cloud-based solutions and has several unique features that make it one of the most reliable and flexible cloud platforms, such as:

  • Mobile-friendly access through AWS Mobile Hub and AWS Mobile SDK
  • Fully managed purpose-built Databases
  • Serverless cloud functions
  • A range of affordable and scalable storage options
  • Unbeatable security and compliance

Following are some core services offered by AWS:

AWS Core services
  1. An EC2 instance is a virtual server in Amazon’s Elastic Compute Cloud (EC2) for running applications on the AWS infrastructure.
  2. Amazon Elastic Block Store (EBS) is a cloud-based block storage system provided by AWS that is best used for storing persistent data.
  3. Amazon Virtual Private Cloud (Amazon VPC) enables us to launch AWS resources into a virtual network that we have defined. This virtual network closely resembles a traditional network that we would operate in our own data center, with the benefits of using the scalable infrastructure of AWS.
  4. Amazon S3 or Amazon Simple Storage Service is a service offered by Amazon Web Services that provides object storage through a web service interface. Amazon S3 uses the same scalable storage infrastructure that Amazon.com uses to run its global e-commerce network.
  5. AWS security groups (SGs) are associated with EC2 instances and provide security at the protocol and port access level. Each security group — working much the same way as a firewall — contains a set of rules that filter traffic coming into and out of an EC2 instance.

Let us look more deeply at one of AWS’s core services – AWS CloudFormation – which is key to managing workloads on AWS.

1.   CloudFormation

AWS CloudFormation is a service that helps us model and set up our Amazon Web Services resources so that we can spend less time managing those resources and more time focusing on our applications that run in AWS.  We create a template that describes all the AWS resources that we want (like Amazon EC2 instances or S3 buckets), and AWS CloudFormation takes care of provisioning and configuring those resources for us. We don’t need to individually create and configure AWS resources and figure out what’s dependent on what; AWS CloudFormation handles all of that.

A stack is a collection of AWS resources that you can manage as a single unit. In other words, we can create, update, or delete a collection of resources by creating, updating, or deleting stacks. All the resources in a stack are defined by the stack’s AWS CloudFormation template.
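
For instance, assuming the AWS CLI is configured and the template is saved locally as template.yml (both names here are placeholders), a stack can be managed end to end with commands along these lines:

aws cloudformation create-stack --stack-name my-stack --template-body file://template.yml
aws cloudformation update-stack --stack-name my-stack --template-body file://template.yml
aws cloudformation delete-stack --stack-name my-stack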

2.   CloudFormation template

CloudFormation templates can be written in either JSON or YAML.  The structure of the template in YAML is given below:

---
AWSTemplateFormatVersion: "version date"

Description:
  String
Metadata:
  template metadata
Parameters:
  set of parameters
Mappings:
  set of mappings
Conditions:
  set of conditions
Resources:
  set of resources
Outputs:
  set of outputs

In the above yaml file,

  1. AWSTemplateFormatVersion – The AWS CloudFormation template version that the template conforms to.
  2. Description – A text string that describes the template.
  3. Metadata – Objects that provide additional information about the template.
  4. Parameters – Values to pass to our template at runtime (when we create or update a stack). We can refer to parameters from the Resources and Outputs sections of the template.
  5. Mappings – A mapping of keys and associated values that we can use to specify conditional parameter values, like a lookup table. We can match a key to a corresponding value by using the Fn::FindInMap intrinsic function in the Resources and Outputs sections.
  6. Conditions – Conditions that control whether certain resources are created or whether certain resource properties are assigned a value during stack creation or update. For example, we can conditionally create a resource that depends on whether the stack is for a production or test environment.
  7. Resources – Specifies the stack resources and their properties, such as an Amazon Elastic Compute Cloud instance or an Amazon Simple Storage Service bucket.  We can refer to resources in the Resources and Outputs sections of the template.
  8. Outputs – Describes the values that are returned whenever we view our stack’s properties. For example, we can declare an output for an S3 bucket name and then call the aws cloudformation describe-stacks AWS CLI command to view the name.

Resources is the only required section in the CloudFormation template.  All other sections are optional.
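
As a quick illustration of the Outputs section, once a stack is created its output values can be read back with the AWS CLI; the stack name here is a placeholder:

aws cloudformation describe-stacks --stack-name my-stack --query "Stacks[0].Outputs"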

3.   CloudFormation template to create S3 bucket

S3template.yml

Resources:
  HelloBucket:
    Type: AWS::S3::Bucket

In the AWS Console, go to CloudFormation and click on Create Stack.

Upload the template file we created. CloudFormation stores it in an S3 location.

Click Next and give the stack a name.

Click Next and then “Create stack”. After a few minutes, you can see that the stack creation is complete.

Clicking on the Resources tab, you can see that the S3 bucket has been created with the name “s3-stack-hellobucket-buhpx7oucrgn”.  AWS generated this name since we didn’t specify the BucketName property in the YAML.

Note that deleting the stack will delete the S3 bucket which it had created.
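
The same walkthrough can be done from the AWS CLI; a sketch, assuming the template is saved locally as S3template.yml and reusing the stack name from the console steps:

aws cloudformation validate-template --template-body file://S3template.yml
aws cloudformation create-stack --stack-name s3-stack --template-body file://S3template.yml
aws cloudformation describe-stack-resources --stack-name s3-stack
aws cloudformation delete-stack --stack-name s3-stack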

4.   Intrinsic functions

AWS CloudFormation provides several built-in functions that help you manage your stacks.

In the below example, we create two resources – a Security Group and an EC2 Instance, which uses this Security Group.  We can refer to the Security Group resource using the !Ref function.

Ec2template.yml

Resources:
  Ec2Instance:
    Type: 'AWS::EC2::Instance'
    Properties:
      SecurityGroups:
        - !Ref InstanceSecurityGroup
      KeyName: mykey
      ImageId: ''
  InstanceSecurityGroup:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      GroupDescription: Enable SSH access via port 22
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '22'
          ToPort: '22'
          CidrIp: 0.0.0.0/0

Some other commonly used intrinsic functions are

  1. Fn::GetAtt – returns the value of an attribute from a resource in the template.
  2. Fn::Join – appends a set of values into a single value, separated by the specified delimiter. If the delimiter is an empty string, the values are concatenated with no delimiter.
  3. Fn::Sub – substitutes variables in an input string with values that you specify. In our templates, we can use this function to construct commands or outputs that include values that aren’t available until we create or update a stack. A short sketch combining these functions follows.
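
The sketch below extends the EC2 template above with an Outputs section; PublicDnsName and AvailabilityZone are standard attributes of AWS::EC2::Instance:

Outputs:
  InstanceDnsName:
    # Fn::GetAtt (short form !GetAtt) reads an attribute of a resource
    Value: !GetAtt Ec2Instance.PublicDnsName
  InstanceUrl:
    # Fn::Sub substitutes ${...} variables inside a string
    Value: !Sub 'http://${Ec2Instance.PublicDnsName}/'
  InstanceZoneAndId:
    # Fn::Join concatenates values with the given delimiter
    Value: !Join [':', [!GetAtt Ec2Instance.AvailabilityZone, !Ref Ec2Instance]]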
5.   Parameters

Parameters enable us to input custom values to our template each time we create or update a stack.

TemplateWithParameters.yaml

Parameters: 
  InstanceTypeParameter: 
    Type: String
    Default: t2.micro
    AllowedValues: 
      - t2.micro
      - m1.small
      - m1.large
    Description: Enter t2.micro, m1.small, or m1.large. Default is t2.micro.
Resources:
  Ec2Instance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType:
        Ref: InstanceTypeParameter
      ImageId: ami-0ff8a91507f77f867
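
When creating the stack, the parameter value can be supplied from the AWS CLI; a sketch, with a placeholder stack name:

aws cloudformation create-stack --stack-name ec2-stack --template-body file://TemplateWithParameters.yaml --parameters ParameterKey=InstanceTypeParameter,ParameterValue=m1.small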
6.   Pseudo Parameters

Pseudo parameters are parameters that are predefined by AWS CloudFormation. We do not declare them in our template; we use them the same way as we would a parameter, as the argument for the Ref function.

Commonly used pseudo parameters:

  1. AWS::Region – Returns a string representing the AWS Region in which the encompassing resource is being created, such as us-west-2.
  2. AWS::StackName – Returns the name of the stack as specified during aws cloudformation create-stack, such as teststack. A short sketch using both follows.
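
The sketch below tags a hypothetical S3 bucket with its Region and embeds the stack name; the resource and tag names are illustrative:

Resources:
  AssetsBucket:
    Type: AWS::S3::Bucket
    Properties:
      Tags:
        # pseudo parameters are referenced exactly like ordinary parameters
        - Key: Region
          Value: !Ref AWS::Region
        - Key: Source
          Value: !Sub '${AWS::StackName}-assets'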
7.   Mappings

The optional Mappings section matches a key to a corresponding set of named values. For example, if we want to set values based on a region, we can create a mapping that uses the region name as a key and contains the values we want to specify for each region. We use the Fn::FindInMap intrinsic function to retrieve values in a map.

We cannot include parameters, pseudo parameters, or intrinsic functions in the Mappings section.

TemplateWithMappings.yaml

AWSTemplateFormatVersion: "2010-09-09"
Mappings: 
  RegionMap: 
    us-east-1:
      HVM64: ami-0ff8a91507f77f867
      HVMG2: ami-0a584ac55a7631c0c
    us-west-1:
      HVM64: ami-0bdb828fd58c52235
      HVMG2: ami-066ee5fd4a9ef77f1
    eu-west-1:
      HVM64: ami-047bb4163c506cd98
      HVMG2: ami-0a7c483d527806435
    ap-northeast-1:
      HVM64: ami-06cd52961ce9f0d85
      HVMG2: ami-053cdd503598e4a9d
    ap-southeast-1:
      HVM64: ami-08569b978cc4dfa10
      HVMG2: ami-0be9df32ae9f92309
Resources: 
  myEC2Instance: 
    Type: "AWS::EC2::Instance"
    Properties: 
      ImageId: !FindInMap [RegionMap, !Ref "AWS::Region", HVM64]
      InstanceType: m1.small
8.   Outputs

The optional Outputs section declares output values that we can import into other stacks (to create cross-stack references), return in response to describe-stacks calls, or view on the AWS CloudFormation console. For example, we can output the S3 bucket name for a stack to make the bucket easier to find.

In the below example, the output named StackVPC returns the ID of a VPC, and then exports the value for cross-stack referencing with the name VPCID appended to the stack’s name.

Outputs:
  StackVPC:
    Description: The ID of the VPC
    Value: !Ref MyVPC
    Export:
      Name: !Sub "${AWS::StackName}-VPCID"
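
On the consuming side, another stack can then import this value with Fn::ImportValue. A sketch, assuming the exporting stack was named network-stack (making the export name network-stack-VPCID) and using an arbitrary CIDR block:

Resources:
  WebServerSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      # resolves to the exported VPC ID from the other stack
      VpcId: !ImportValue network-stack-VPCID
      CidrBlock: 10.0.0.0/24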


Kubernetes is the reigning market leader when it comes to container orchestration! Any organization working with the container ecosystem is either already using Kubernetes or considering it. However, despite the undoubted ease and speed Kubernetes brings to the container ecosystem, it also needs specialized expertise to deploy and manage.

Many organizations consider the DIY approach to Kubernetes. If you have an in-house IT team with the requisite experience, or if your requirements are large enough to justify the cost of hiring a dedicated Kubernetes team, then an internal Kubernetes strategy could certainly be beneficial.

However, if you don’t fall into the category mentioned above, then managed Kubernetes is the smartest and most cost-effective way ahead. With professionals in the picture, you can be assured of getting a long-term strategy, seamless implementation, and dedicated ongoing service, which will:

  • reduce deployment time
  • provide 24×7 support
  • handle all upgrades and fixes
  • troubleshoot as and when needed

Kubernetes solution providers offer a wide range of services – from fully managed services, to bare-bones implementations, to preconfigured Kubernetes environments on SaaS models, to training for your in-house staff.

Look at your operational needs and your budget and explore the market for Kubernetes services options before you pick the service and the digital partner that ticks all your boxes.   

Meanwhile, do look at our tutorial on troubleshooting Kubernetes deployments.

Kubernetes deployment issues are not always easy to troubleshoot. In some cases, the errors can be resolved easily, and, in some cases, detecting errors requires us to dig deeper and run various commands to identify and resolve the issues.

The first step is to list all pods after installing your application. The following command lists all pods in all namespaces.

kubectl get pods -A

If you find any issues in the pod status, you can then use the kubectl describe, kubectl logs, and kubectl exec commands to get more detailed information.

Debugging Pods
Pod Status Shows ImagePullBackOff or ErrImagePull

This status indicates that your pod could not run because the pod could not pull the image from the container registry. To confirm this, run the kubectl describe command along with the pod identifier to display the details.

kubectl describe pod <pod-identifier>

This command will provide more information about the issue.

  • Image name or tag incorrect.
    • Check the image name and tag and try to pull the image manually on the host to verify:
docker pull <image-name:tag>
  • Authentication failure related to the container registry.
    • Check the secrets, roles, and service principal related to your container registry and try to pull the image manually on the host using docker pull to verify.
Pod Status Shows Waiting

This status indicates your pod has been scheduled to a worker node, but it can’t run on that machine. To confirm this, run the kubectl describe command along with the pod identifier to display the details.

kubectl describe pod <pod-identifier> -n <namespace>

The most common causes related to this issue are

  • Image name or tag incorrect.
    • Check the image name and tag and try to pull the image manually on the host using docker pull to verify.
  • Authentication failure related to the container registry.
    • Check the secrets, roles, and service principal related to your container registry and try to pull the image manually on the host using docker pull to verify.
Pod Status Shows Pending or CrashLoopBackOff

A Pending status indicates your pod could not be scheduled on a node for various reasons, like resource constraints (insufficient CPU or memory) or volume-mounting issues; a CrashLoopBackOff status indicates the pod starts but then repeatedly crashes. To confirm the cause, run the kubectl describe command along with the pod identifier to display the details.

kubectl describe pod <pod-identifier> -n <namespace>

This command will provide more information about the issue. Most common issues are

  • Insufficient resources
    • If resources are insufficient, clean up existing resources or scale your nodes (vertically or horizontally) to increase the available resources.
  • Volume mounting
    • Check your volume’s mounting definition and storage classes.
  • Using hostPort
    • When you bind a pod to a hostPort, there are a limited number of places that the pod can be scheduled. In most cases, hostPort is unnecessary; try using a Service object to expose your pod instead. If you do require hostPort, then you can only schedule as many pods as there are nodes in your Kubernetes cluster.
Pod is crashing or unhealthy

Sometimes the scheduled pods are crashing or unhealthy.  Run kubectl logs to find the root cause.

kubectl logs <pod_identifier> -n <namespace>

If you have multiple containers, run the following command to find the root cause.

kubectl logs <pod_identifier> -c <container_name> -n <namespace>

If your container has previously crashed, you can access the previous container’s crash log with:

kubectl logs --previous <pod_identifier> -c <container_name> -n <namespace>

If your pod is running but shows a 0/1 or 0/2 ready state (if you have multiple containers in your pod), then you need to verify readiness. Check the health check (readiness probe) in this case; a sample probe definition is sketched below.
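
For reference, a minimal readiness probe inside a container spec looks roughly like the following; the path and port are placeholders for your application’s health endpoint:

readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10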

Most common issues are

  • Application issues
    • Run the below command to check the logs.
          kubectl logs <pod_identifier> -c <container_name> -n <namespace>
    • Run the below command to verify the events.
          kubectl describe pod <pod_identifier> -n <namespace>
  • Readiness probe health check failed
    • Check the health check (readiness probe) in this case. Also, check the READY column of the kubectl get pods output to find out whether the readiness probe is passing.
    • Use the same kubectl logs and kubectl describe commands as above to check the logs and the events.
  • Liveness probe health check failed
    • Check the health check (liveness probe) in this case. Also, check the RESTARTS column of the kubectl get pods output to find out whether the liveness probe is failing.
    • Use the same kubectl logs and kubectl describe commands as above to check the logs and the events.
Pod is running but has application issues

In some cases, the pods are running, but the output of the application is incorrect. In this case, you should run the following to find the root cause.

  • Run the below command and identify the issue.
kubectl logs <pod_identifier> -c <container_name> -n <namespace>
  • If you are interested in the last n lines of logs run
kubectl logs <pod_identifier> -c <container_name> --tail <n-lines> -n <namespace>
  • Run commands inside the container using
kubectl exec -it <pod_identifier> -c <container_name> -n <namespace> -- /bin/bash

Run commands like curl, ps, or ls to troubleshoot the issue once you are inside the container.

Pod is running and working but cannot be accessed through services

In some cases, the pods are working as expected but cannot be accessed through the services. The most common causes of this issue are

  • Service not registered properly
    • Check that the service exists, describe the service, and validate the pod selectors by running the following commands.
kubectl get svc
kubectl describe svc <svc-name>
kubectl get endpoints
  • Run the following command to verify the pod selector
kubectl get pods --selector=name={name},{label-name}={label-value}
  • The service may be deployed in a different namespace.
    • Verify that the pod’s containerPort matches up with the Service’s targetPort
  • Service is registered properly but has a DNS issue
    • Get into the container using the exec command and run nslookup, as in the following commands
kubectl get endpoints
kubectl exec -it <pod_identifier> -c <container_name> -- /bin/bash
nslookup <service-name>
  • If you have any issues running curl or nslookup, deploy a debugging pod using the image yauritux/busybox-curl in the same namespace to verify. Run the following command:
kubectl run --generator=run-pod/v1 -it --rm <name> --image=yauritux/busybox-curl -n <namespace>
  • Run the following to verify within the container
curl http://<servicename>
telnet <service-ip> <service-port>
nslookup <servicename>

On July 21, 2015, when Kubernetes v1.0 was released, it redefined the container technology landscape. All the bottlenecks of application deployment, scaling, and management in containers were made simpler and faster with intelligent automation.

Container technology made software development more agile and brought in resource efficiency – they made scaling smoother and faster. However, they also need to be tracked, monitored, and managed, which is where container orchestration and Kubernetes come in.

Kubernetes is an open-source container-orchestration system for automating application deployment, scaling, and management.  It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.

What does Kubernetes do?

Kubernetes allows you to leverage the full potential of your container ecosystem. With automation, it streamlines container workflow and frees up the IT team to concentrate on their core areas of application development by removing the need to manage container networking, storage, logs, alerting, etc. Overall, it automates deploying, scaling, and managing of containerized applications on a cluster of servers.

Key Benefits of Kubernetes

Flexibility for scaling – it enables horizontal infrastructure scaling by quickly adding or removing new servers. Kubernetes has the option of automating vertical scaling, too, by taking into account application-provided metrics.

Health checks and self-healing designed into Kubernetes allow it to maintain high availability of applications and infrastructure.

Enhanced deployment speed – with automated rollouts and rollbacks, canary deployments, and wide-ranging support for a variety of programming languages, Kubernetes speeds up the process of building, testing, and deploying new software.

Let’s understand more about Kubernetes concepts

1. Kubernetes Objects

Kubernetes contains several abstractions that represent the state of our system: deployed containerized applications and workloads, their associated network and disk resources, and other information about what our Kubernetes cluster is doing. These abstractions are represented by objects in the Kubernetes API.  The basic Kubernetes objects include:

  • Pod
  • Service
  • Volume
  • Namespace

In this blog, we will look at the Pod and Service objects.

2. Pods

A pod is a higher level of abstraction grouping containerized components.  A pod consists of one or more containers that are guaranteed to be co-located on the host machine and can share resources.  The basic scheduling unit in Kubernetes is a pod.  The host machines on which the pods are scheduled are called Nodes.

3. Pod definition yaml

Kubernetes objects are mostly created by declaring their configuration in a yaml file.
Given below is a yaml file defining a simple pod.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 8080

In the above yaml file,

  1. apiVersion – denotes which version of the Kubernetes API we are using to create this object.
  2. kind – specifies what kind of object we want to create.  For a Pod object, the apiVersion is always v1.
  3. metadata – has data to uniquely identify the object (name) and labels.
  4. Labels are key/value pairs that are attached to objects.  Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users but do not directly imply semantics to the core system.  So, in the above example, instead of “name: nginx” we can have “appname: nginx”, “name: mynginxapp” or anything we like.
  5. Spec – defines the object specification and differs for each object type.  For Pod object, the spec has an array of containers since a pod consists of one or more containers.
  6. For each container, we provide the below attributes:
  • name – the name of the container.  This can be different from the name of the pod and is not related to it.
  • image – the name of the docker image to be used to build this container.
  • ports – the ports in this container to be exposed outside the pod.  Here we are running the nginx web-server on port 8080 and exposing it.

Suppose we have the above pod definition in a file named pod-definition.yaml; the pod is then created by executing the below Kubernetes command:

$ kubectl create -f pod-definition.yaml
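
Once created, the pod can be verified with kubectl get pods; the output will look roughly like this:

$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          30s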

4. Pod communication and need for services

Each pod in Kubernetes is assigned a unique Pod IP address within the cluster, which allows applications to use ports without the risk of conflict. 

Within the pod, all containers can reference each other on localhost, but a container within one pod has no way of directly addressing another container within another pod; for that, it must use the Pod IP Address.

An application developer should never use the Pod IP Address though, to reference / invoke a capability in another pod, as Pod IP addresses are ephemeral – the specific pod that they are referencing may be assigned to another Pod IP address on restart.  Instead, we should use a reference to a Service, which holds a reference to the target pod at the specific Pod IP Address.

5. Services

In Kubernetes, a Service is an abstraction that defines a logical set of Pods and a policy by which to access them. The set of Pods targeted by a Service is usually determined by a selector.

Sample YAML for a service to expose the pod(s) which we created earlier is given below:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    name: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
      nodePort: 30230
  type: NodePort

This yaml file defines a service named my-service which is used to access the Pods which have a label ‘name: nginx’. 

  1. The selector field of the service must match the label field of the Pods to which we want to connect.
  2. There are 3 ports defined in the above YAML file:
  • Port is the port number that makes a service visible to other services running within the same Kubernetes cluster.
  • Target Port is the port on the Pod where the service is running.  This is an optional field; if not provided, Kubernetes assigns the same value as the Port field.
  • Node port is the port on which the service can be accessed by external users.  NodePort can only have values from 30000 to 32767.  If this optional field is not provided in the definition, Kubernetes automatically assigns a value for it.

To create the service object, enter the above yaml code in a file named service-defn.yaml and execute the command given below:

$ kubectl create -f service-defn.yaml
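
The service can then be inspected with kubectl get svc; the cluster IP below is illustrative:

$ kubectl get svc my-service
NAME         TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
my-service   NodePort   10.0.171.239   <none>        80:30230/TCP   15s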

6. Types of Services

In the above example, we have type: NodePort for the service.  The different values allowed for the type field are:

  1. ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default Service type.
  2. NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort).   A ClusterIP Service, to which the NodePort Service routes, is automatically created.  We will be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>
  3. LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
  4. ExternalName: Maps the Service to the contents of the externalName field (e.g., foo.bar.example.com), by returning a CNAME record with its value.  No proxying of any kind is set up.  We need CoreDNS version 1.7 or higher to use the ExternalName type.

Many of you are running your mission-critical applications on containers, and if you haven’t already deployed Kubernetes to manage your container ecosystem, then chances are you soon will.

If you are considering a Kubernetes implementation, then there are several ways to go about it –

  • In-house Kubernetes deployment – if you have a large enough IT team with the requisite expertise in Kubernetes architecture and deployment, then getting your Kubernetes cluster up and running in-house is certainly a possibility. Kubernetes deployment is a complex process and requires a mix of specific skill sets. Also running and monitoring a Kubernetes platform requires the full-time services of a dedicated team, and your requirement must justify this additional cost.
  • SaaS Solutions for Kubernetes – if your business needs are specific and straightforward, then you can explore the market for pre-designed Kubernetes offerings on a SaaS payment model.
  • Fully outsourced (managed) Kubernetes services – if budget permits and your business demands, then bringing in the professionals is a safe and hassle-free solution. From infrastructure assessments to building a Kubernetes strategy to engineering, deploying, and managing enterprise-wide Kubernetes solutions – you can outsource your entire project to experts.
  • Many service providers like CloudIQ also offer day-to-day management and support as well as Kubernetes training to your IT staff to set up internal management expertise.

If the last decade of cloud has taught us anything, it is that when it comes to technology, bringing in professionals to do the job always turns out to be the best option in the long run. Kubernetes is a sophisticated platform that requires specialized competencies. Here is a look at one of our tutorials on Kubernetes Networking – how it all works under the hood.

KUBERNETES NETWORKING – DATA PLANE

In Kubernetes, applications run as a set of pods with their own IP address and port. Kubernetes provides an abstract way to expose the applications/pods as a network service. Various forms of the service abstractions include ClusterIP, NodePort, Load Balancer & Ingress. When service requests enter the Kubernetes cluster, the service abstractions have to be directed to individual service endpoints of Pods. This data plane function is implemented using a Linux kernel feature – iptables.

Iptables is used to set up, maintain, and inspect the tables of IP packet filter rules in the Linux kernel. Several different tables may be defined. Each table contains a number of built-in chains and may also contain user-defined chains. Each chain is a list of rules which can match a set of packets. Each rule specifies what to do with a packet that matches. This is called a ‘target’, which may be a jump (-j) to a user-defined chain in the same table.

The service (SVC) to service endpoint (SEP) mappings are programmed using KUBE-SERVICES user-defined chains in the NAT (Network Address Translation) table. The contents of the iptables can be extracted using the iptables-save command.

# Generated by iptables-save v1.6.0 on Mon Sep 16 08:00:17 2019
*nat
:PREROUTING ACCEPT [1:52]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [23:1438]
:POSTROUTING ACCEPT [10:592]
:DOCKER - [0:0]
:IP-MASQ-AGENT - [0:0]

:KUBE-SERVICES - [0:0]

-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES

Let’s consider the Services in the following example.

cloudiq@hubandspoke:~$ kubectl get svc --namespace=workshop-development

NAME                                                         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ciq-ingress-workshop-development-nginx-ingress-controller    LoadBalancer   192.168.5.65   10.82.0.97    80:30512/TCP,443:31512/TCP   19h

Here we have the following service abstractions that are defined.

LoadBalancerIP=10.82.0.97

NodePort=30512/31512

ClusterIP=192.168.5.65

The above services have to be translated to individual service endpoints. The rules performing matching and translation are programmed using custom chains in the NAT table of iptables, as below.

Let’s look at the LoadBalancer=10.82.0.97 service.

cloudiq@hubandspoke:~$ cat ciq-dev-aks-iptables-save.output | grep 10.82.0.97
-A KUBE-SERVICES -d 10.82.0.97/32 -p tcp -m comment --comment "workshop-development/ciq-ingress-workshop-development-nginx-ingress-controller:http loadbalancer IP" -m tcp --dport 80 -j KUBE-FW-SXB4UOYSLPHVISJM
-A KUBE-SERVICES -d 10.82.0.97/32 -p tcp -m comment --comment "workshop-development/ciq-ingress-workshop-development-nginx-ingress-controller:https loadbalancer IP" -m tcp --dport 443 -j KUBE-FW-JLRSZDR3OXJ4SUA2

Let’s look at the HTTPS service available on port 443.

cloudiq@hubandspoke:~$ cat ciq-dev-aks-iptables-save.output | grep KUBE-FW-JLRSZDR3OXJ4SUA2

:KUBE-FW-JLRSZDR3OXJ4SUA2 - [0:0]
-A KUBE-FW-JLRSZDR3OXJ4SUA2 -m comment --comment "workshop-development/ciq-ingress-workshop-development-nginx-ingress-controller:https loadbalancer IP" -j KUBE-MARK-MASQ
-A KUBE-FW-JLRSZDR3OXJ4SUA2 -m comment --comment "workshop-development/ciq-ingress-workshop-development-nginx-ingress-controller:https loadbalancer IP" -j KUBE-SVC-JLRSZDR3OXJ4SUA2
-A KUBE-FW-JLRSZDR3OXJ4SUA2 -m comment --comment "workshop-development/ciq-ingress-workshop-development-nginx-ingress-controller:https loadbalancer IP" -j KUBE-MARK-DROP
-A KUBE-SERVICES -d 10.82.0.97/32 -p tcp -m comment --comment "workshop-development/ciq-ingress-workshop-development-nginx-ingress-controller:https loadbalancer IP" -m tcp --dport 443 -j KUBE-FW-JLRSZDR3OXJ4SUA2

We see the NodePort and ClusterIP translations below. The SVC service chain points to two different service endpoints. In order to select between the two service endpoints, a random probability measure is calculated, and the appropriate SEP service endpoint is selected.

cloudiq@hubandspoke:~$ cat ciq-dev-aks-iptables-save.output | grep KUBE-SVC-JLRSZDR3OXJ4SUA2

:KUBE-SVC-JLRSZDR3OXJ4SUA2 - [0:0]
-A KUBE-FW-JLRSZDR3OXJ4SUA2 -m comment --comment "workshop-development/ciq-ingress-workshop-development-nginx-ingress-controller:https loadbalancer IP" -j KUBE-SVC-JLRSZDR3OXJ4SUA2
-A KUBE-NODEPORTS -p tcp -m comment --comment "workshop-development/ciq-ingress-workshop-development-nginx-ingress-controller:https" -m tcp --dport 31512 -j KUBE-SVC-JLRSZDR3OXJ4SUA2
-A KUBE-SERVICES -d 192.168.5.65/32 -p tcp -m comment --comment "workshop-development/ciq-ingress-workshop-development-nginx-ingress-controller:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-JLRSZDR3OXJ4SUA2
-A KUBE-SVC-JLRSZDR3OXJ4SUA2 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-4R3FOXQSM5T2ZADC
-A KUBE-SVC-JLRSZDR3OXJ4SUA2 -j KUBE-SEP-PI7R3ONIYH4XJLMW

In the SEP service endpoints, the actual DNAT is performed.

cloudiq@hubandspoke:~$ cat ciq-dev-aks-iptables-save.output | grep KUBE-SEP-4R3FOXQSM5T2ZADC
:KUBE-SEP-4R3FOXQSM5T2ZADC - [0:0]
-A KUBE-SEP-4R3FOXQSM5T2ZADC -s 10.82.0.10/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-4R3FOXQSM5T2ZADC -p tcp -m tcp -j DNAT --to-destination 10.82.0.10:443
-A KUBE-SVC-JLRSZDR3OXJ4SUA2 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-4R3FOXQSM5T2ZADC
cloudiq@hubandspoke:~$ cat ciq-dev-aks-iptables-save.output | grep KUBE-SEP-PI7R3ONIYH4XJLMW
:KUBE-SEP-PI7R3ONIYH4XJLMW - [0:0]
-A KUBE-SEP-PI7R3ONIYH4XJLMW -s 10.82.0.82/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-PI7R3ONIYH4XJLMW -p tcp -m tcp -j DNAT --to-destination 10.82.0.82:443
-A KUBE-SVC-JLRSZDR3OXJ4SUA2 -j KUBE-SEP-PI7R3ONIYH4XJLMW

Automation Testing helps complete the entire software testing life cycle (STLC) in less time and improves the efficiency of the testing process.

Test Automation enables teams to verify functionality, test for regression, and run simultaneous tests efficiently. In this article, we will take a detailed look at the automation testing tools available and the standards and best practices to be followed during test automation.

Following the best practices for the Software Testing Life Cycle (unit testing, integration testing, and system testing) ensures that the client gets the software as intended, without any bugs. End-to-end testing is the methodology used to test whether the flow of an application is performing as designed from start to finish. Carrying out end-to-end tests helps identify system dependencies and ensure that the right information flows across the various system components.

Ultimately Automation Testing increases the speed of test execution and the test coverage.

When to Choose Automation Testing
  • There is a lot of regression work
  • The GUI is the same, but there are frequent functional changes
  • Requirements do not change frequently
  • Load and performance testing with many virtual users
  • Repetitive test cases that lend themselves well to automation and save time
  • Huge projects
  • Projects that need to test the same areas repeatedly

Steps to Implement Automation Testing
  • Identify areas within software to automate
  • Choose the appropriate tool for test automation
  • Write test scripts
  • Develop test suites
  • Execute test scripts
  • Build result reports
  • Find possible bugs or performance issues
Choosing your Automation Testing Tool

The strategy to adopt test automation should clearly define when to opt for automation, its scope and selection of the right kind of tools for execution. And when it comes to tools the top ones to go for are

  • Cypress
  • Selenium
  • Protractor
  • Appium (Mobile)
Why Cypress?

Cypress is a JavaScript based testing framework built for the modern web. Cypress helps to create End-to-end tests, Integration tests and Unit tests. Cypress takes a different approach compared to other testing frameworks, since it’s executed in the same run loop as the application. It also leverages a Node.js server to handle any task that needs to happen outside of the browser. With its ability to understand everything happening inside and outside of the browser, it produces more consistent results.

Key Features of Cypress
  • Automatic Waiting – No need for adding wait and sleep.
  • Spies, Stubs, and Clocks – Verify and control the behaviour of functions, server responses, or timers.
  • Network traffic control and monitoring – Easily control, stub, and test edge cases without involving your server. You can stub network traffic however you like.
  • Consistent Results – Cypress architecture doesn’t use Selenium or WebDriver. It is fast and consistent, and runs reliable, flake-free tests.
  • Screenshots and Videos – View screenshots taken automatically on failure, or videos of your entire test suite when run from the CLI.
Azure CICD Setup with Cypress

Cypress runs on most CI providers, including the following:

Azure DevOps / VSTS CI / TeamFoundation
BitBucket
CircleCI
Docker
GitLab
Jenkins
TravisCI

Azure DevOps – Steps to Integrate Cypress Automation Tests
  • Pre-Build Testing
  • Install the Node module and run application in test mode
  • Run the tests
  • Publish the test results
  • Cypress Containerization
  • Build the docker container of cypress
  • Push the image to container
  • Publish the Build

Before we get started here are the basic Cypress installation commands

Clean up the old results.
$ rm -rf cypress/reports/

Run the cypress application with the required spec file.
$ cypress run --spec "cypress/integration/**/*.spec.ts" // mention your spec file

Configure the mocha reports path for publishing test results.
--reporter junit --reporter-options 'mochaFile=cypress/reports/test-output-[hash].xml,toConsole=true'

Uninstall the application.
$ npm uninstall cypress-multi-reporters; npm uninstall cypress-promise; npm uninstall cypress

Pre-Build Testing

It is critical to test the application before the Build, Deployment or Release. Essentially the process involves regression and smoke testing. And don’t forget the sanity checks before the build is deployed in the staging environment.

Cypress comes in handy for testing Angular / JavaScript applications before they are deployed to a staging or production environment.

Install the Node module and run application in test mode

Install the required node modules of the application, then run the application in test mode.

$ npm install --save-dev start-server-and-test

$ start-server-and-test start http://localhost:4200

Publish the test results

The results of the Cypress test execution are stored in the specified path and are added to the Azure DevOps test results. Cypress supports the JUnit, Mocha, and Mochawesome test results reporter formats, and provides options to create customised test results and merge all the test results as well.

Cypress Containerization

Cypress supports docker containerization and that makes it easy to set it up in a cluster environment like AKS. The Cypress base images are available at the link below.

https://github.com/cypress-io/cypress-docker-images

Copy the package.json and UI source code to the app folder and run the Cypress test. The following commands are used to run the Docker container and execute the tests.

- script: |
    docker run -d -it --name cypressName cypressName:cypressImageTag bash
    docker commit -p cypressName cypressName:cypressImageTag
    docker stop cypressName
    docker rm -f cypressName

- script: docker tag cypressName:cypressImageTag
  displayName: Tag Cypress image

- task: Docker@1
  displayName: Push image To Registry
  inputs:
    command: push
    azureSubscriptionEndpoint: azureSubscriptionEndpoint
    azureContainerRegistry: $(azureContainerRegistry)
    imageName: acrImageName:BuildId

- script: sudo rm -rf /test-results/*
  displayName: Removing Previous Results

- task: ShellScript@2
  displayName: 'Bash Script - cypress base image post-deployment'
  inputs:
    scriptPath: ./cypress-deployment.sh
    args: $(azureRegistry) $(cypressImageName) $(azureContainerValue) $(CYPRESS_OPTIONS)
  continueOnError: true

- task: PublishTestResults@1
  displayName: 'Publish Test Results ./test-results-*.xml'
  inputs:

cypress-base-image-post-deployment.sh

docker run -v $systemSourceDirectory:/app/cypress/reports --name vca-arp-ui \
  $cypress_Latestimage npx cypress run $cypressOptions bash

Now the container should be set up on your local machine and start running your specs.

Cypress is simple and easily integrates with your CI environment. Apart from the browser support, Cypress reduces the efforts of manual testing and is relatively faster when compared to other automation testing tools.

In this article we will discuss how to create security groups in AWS for Kubernetes. The goal is to set up a Kubernetes cluster on AWS EC2, having provisioned your virtual machines. You are going to need two security groups: one for the control plane load balancer, and another for the VMs.

Creating a Security Group through the AWS Console

Prerequisite: You should have a VPC (virtual private cloud) set up.

Log into the AWS EC2 (or VPC) console. On the left-hand menu, under Network and Security, click Security Groups.

Click on Create Security Group.

Enter a Name and a Description for your Security Group. Then select your VPC from the drop-down menu. Click Add Rule.

You will need 2 TCP ingress rules, one over port 6443, another over port 443. We are choosing to allow the Source from anywhere. In production you may want to restrict the CIDR, IP, or security group that can reach this load balancer.

We are choosing to leave the outbound rules as default, in which all outbound traffic is permitted.

Click Create and your security group is created!

Select your security group in the console. You may want to give your security group a Name (in addition to the Group Name that you specified when creating it).

But you are not done yet: you must add tags to your security group. These tags will alert AWS that this security group is to be used for Kubernetes. Click on the Tags tab at the bottom of the window. Then click Add/Edit Tags.

You will need 2 tags:
  • Name: KubernetesCluster. Value: <the name of your Kubernetes cluster>
  • Name: kubernetes.io/cluster/<the name of your Kubernetes cluster>. Value: owned

Click Save and your tags are saved!
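
The same security group can be created and tagged with the AWS CLI; a sketch, where the VPC ID, security group ID, and cluster name (mycluster) are placeholders:

aws ec2 create-security-group --group-name k8s-api-lb --description "Control plane load balancer" --vpc-id vpc-0abc123
aws ec2 authorize-security-group-ingress --group-id sg-0def456 --protocol tcp --port 6443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0def456 --protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 create-tags --resources sg-0def456 --tags Key=KubernetesCluster,Value=mycluster Key=kubernetes.io/cluster/mycluster,Value=owned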

Creating a Security Group for the Virtual Machines

Follow the steps above to create a security group for your virtual machines. Here are the ports that you will need to open for your control plane VMs:

The master node:
  1. 22 for SSH from your bastion host
  2. 6443 for the Kubernetes API Server
  3. 2379-2380 for the ETCD server
  4. 10250 for the Kubelet health check
  5. 10252 for the Kube controller manager
  6. 10255 for the read only kubelet API
The worker nodes:
  1. 22 for SSH
  2. 10250 for the kubelet health check
  3. 30000-32767 for external applications. However, it is more likely that you will expose external applications to outside the cluster via load balancers and restrict access to these ports to within your VPC.
  4. 10255 for the read only kubelet API

We have chosen to combine the master and the worker rules into one security group for convenience. You may want to separate them into 2 security groups for extra security.

Follow the step-by-step instructions detailed above and you will have successfully created AWS Security Groups for Kubernetes.

What is New Relic Synthetics?

New Relic Synthetics is a set of automated, scriptable tools to monitor websites, critical business transactions, and API endpoints. Detailed individual results from each monitor run can also be viewed. With access to New Relic Insights, in-depth queries of data from Synthetics monitors can be run, and custom dashboards can be created.

Features of New Relic Synthetics:
  • Easy to set up real time instrumentation and analytics
  • REST API functions
  • Real browsers
  • Comparative charting with Browser
  • New Relic Insights support
  • Advanced scripted monitoring
  • Global test coverage
Different types of Synthetic Monitor:

There are four types of monitors.

a) Ping monitor:

Ping monitors are the simplest type of monitor. These monitors are used to check if an application is online. The Synthetics ping monitor uses a simple Java HTTP client to make requests to your site.

b) API tests:

API tests are used to monitor API endpoints. This can ensure that the app server works in addition to the corresponding website. New Relic uses the “http request module” internally to make HTTP calls to API endpoint and validate the results.

c) Browser:

Simple browser monitors essentially are simple, pre-built scripted browser monitors. These monitors make a request to the site using an instance of Google Chrome.

d) Script_Browser:

Scripted browser monitors are used for more sophisticated, customized monitoring. A custom script can be created to navigate to the website, take specific actions and ensure that the specific resources are present.

Creation of Synthetic Monitor:

API Test Monitor:

Step 1:

  • Log in to New Relic

Step 2 – Create synthetic monitor

  • Click “Synthetics” in the New Relic dashboard, then click “Add new” in the upper-right corner.

Step 3: Enter the Required Details

  • Select “API Test” as the monitor type.
  • Enter the monitor name under details
  • Select one location for the monitor under monitoring locations.
  • Set the Schedule – set the frequency for monitoring. For example, on selecting a frequency of 10 minutes, the monitor runs its check every 10 minutes.
  • Set Notification – notifications to email IDs can be set with the help of a new alert policy or appended to an existing alert policy. For an existing alert policy, click on “Add to an existing alert policy” and select the policy. For a new policy, the email address and policy name have to be given. There are three types of policy:
    1. By Policy – only one open incident at a time for this alert policy.
    2. By Condition – only one open incident at a time per alert condition.
    3. By condition and entity – open an incident every time a condition is violated.
  • Only on completing the above steps can the script be written, by clicking on “Write Your script”.
  • Click on “Create monitor” after the monitor creation steps are done.
PING Monitor:

Step 1:

  • Log in to New Relic

Step 2 – Create synthetic monitor

  • Click “Synthetics” in the New Relic dashboard, then click “Add new” in the upper-right corner.

Step 3: Enter the Required Details

  • Select “Ping” as the monitor type.
  • Enter the monitor name under details
  • Enter the URL and the expected response for the corresponding URL.
  • Select one location for the monitor under monitoring locations.
  • Set the Schedule – set the frequency for monitoring. For example, on selecting a frequency of 10 minutes, the monitor runs its check every 10 minutes.
  • Set Notification – notifications to email IDs can be set with the help of a new alert policy or appended to an existing alert policy. For an existing alert policy, click on “Add to an existing alert policy” and select the policy. For a new policy, the email address and policy name have to be given. There are three types of policy:
    1. By Policy – only one open incident at a time for this alert policy.
    2. By Condition – only one open incident at a time per alert condition.
    3. By condition and entity – open an incident every time a condition is violated.
  • Only on completing the above steps, the Ping monitor gets created by clicking on “Create monitor”.

Synthetic Monitor Functionality:

API Test:
Pass Scenario:

The script in this scenario stores data using the POST method, then passes the value to a callback function. A callback function is simply a function that is passed into another function as an argument.

Here, the callback function has three arguments: error, response, and body.

The script compares the value “gear” against “10” in the JSON body. Since both values are the same, no assertion error is triggered.

In case of a value mismatch, an assertion error is thrown.

Failure scenario:

In the failure scenario, the values do not match the JSON body value, so an assertion error is thrown.

In case of an assertion error, an alert is sent to the mail ID given in the notification channel. The assertion error will not be resolved until the value is made “10”.

Mail Alert: (Ping & API Test)

The error log shows the details of the failed assertion.

After the error is fixed, an update is sent to the notification channel.

Delete a Monitor: (Ping & API Test)
  • From the Monitors list, select the monitor which needs to be deleted.
  • In the selected monitor, under Settings, click on General to view the monitor settings page.
  • Select the trash icon; an alert popup will appear. Click “OK” in the popup and the monitor will be deleted.
