As more and more organizations run their business-critical applications in containers on Azure, new challenges arise in monitoring and managing them. There is the Azure dashboard, of course, but with elaborate set-ups IT teams feel the need for a more intuitive dashboard to monitor and track Azure services. The answer: Grafana. Grafana […]



Welcome to Cloud View!

This week we present some fresh articles on the evolution of the most dominant technologies of the coming decade.

Predictions

Blockchain in 2020

2019 has been a year of great highs and some lows for blockchain technology. The coming year is going to see significant growth and consolidation of large blockchain protocols and digital assets.

Cybersecurity in 2020

Everyone agrees that security is going to be ALL IMPORTANT in the coming year. Here is an article that positions 2020 as the year of the breach. We guarantee you will bump security to the top of your list after reading this.


Industry insights

New features in Azure Monitor Metrics Explorer

A few months ago, Microsoft’s Azure customers gave feedback about the use of metrics in the Azure Portal. Now the Microsoft team has come back with new features that address the community’s main concerns.


The Update Framework (TUF)

The Update Framework (TUF), an open-source technology that secures software update systems, is the ninth project to join the CNCF’s list of mature technologies.


From CloudIQ

Kubernetes Deployment Controller – An Inside Look

Kubernetes Deployment Controller helps monitor and manage the upgrade, downgrade, and scaling of services without any disruption or downtime. Here’s a detailed look at the inner workings of Kubernetes Deployment Controller.

Implementing Azure AD Pod Identity in AKS Cluster

A cloud-based identity and access management service becomes a necessity when pods in an AKS cluster need to access other Azure cloud resources and services. Here is a detailed look at how Azure AD Pod Identity helps.

Containers and container orchestration have become the default system for any DevOps team that wants to scale on demand, reduce costs, and deliver faster. And to get the best out of container technology, Kubernetes is the way to go. A recommended Kubernetes practice is to manage pods through a Deployment; this way, they can be monitored and restarted if a failure occurs.

A deployment is created by using a Kubernetes Deployment Controller object. The application (in a container) is deployed to Kubernetes by declaratively passing a desired state to the Kubernetes Deployment Controller. The Deployment controller object is used to monitor and manage the upgrade, downgrade, and scaling of services (e.g., pods) without any disruption or downtime. This is possible because the Deployment controller is the single source of truth for the sizes of new and old replica sets. It maintains multiple replica sets, and when you describe a desired state, it changes the actual state at a controlled pace.
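
To make the declarative model concrete, here is a minimal Deployment manifest; the name, image, and replica count are illustrative assumptions rather than values taken from this article:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical application name
spec:
  replicas: 3                   # desired state: three pod replicas
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.17       # illustrative container image
        ports:
        - containerPort: 80

Applying this manifest (kubectl apply -f deployment.yaml) hands the desired state to the Deployment controller, which then creates and manages the underlying ReplicaSet and pods.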

Here’s a detailed look at the inner workings of the Kubernetes Deployment controller.

The K8s Deployment controller is responsible for the following functions (illustrative kubectl commands that exercise each of them follow the list):

– Managing a set of pods in the form of Replica Sets & Hash-based labels
– Rolling out new versions of application through new Replica Sets
– Rolling back to old versions of application through old Replica Sets
– Pause & Resume Rollout/Rollback functions
– Scale-Up/Down functions
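
These functions surface to users through everyday kubectl commands. A few hedged examples, using a hypothetical deployment named my-app:

kubectl set image deployment/my-app my-app=nginx:1.17   # roll out a new version via a new ReplicaSet
kubectl rollout undo deployment/my-app                  # roll back to the previous ReplicaSet
kubectl rollout pause deployment/my-app                 # pause an in-progress rollout
kubectl rollout resume deployment/my-app                # resume a paused rollout
kubectl scale deployment/my-app --replicas=5            # scale the Deployment up or down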

As the Kubernetes documentation describes it: “The Kubernetes controller manager is a daemon that embeds the core control loops shipped with Kubernetes. In applications of robotics and automation, a control loop is a non-terminating loop that regulates the state of the system. In Kubernetes, a controller is a control loop that watches the shared state of the cluster through the apiserver and makes changes attempting to move the current state towards the desired state. Examples of controllers that ship with Kubernetes today are the replication controller, endpoints controller, namespace controller, and serviceaccounts controller.”

These control loops are registered in NewControllerInitializers:

func NewControllerInitializers(loopMode ControllerLoopMode) map[string]InitFunc {
        controllers := map[string]InitFunc{}
        controllers["endpoint"] = startEndpointController
        controllers["endpointslice"] = startEndpointSliceController
        controllers["replicationcontroller"] = startReplicationController
        controllers["podgc"] = startPodGCController
        controllers["resourcequota"] = startResourceQuotaController
        controllers["namespace"] = startNamespaceController
        controllers["serviceaccount"] = startServiceAccountController
        controllers["garbagecollector"] = startGarbageCollectorController
        controllers["daemonset"] = startDaemonSetController
        controllers["job"] = startJobController
        controllers["deployment"] = startDeploymentController
        controllers["replicaset"] = startReplicaSetController
        controllers["horizontalpodautoscaling"] = startHPAController
        controllers["disruption"] = startDisruptionController
        controllers["statefulset"] = startStatefulSetController
        controllers["cronjob"] = startCronJobController
        controllers["csrsigning"] = startCSRSigningController
        controllers["csrapproving"] = startCSRApprovingController
        controllers["csrcleaner"] = startCSRCleanerController
        controllers["ttl"] = startTTLController
        controllers["bootstrapsigner"] = startBootstrapSignerController
        controllers["tokencleaner"] = startTokenCleanerController
        controllers["nodeipam"] = startNodeIpamController
        controllers["nodelifecycle"] = startNodeLifecycleController
        if loopMode == IncludeCloudLoops {
                controllers["service"] = startServiceController
                controllers["route"] = startRouteController
                controllers["cloud-node-lifecycle"] = startCloudNodeLifecycleController
                // TODO: volume controller into the IncludeCloudLoops only set.
        }
        controllers["persistentvolume-binder"] = startPersistentVolumeBinderController
        controllers["attachdetach"] = startAttachDetachController
        controllers["persistentvolume-expander"] = startVolumeExpandController
        controllers["clusterrole-aggregation"] = startClusterRoleAggregrationController
        controllers["pvc-protection"] = startPVCProtectionController
        controllers["pv-protection"] = startPVProtectionController
        controllers["ttl-after-finished"] = startTTLAfterFinishedController
        controllers["root-ca-cert-publisher"] = startRootCACertPublisher

        return controllers
}

Let’s look at the inner workings of the Deployment controller. It watches for the following object updates.

func startDeploymentController(ctx ControllerContext) (http.Handler, bool, error) {
        if !ctx.AvailableResources[schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}] {
                return nil, false, nil
        }
        dc, err := deployment.NewDeploymentController(
                ctx.InformerFactory.Apps().V1().Deployments(),
                ctx.InformerFactory.Apps().V1().ReplicaSets(),
                ctx.InformerFactory.Core().V1().Pods(),
                ctx.ClientBuilder.ClientOrDie("deployment-controller"),
        )
        if err != nil {
                return nil, true, fmt.Errorf("error creating Deployment controller: %v", err)
        }
        go dc.Run(int(ctx.ComponentConfig.DeploymentController.ConcurrentDeploymentSyncs), ctx.Stop)
        return nil, true, nil
}

The Deployment controller registers the following event handlers.

dInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
                AddFunc:    dc.addDeployment,
                UpdateFunc: dc.updateDeployment,
                // This will enter the sync loop and no-op, because the deployment has been deleted from the store.
                DeleteFunc: dc.deleteDeployment,
        })
        rsInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
                AddFunc:    dc.addReplicaSet,
                UpdateFunc: dc.updateReplicaSet,
                DeleteFunc: dc.deleteReplicaSet,
        })
        podInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
                DeleteFunc: dc.deletePod,
        })

Since Kubernetes controllers work asynchronously, events are processed through work queues and worker routines.

func (dc *DeploymentController) addDeployment(obj interface{}) {
        d := obj.(*apps.Deployment)
        klog.V(4).Infof("Adding deployment %s", d.Name)
        dc.enqueueDeployment(d)
}
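
On the consuming side, the controller follows the standard client-go work-queue pattern. Below is a simplified sketch of the worker loop (closely modeled on the actual controller, but not verbatim source):

// worker runs a loop that pulls items off the work queue and processes them.
func (dc *DeploymentController) worker() {
        for dc.processNextWorkItem() {
        }
}

// processNextWorkItem takes one key from the queue, hands it to the sync
// handler (wired to syncDeployment), and requeues the key on error.
func (dc *DeploymentController) processNextWorkItem() bool {
        key, quit := dc.queue.Get()
        if quit {
                return false
        }
        defer dc.queue.Done(key)

        err := dc.syncHandler(key.(string))
        dc.handleErr(err, key)

        return true
}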

Items from the queue are handled by the “syncDeployment” handler. Some of the work done by the handler is shown below.

        // List ReplicaSets owned by this Deployment, while reconciling ControllerRef
        // through adoption/orphaning.
        rsList, err := dc.getReplicaSetsForDeployment(d)

        // List all Pods owned by this Deployment, grouped by their ReplicaSet.
        // Current uses of the podMap are:
        //
        // * check if a Pod is labeled correctly with the pod-template-hash label.
        // * check that no old Pods are running in the middle of Recreate Deployments.
        podMap, err := dc.getPodMapForDeployment(d, rsList)

        // Update deployment conditions with an Unknown condition when pausing/resuming
        // a deployment. In this way, we can be sure that we won't timeout when a user
        // resumes a Deployment with a set progressDeadlineSeconds.
        if err = dc.checkPausedConditions(d); err != nil {
                return err
        }

        // rollback is not re-entrant in case the underlying replica sets are updated with a new
        // revision so we should ensure that we won't proceed to update replica sets until we
        // make sure that the deployment has cleaned up its rollback spec in subsequent enqueues.
        if getRollbackTo(d) != nil {
                return dc.rollback(d, rsList)
        }

        scalingEvent, err := dc.isScalingEvent(d, rsList)
        if err != nil {
                return err
        }
        if scalingEvent {
                return dc.sync(d, rsList)
        }

        switch d.Spec.Strategy.Type {
        case apps.RecreateDeploymentStrategyType:
                return dc.rolloutRecreate(d, rsList, podMap)
        case apps.RollingUpdateDeploymentStrategyType:
                return dc.rolloutRolling(d, rsList)
        }

Sync is responsible for reconciling deployments on scaling events or when they are paused.

func (dc *DeploymentController) sync(d *apps.Deployment, rsList []*apps.ReplicaSet) error {
        newRS, oldRSs, err := dc.getAllReplicaSetsAndSyncRevision(d, rsList, false)
        if err != nil {
                return err
        }
        if err := dc.scale(d, newRS, oldRSs); err != nil {
                // If we get an error while trying to scale, the deployment will be requeued
                // so we can abort this resync
                return err
        }

        // Clean up the deployment when it's paused and no rollback is in flight.
        if d.Spec.Paused && getRollbackTo(d) == nil {
                if err := dc.cleanupDeployment(oldRSs, d); err != nil {
                        return err
                }
        }

        allRSs := append(oldRSs, newRS)
        return dc.syncDeploymentStatus(allRSs, newRS, d)
}


// scale scales proportionally in order to mitigate risk. Otherwise, scaling up can increase the size
// of the new replica set and scaling down can decrease the sizes of the old ones, both of which would
// have the effect of hastening the rollout progress, which could produce a higher proportion of unavailable
// replicas in the event of a problem with the rolled-out template. Should run only on scaling events or
// when a deployment is paused and not during the normal rollout process.

func (dc *DeploymentController) scale(deployment *apps.Deployment, newRS *apps.ReplicaSet, oldRSs []*apps.ReplicaSet) error {

        // If there is only one active replica set then we should scale that up to the full count of the
        // deployment. If there is no active replica set, then we should scale up the newest replica set.
        if activeOrLatest := deploymentutil.FindActiveOrLatest(newRS, oldRSs); activeOrLatest != nil {

        // If the new replica set is saturated, old replica sets should be fully scaled down.
        // This case handles replica set adoption during a saturated new replica set.
        if deploymentutil.IsSaturated(deployment, newRS) {

        // There are old replica sets with pods, and the new replica set is not saturated.
        // We need to proportionally scale all replica sets (new and old) in case of a
        // rolling deployment.
        if deploymentutil.IsRollingUpdate(deployment) {

                // Number of additional replicas that can be either added or removed from the total
                // replicas count. These replicas should be distributed proportionally to the active
                // replica sets.
                deploymentReplicasToAdd := allowedSize - allRSsReplicas

                // The additional replicas should be distributed proportionally amongst the active
                // replica sets from the larger to the smaller in size replica set. Scaling direction
                // drives what happens in case we are trying to scale replica sets of the same size.
                // In such a case when scaling up, we should scale up newer replica sets first, and
                // when scaling down, we should scale down older replica sets first.
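
To make the proportional logic concrete, consider an illustrative (hypothetical) scenario: a rolling update is in progress with the old ReplicaSet at 8 replicas and the new ReplicaSet at 2, and the user scales the Deployment from 10 to 15 replicas. The 5 additional replicas are distributed in proportion to each ReplicaSet’s share of the current total, so the old ReplicaSet grows by roughly 8/10 × 5 = 4 replicas and the new one by 2/10 × 5 = 1 replica, preserving the rollout’s progress and risk profile while honoring the new size.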

We hope this article helped you understand the inner workings of Kubernetes deployment controller. If you would like to learn more about Kubernetes and get certified, join our 2-day Kubernetes workshop.

Welcome to Cloud View!

For the last couple of weeks, we have been curating predictions for the coming year (and decade) from well-regarded sources. Now it’s time to drill down deeper into specific areas and find out what experts in the field see in store for the future.

Predictions

Cybersecurity: Mitigating cyber-attacks and risks

Forbes has put out a really exhaustive list of predictions (141, to be exact!) in the cybersecurity realm. These all come from key players and professionals in the digital arena – CIOs, CEOs, CFOs, and security heads from across the digital spectrum weigh in on what they think is crucial to mitigating cyber-attacks and risks in the coming few years.

Future of DevOps

DevOps is all about bringing the power of collaboration to executing business ideas; turning organizational visions into applications that drive growth and profits. So what does the future hold for the DevOps community?


Industry Speak

AT&T integrating 5G with Microsoft cloud

5G has been in the news for all the wrong reasons, but finally we see some interesting news emerging from the industry: a strategic partnership between Microsoft and AT&T means that AT&T’s 5G core will run on Azure!


Kubernetes for exponential growth

Containers and container orchestration with Kubernetes are vital for any tech-based business looking to deliver more features – faster and more affordably. Here is a look at how AlphaSense, one of the top AI start-ups, leveraged Kubernetes to accelerate growth.


From CloudIQ

Optimizing Azure Cosmos DB Performance

Azure Cosmos DB allows Azure platform users to elastically and independently scale throughput and storage across any number of Azure regions worldwide. Here is an article on how to optimize Cosmos DB performance.

How to Debug and Troubleshoot Common Problems in Kubernetes Deployments

Kubernetes deployment issues are not always easy to troubleshoot. In some cases, the errors can be resolved easily, and, in some cases, detecting errors requires us to dig deeper and run various commands to identify and resolve the issues. Here is a guided tutorial to debug applications that are deployed into Kubernetes.
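
As a quick taste of the commands such debugging typically involves (generic examples, not drawn from the tutorial itself):

kubectl get pods                                            # see which pods are failing or pending
kubectl describe pod <pod-name>                             # inspect events, image pulls, and scheduling issues
kubectl logs <pod-name> --previous                          # read logs from the last crashed container
kubectl get events --sort-by=.metadata.creationTimestamp    # review recent cluster events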

End-to-end front-end testing has always been a bit of a pain for developers. Testing is one of the critical final steps of any development project; however, web testing has tested the patience of all developers at some time or another. The modern web testing ecosystem comes with its own set of challenges – from data security, to additional time and expense, to managing the dynamic behavior of contemporary development frameworks. Hence the need to bring automation to the testing process!

Benefits of Automation Testing
  • Automation increases the speed of test execution
  • Automation helps increase test coverage
  • Automation testing is well suited to regression work
  • Automation testing works well when the GUI stays the same while there are a lot of functional changes

When to use Automation Testing?

Here are some scenarios where automation testing is highly recommended:

  • Requirements do not change frequently
  • Load and performance testing with many virtual users
  • Software that is already stable under manual testing
  • Availability of time
  • Large and business-critical projects
  • Projects that need to test the same areas frequently

Automation testing step by step

There are lots of helpful tools for writing automation scripts; however, before using those tools, it’s important to identify the process for test automation.

  • Identify areas within the software to automate
  • Choose the appropriate tool for test automation
  • Write test scripts
  • Develop test suites
  • Execute test scripts
  • Build result reports
  • Find possible bugs or performance issues

List of automation tools:

Test automation frameworks help us improve the quality, speed, and accuracy of the testing process. Here is a list of popular automation tools:

  • Cypress
  • Selenium
  • Protractor
  • Appium (mobile)

Why choose Cypress:

Cypress solves many of the main testing bottlenecks developers face regularly. It is a JavaScript-based end-to-end testing framework that doesn’t use Selenium (the most widely used testing tool) at all. It is built on top of Mocha, a JavaScript test framework that runs in the browser and makes asynchronous testing simple. Cypress automatically waits for DOM elements to load, for elements to become visible, for AJAX calls to finish, and so on, so we don’t need to use implicit and explicit waits.

Another advantage Cypress offers to developers is that it runs directly in the browser with no network communication. The architecture makes testing and development happen simultaneously. It allows developers access to tools, and they can make changes and see them reflected in real-time. Naturally, this lends more precision and speed to the whole process.
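
A small illustration of the automatic waiting described above (the selector and text are invented for this example):

// Fragment of a test: no explicit waits or sleeps are needed.
it('logs in without explicit waits', () => {
  cy.get('#login-button').click();   // Cypress retries until the element exists and is actionable
  cy.contains('Welcome back');       // waits for the text to appear once the AJAX call completes
});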

Features of Cypress:
  • Time travel: Cypress takes snapshots as your tests run.
  • Debuggability: Cypress provides readable errors and stack traces that make it easier to see why a test failed.
  • Automatic waiting: There is no need to use wait or sleep; Cypress automatically waits for your commands and assertions.
  • Spies, stubs, and clocks: verify and control the behavior of functions, server responses, and timers.
  • Screenshots and videos: Cypress automatically takes screenshots when a test case fails and records a video of the complete run when executed from the CLI.

Features of Mocha:

Mocha provides the following benefits:

  • Browser support
  • Async & promises support
  • Test coverage reporting

Advantages of Cypress:
  • Promise support
  • JavaScript testing framework
  • Easy and reliable testing
  • Fast, free, and open-source
  • Easy control over responses, headers, and status codes
  • Helps you find element locators

Installing Cypress:

Installing Cypress is an easy task compared to a Selenium installation. Two commands are used to install Cypress on a machine:

  1. npm init
  2. npm install cypress

The first command creates a “package.json” file, and the second command installs Cypress and all of its dependencies.

Project Folder Structure Details:

The project folder structure is as follows:

*node_modules folder – the directory where npm installs the project’s dependencies.

*package.json file – the file in the app root that lists the libraries to be installed into node_modules when you run “npm install”.

*cypress folder – contains folders such as fixtures, integration, plugins, screenshots, support, and videos. These folders are used as follows:

a. Fixtures – holds external pieces of static data that can be used by your tests.

b. Integration – where you write the test cases (spec files) for your app.

c. Screenshots – stores screenshots taken during your tests.

d. Videos – stores videos recorded during your test runs.

e. Support – holds the common commands file (reusable custom commands).

Write your sample program in Cypress:

Step 1: Open Visual Studio Code on your machine.

Step 2: Create a new Cypress project folder and name it as “cypresse2e”

Step 3: Open the command line and go to the above-created project path.

Step 4: Type the first command from the “Installing Cypress” section above, then wait for it to create the package.json file.

Step 5: After that, type the second command

Step 6: The above task will finish within 2–3 minutes, after creating the cypress and node_modules folders inside the “cypresse2e” folder. The folder will also contain a JSON file.

Step 7: Click the cypress folder under “cypresse2e” in VS Code.

Step 8: The page to automate is described below.

We will use the CloudIQ home page link for this automation.

Step 9: Create a “cypressAutomation.spec.ts” file under the integration folder and write the program described below.
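
A minimal sketch of what such a spec might look like, based on the step-by-step explanation that follows (the URL, link text, and timings are illustrative assumptions):

// cypressAutomation.spec.ts – illustrative reconstruction of the test
describe('CloudIQ home page automation', () => {
  it('navigates to the AWS page and validates the URL', () => {
    cy.visit('https://www.cloudiqtech.com');   // navigate to the "cloudiqtech" site
    cy.wait(10000);                            // wait 10 seconds for the page to load
    cy.contains('AWS').click();                // click the "AWS" link
    cy.url().should('include', 'aws');         // validate that the current page is the AWS page
  });
});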

Program Explanation:

Here is what the test script given above does.

  • Navigate to “cloudiqtech” site.
  • Wait for 10 seconds for the page to load
  • Next click on “AWS” ref link
  • Then navigate to “AWS” page
  • Finally, validate the current page as the AWS page.

Step 10: Open the command terminal, go to your Cypress project path, and run the command mentioned below.
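
A typical way to launch the Cypress Test Runner from the project root (an assumption, since the original command is not reproduced here) is:

npx cypress open    # opens the Cypress Test Runner UI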

Step 11: After waiting for 1–2 minutes, it will open the Cypress Test Runner, as shown below in the screenshot. It lists all the tests – the ones you wrote in your automation test as well as the default example tests.

Step 12: Click your “cypressAutomation.spec.ts” file; this automatically opens the default Chrome browser to run your test and shows a test report in your browser like the one below.

Test Result:

*Three tests passed successfully.

*No tests failed.

*In total, the three tests ran in 30.44 seconds.

*Screenshots were automatically taken during the tests. If you hover over a test case in the results, it displays the screenshot image for that test case.

Welcome to Cloud View!

With the new year 2020 coming closer, the industry is firmly looking towards the future. Cloud news is full of predictions for the year ahead and here’s a quick selection we picked for this week’s reading.

Predictions

Forrester’s cloud computing predictions for 2020

Forrester has an excellent track record of predicting the right cloud trends. And that makes their 2020 cloud computing predictions a MUST-READ. Here is a breakdown from TechRepublic.

Gartner’s top strategic predictions for 2020 and beyond

Last week we put the spotlight on Gartner’s strategic trends; this week we delve further into them to understand how they will affect people, their lives, and their work. Unsurprisingly, AI takes center-stage again.


Industry Insights

Telcos embrace containers

Gartner predicts that over 75% of global companies will run containerized applications by 2022. Kubernetes is the leading container orchestration platform for managing these containers. Here is a look at how Telcos are planning to use it to deploy cloud-native 5G networks.


AWS IoT Day – Eight Powerful New Features

AWS regularly puts out bundled, themed announcements, which makes it easy for us to find relevant information in one place. Here is the one related to AWS IoT Day. Check out eight powerful AWS IoT features, from secure tunneling to Alexa Voice Service integration and more.


Interesting announcements from KubeCon

Over 100 announcements were made at KubeCon; here’s a quick read on the 10 most important ones.


This week at CloudIQ

Kubernetes on Azure: A 2-day workshop for AKS developers

Container technology has revolutionized the DevOps landscape and offers organizations the chance to develop and test applications faster and more cost-effectively. CloudIQ’s 2-day hands-on workshop is designed to give DevOps team members the opportunity to skill-up and learn Kubernetes design, deployment, and management.

Configuring Palo Alto Networks Next-Generation Firewall (NGFW) – A Detailed Guide

Today organizations require an enterprise cyber-security platform, which provides network security, cloud security, endpoint protection, & various related cloud-delivered security services. Palo Alto Networks Next-Generation Firewall (NGFW) fits the bill and here is a detailed guide on configuring it.

As organizations start to create and maintain clusters in AKS (Azure Kubernetes Service), they also need a cloud-based identity and access management service to access other Azure cloud resources and services. Azure Active Directory (AAD) Pod Identity is a service that gives users this control by assigning identities to individual pods.

Without these controls, accounts may get access to resources and services they don’t require. It can also become hard for IT teams to track which set of credentials was used to make changes.

Azure AD Pod Identity is just one small part of the container and Kubernetes management process, and as you delve deeper, you will realize the true power that Kubernetes and containers bring to your DevOps ecosystem.

Here is a more detailed look at how to use AAD pod identity for connecting pods in AKS cluster with Azure Key Vault.

Pod Identity

Integrate your key management system with Kubernetes using pod identity. Secrets, certificates, and keys in a key management system become a volume accessible to pods. The volume is mounted into the pod, and its data is available directly in the container file system for your application.

On an existing AKS cluster –

Deploy Key Vault FlexVolume to your AKS cluster with this command:

  • kubectl create -f https://raw.githubusercontent.com/Azure/kubernetes-keyvault-flexvol/master/deployment/kv-flexvol-installer.yaml
1. Create the Deployment

Run this command to create the aad-pod-identity deployment on an RBAC-enabled cluster:

  • kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/master/deploy/infra/deployment-rbac.yaml

Or run this command to deploy to a non-RBAC cluster:

  • kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/master/deploy/infra/deployment.yaml
2. Create an Azure Identity

Create an Azure managed identity:

Command:- az identity create -g ResourceGroupNameOfAKsService -n aks-pod-identity(ManagedIdentity)

Output:-  

{
  "clientId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "clientSecretUrl": "https://control-westus.identity.azure.net/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/aks_dev_rg_wu/providers/Microsoft.ManagedIdentity/userAssignedIdentities/aks-pod-identity/credentials?tid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx&oid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx&aid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "id": "/subscriptions/xxxxxxxx-xxxx-XXXX-XXXX-XXXXXXXXXXXX/resourcegroups/aks_dev_rg_wu/providers/Microsoft.ManagedIdentity/userAssignedIdentities/aks-pod-identity",
  "location": "westus",
  "name": "aks-pod-identity",
  "principalId": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
  "resourceGroup": "au10515_aks_dev_rg_wu",
  "tags": {},
  "tenantId": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXX",
  "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
}

Assign Cluster SPN Role

Command for getting AKSServicePrincipalID:- az aks show -g <resourcegroup> -n <name> --query servicePrincipalProfile.clientId -o tsv

Command:- az role assignment create --role "Managed Identity Operator" --assignee <AKSServicePrincipalId> --scope <ID of Managed identity>

Assign Azure Identity Roles

Command:- az role assignment create --role Reader --assignee <Principal ID of Managed identity> --scope <KeyVault Resource ID>

Set policy to access keys in your Key Vault

Command:- az keyvault set-policy -n dev-kv --key-permissions get --spn <Client ID of Managed identity>

Set policy to access secrets in your Key Vault

Command:- az keyvault set-policy -n dev-kv --secret-permissions get --spn <Client ID of Managed identity>

Set policy to access certs in your Key Vault

Command:- az keyvault set-policy -n dev-kv --certificate-permissions get --spn <Client ID of Managed identity>

3. Install the Azure Identity

Save this Kubernetes manifest to a file named aadpodidentity.yaml:

apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentity
metadata:
name: <a-idname>
spec:
type: 0
ResourceID: /subscriptions/<subid>/resourcegroups/<resourcegroup>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<name>
ClientID: <clientId>

Replace the placeholders with your user identity values. Set type: 0 for user-assigned MSI or type: 1 for Service Principal.

Finally, save your changes to the file, then create the AzureIdentity resource in your cluster:

kubectl apply -f aadpodidentity.yaml

4. Install the Azure Identity Binding

Save this Kubernetes manifest to a file named aadpodidentitybinding.yaml:

apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentityBinding
metadata:
  name: demo1-azure-identity-binding
spec:
  AzureIdentity: <a-idname>
  Selector: <label value to match>

Replace the placeholders with your values. Ensure that the AzureIdentity name matches the one in aadpodidentity.yaml.

Finally, save your changes to the file, then create the AzureIdentityBinding resource in your cluster:

kubectl apply -f aadpodidentitybinding.yaml

Sample Nginx pod for accessing a Key Vault secret using Pod Identity

Save this sample nginx pod manifest to a file named nginx-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx-flex-kv-podid
    aadpodidbinding: 
  name: nginx-flex-kv-podid
spec:
  containers:
  - name: nginx-flex-kv-podid
    image: nginx
    volumeMounts:
    - name: test
      mountPath: /kvmnt
      readOnly: true
  volumes:
  - name: test
    flexVolume:
      driver: "azure/kv"
      options:
        usepodidentity: "true"         # [OPTIONAL] if not provided, will default to "false"
        keyvaultname: ""               # the name of the KeyVault
        keyvaultobjectnames: ""        # list of KeyVault object names (semi-colon separated)
        keyvaultobjecttypes: secret    # list of KeyVault object types: secret, key or cert (semi-colon separated)
        keyvaultobjectversions: ""     # [OPTIONAL] list of KeyVault object versions (semi-colon separated), will get latest if empty
        resourcegroup: ""              # the resource group of the KeyVault
        subscriptionid: ""             # the subscription ID of the KeyVault
        tenantid: ""            # the tenant ID of the KeyVault
Azure AD Pod Identity – points to remember when implementing it in a cluster
  • Azure AD Pod Identity is currently bound to the default namespace. Deploying an Azure Identity and its binding to other namespaces will not work!
  • Pods from all namespaces can be executed in the context of an Azure Identity deployed to the default namespace (related to the point above)
  • Every pod developer can add the aadpodidbinding label to their pod and use your Azure Identity
  • Azure Identity Binding does not use the default Kubernetes label-selection mechanism

CloudIQ is a leading Cloud Consulting and Solutions firm that helps businesses solve today’s problems and plan the enterprise of tomorrow by integrating intelligent cloud solutions. We help you leverage the technologies that make your people more productive, your infrastructure more intelligent, and your business more profitable. 
