Kubernetes is the reigning market leader in container orchestration. Any organization working with the container ecosystem is either already using Kubernetes or considering it. However, despite the undoubted ease and speed Kubernetes brings to the container ecosystem, it also requires specialized expertise to deploy and manage.
Many organizations consider the DIY approach to Kubernetes. If you have an in-house IT team with the requisite experience, or if your requirements are large enough to justify the cost of hiring a dedicated Kubernetes team, then an internal Kubernetes strategy could certainly be beneficial.
However, if you don’t fall into the category mentioned above, then managed Kubernetes is the smartest and most cost-effective way ahead. With professionals in the picture, you can be assured of a long-term strategy, seamless implementation, and dedicated ongoing service.
Kubernetes solution providers offer a wide range of services: fully managed deployments, bare-bones implementations, preconfigured Kubernetes environments on SaaS models, and training for your in-house staff.
Look at your operational needs and your budget and explore the market for Kubernetes services options before you pick the service and the digital partner that ticks all your boxes.
Meanwhile, do look at our tutorial on troubleshooting Kubernetes deployments.
Kubernetes deployment issues are not always easy to troubleshoot. In some cases, the errors can be resolved easily; in others, detecting them requires digging deeper and running various commands to identify and resolve the underlying issues.
The first step is to list all pods after installing your application. The following command lists all pods in all namespaces:
kubectl get pods -A
If you find any issues in the pod status, you can then use the kubectl describe, kubectl logs, and kubectl exec commands to get more detailed information.
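As a quick illustrative sketch (the pod names and statuses below are made up, not from a real cluster), you can filter the `kubectl get pods -A` output down to pods that are not in a healthy state:

```shell
# Made-up sample of what `kubectl get pods -A` might print; in a real
# cluster you would pipe the live command output instead of this sample.
sample_output='NAMESPACE     NAME         READY   STATUS             RESTARTS   AGE
default       web-7f9c     1/1     Running            0          3d
default       api-5b2d     0/1     ImagePullBackOff   0          10m
kube-system   coredns-ab   1/1     Running            1          30d'

# Keep only pods whose STATUS (column 4) is neither Running nor Completed.
not_running=$(echo "$sample_output" \
  | awk 'NR > 1 && $4 != "Running" && $4 != "Completed" {print $1 "/" $2 ": " $4}')
echo "$not_running"
```

Any pod this prints is a candidate for the kubectl describe and kubectl logs steps below.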
A status of ImagePullBackOff (or ErrImagePull) indicates that your pod could not run because the image could not be pulled from the container registry. To confirm this, run the kubectl describe command with the pod identifier to display the details.
kubectl describe pod <pod-identifier>
This command will provide more information about the issue. You can also try pulling the image manually to verify that it exists and that the tag is correct:
docker pull <image-name:tag>
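The Events section of the kubectl describe output is usually the quickest signal for why a pull failed. A minimal sketch over a made-up describe excerpt (the image name and event messages are hypothetical):

```shell
# Hypothetical Events excerpt from `kubectl describe pod` output.
events='Events:
  Type     Reason   Message
  Normal   Pulled   Container image "myrepo/api:v1" already present
  Warning  Failed   Failed to pull image "myrepo/api:v2": manifest not found
  Warning  Failed   Error: ImagePullBackOff'

# Count failed-pull events; anything above 0 points at a registry, tag,
# or credential problem rather than an application problem.
pull_failures=$(echo "$events" | grep -c 'Failed to pull image')
echo "$pull_failures"
```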
A status such as CrashLoopBackOff indicates your pod has been scheduled to a worker node but cannot keep running on that machine. To confirm this, run the kubectl describe command with the pod identifier to display the details.
kubectl describe pod <pod-identifier> -n <namespace>
The most common causes of this issue are an application process that exits on startup, a failing liveness probe, or an out-of-memory (OOMKilled) termination.
A Pending status indicates your pod could not be scheduled on a node for various reasons, such as resource constraints (insufficient CPU or memory) or volume-mounting issues. To confirm this, run the kubectl describe command with the pod identifier to display the details.
kubectl describe pod <pod-identifier> -n <namespace>
This command will provide more information about the issue. The most common causes are insufficient cluster resources, node taints the pod does not tolerate, and unbound PersistentVolumeClaims.
Sometimes the scheduled pods are crashing or unhealthy. Run kubectl logs to find the root cause:
kubectl logs <pod_identifier> -n <namespace>
If your pod has multiple containers, run the following command to find the root cause:
kubectl logs <pod_identifier> -c <container_name> -n <namespace>
If your container has previously crashed, you can access the previous container’s crash log with:
kubectl logs --previous <pod_identifier> -c <container_name> -n <namespace>
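High restart counts are the telltale sign of a crash loop. As an illustrative sketch (the pod names and the threshold of 3 are made up), you can flag pods restarting repeatedly from `kubectl get pods` output:

```shell
# Made-up `kubectl get pods -n <namespace>` output; pipe the live command
# output here in a real cluster.
sample='NAME       READY   STATUS             RESTARTS   AGE
web-7f9c   1/1     Running            0          3d
job-a1b2   0/1     CrashLoopBackOff   7          25m'

# RESTARTS is column 4; flag anything above an arbitrary threshold of 3.
crash_looping=$(echo "$sample" | awk 'NR > 1 && $4 > 3 {print $1}')
echo "$crash_looping"
```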
If your pod is running but shows a 0/1 ready state (or 0/2, if your pod has multiple containers), you need to verify its readiness. Check the health check (readiness probe) in this case.
The most common causes are a failing readiness probe, an application that is slow to start, or a probe pointing at the wrong port or path. The following commands help identify the root cause:
kubectl logs <pod_identifier> -c <container_name> -n <namespace>
kubectl describe pod <pod_identifier> -n <namespace>
In some cases, the pods are running but the output of the application is incorrect. In this case, run the following commands to find the root cause.
kubectl logs <pod_identifier> -c <container_name> -n <namespace>
kubectl logs <pod_identifier> -c <container_name> --tail <n-lines> -n <namespace>
kubectl exec -it <pod_identifier> -c <container_name> -n <namespace> -- /bin/bash
After you get into the container, run commands like curl, ps, or ls to troubleshoot the issue.
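A few illustrative first checks once inside the container (these run in any POSIX shell; the commented-out curl line, its port, and its path are assumptions, not from the original):

```shell
# Is the application process actually running?
ps aux | head -n 5

# Are the expected files and volume mounts present?
root_listing=$(ls /)
echo "$root_listing" | head -n 3

# Hit the app directly, bypassing the Service (port/path are assumptions):
# curl -sS http://localhost:8080/healthz
```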
In some cases, the pods are working as expected but cannot be reached through their Services. The most common causes of this issue are a Service selector that does not match the pod labels, incorrect Service ports, or DNS resolution failures. The following commands help narrow this down.
kubectl get svc
kubectl describe svc <svc-name>
kubectl get endpoints
kubectl get pods --selector=name={name},{label-name}={label-value}
kubectl get endpoints
kubectl exec -it <pod_identifier> -c <container_name> -- /bin/bash
nslookup <service-name>
kubectl run -it --rm <name> --image=yauritux/busybox-curl -n <namespace>
curl http://<servicename>
telnet <service-ip> <service-port>
nslookup <servicename>
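One common finding from the `kubectl get endpoints` check above: an empty ENDPOINTS column means the Service's label selector matches no pods. A sketch over made-up output (the Service names are hypothetical):

```shell
# Made-up `kubectl get endpoints` output; '<none>' in column 2 means no
# pods matched the Service's label selector.
endpoints='NAME   ENDPOINTS        AGE
web    10.0.0.5:8080    2d
api    <none>           2d'

# Print Services that currently route to nothing.
selectorless=$(echo "$endpoints" | awk 'NR > 1 && $2 == "<none>" {print $1}')
echo "$selectorless"
```

For any Service this flags, compare its selector (kubectl describe svc) against the pod labels (kubectl get pods --show-labels).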
© 2023 CloudIQ Technologies. All rights reserved.