
The Ultimate Kubernetes Interview Guide
- Author: Ram Simran G
- Twitter: @rgarimella0124
Kubernetes has become the de facto standard for container orchestration in modern cloud-native environments. As organizations increasingly adopt this powerful platform, the demand for professionals with strong Kubernetes knowledge continues to grow. Whether you’re a DevOps engineer, SRE, or platform engineer, understanding Kubernetes is now essential.
This guide organizes key interview questions by complexity and topic, providing detailed explanations and practical examples to help you demonstrate both theoretical understanding and hands-on experience.
Kubernetes Fundamentals
What is Kubernetes and why is it essential for container orchestration?
Kubernetes (often abbreviated as K8s) is an open-source container orchestration platform that automates the deployment, scaling, management, and networking of containerized applications. Originally developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF), Kubernetes has become essential because it:
- Provides automated scheduling of containerized workloads across clusters
- Ensures high availability through self-healing capabilities
- Enables horizontal scaling based on demand
- Manages application lifecycle and updates
- Abstracts away infrastructure complexity
- Offers declarative configuration and desired state management
Key Components of Kubernetes Architecture
Kubernetes follows a distributed architecture with distinct components that work together:
Control Plane Components:
- API Server: The gateway for all interactions, exposing the Kubernetes API, validating requests, and updating cluster state
- etcd: A highly available key-value store holding the cluster’s configuration and state data, acting as the single source of truth
- Scheduler: Assigns pods to nodes based on resource availability, policy constraints, and workload requirements
- Controller Manager: Runs controllers (e.g., node, replication) to monitor and adjust the cluster state
- Cloud Controller Manager (optional): Integrates with cloud-specific APIs for resources like load balancers and storage
Node-Level (Worker) Components:
- Kubelet: Agent on each node ensuring containers run as specified in pods
- Kube-proxy: Maintains network rules on nodes for pod communication
- Container Runtime: Software (e.g., Docker, containerd) that runs containers
Additional Components:
- Networking plugins (CNI): Implement pod networking
- Storage drivers (CSI): Provide persistent storage capabilities
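On many clusters (kubeadm-based ones, for example), the control plane components themselves run as pods in the kube-system namespace, so you can inspect them directly:
kubectl get pods -n kube-system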
What is a Node in Kubernetes?
A node is a worker machine in Kubernetes that runs containerized applications managed by the control plane. Nodes can be physical servers or virtual machines, and each runs several components:
- Kubelet: Communicates with the API server and manages containers
- Container runtime: Runs the containers (Docker, containerd, etc.)
- Kube-proxy: Maintains network rules for service communication
To view nodes in your cluster:
kubectl get nodes
kubectl describe node <node-name>
Core Kubernetes Resources
What is a Pod in Kubernetes?
A Pod is the smallest deployable unit in Kubernetes, representing one or more containers scheduled together on the same host. Key characteristics include:
- Shared network namespace (same IP address and port space)
- Shared storage volumes
- Co-located and co-scheduled containers
- Ephemeral by design (not meant to be persistent)
Here’s a simple example of a Pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.21
      ports:
        - containerPort: 80
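Assuming the manifest is saved as nginx-pod.yaml, you can create and inspect the Pod with:
kubectl apply -f nginx-pod.yaml
kubectl get pod nginx-pod -o wide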
What is a Deployment in Kubernetes?
A Deployment is a higher-level resource that manages ReplicaSets and provides declarative updates for Pods. It’s ideal for stateless applications and enables:
- Rolling updates and rollbacks
- Scaling capabilities
- Self-healing through replica management
- Version control for application rollouts
Example Deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
          ports:
            - containerPort: 80
          resources:
            limits:
              cpu: '0.5'
              memory: '512Mi'
            requests:
              cpu: '0.2'
              memory: '256Mi'
What is a ReplicaSet?
A ReplicaSet ensures a specified number of pod replicas are running at all times. It monitors pod health and creates replacements if pods fail. While you can use ReplicaSets directly, they’re typically managed via Deployments for better update strategies.
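You can see the ReplicaSets a Deployment creates and manages:
kubectl get replicasets
kubectl describe rs <replicaset-name>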
What is a StatefulSet?
A StatefulSet manages stateful applications, providing:
- Stable network identities (unique hostnames)
- Ordered deployment and scaling
- Stable persistent storage
- Orderly updates and deletions
StatefulSets are ideal for applications like databases or any system requiring stable network identities and persistent storage.
Example StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: 'postgres'
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:14
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: postgres-data
      spec:
        accessModes: ['ReadWriteOnce']
        resources:
          requests:
            storage: 10Gi
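The serviceName field must reference a governing Service, usually a headless one, which gives each replica a stable DNS name (e.g., postgres-0.postgres). A minimal sketch to pair with the StatefulSet above:
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  clusterIP: None # headless: no virtual IP, per-pod DNS records instead
  selector:
    app: postgres
  ports:
    - port: 5432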
What is a DaemonSet?
A DaemonSet ensures that a copy of a specific pod runs on every node (or a selected subset of nodes) in the cluster. It’s used for system-level tasks like:
- Log collection (Fluentd, Logstash)
- Monitoring agents (Prometheus Node Exporter)
- Network proxies or plugins
- Node-level storage providers
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-agent
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
        - name: agent
          image: monitoring-agent:latest
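DaemonSet pods often need tolerations to be scheduled onto tainted nodes (control-plane nodes, for instance); a hedged fragment, added under the pod template’s spec:
tolerations:
  - key: node-role.kubernetes.io/control-plane # older clusters use .../master
    operator: Exists
    effect: NoSchedule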
Networking in Kubernetes
How does Kubernetes handle container networking?
Kubernetes networking follows several key principles:
- Each Pod gets a unique IP address
- Pods on the same node can communicate directly
- Pods on different nodes can communicate without NAT
- Agents on a node can communicate with all pods on that node
This is implemented through Container Network Interface (CNI) plugins like Calico, Flannel, or Cilium, which create an overlay network or use the underlying network infrastructure.
What is a Service in Kubernetes?
A Service is an abstraction that provides a stable endpoint to access pods, regardless of their individual IP addresses or locations. Services offer:
- Load balancing across pods
- Service discovery via DNS
- Internal connectivity for scalable communication
- Stable IP addresses and DNS names
Types of Services:
- ClusterIP: Internal-only service, accessible within the cluster
- NodePort: Exposes the service on a static port on each node’s IP
- LoadBalancer: Provisions an external load balancer (typically cloud provider)
- ExternalName: Maps the service to a DNS name
Example Service definition:
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
  type: ClusterIP
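To confirm which pods a Service is actually routing to, check its endpoints:
kubectl get endpoints my-app-service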
What is the purpose of an Ingress in Kubernetes?
An Ingress manages external HTTP/S access to services, providing:
- URL-based routing
- SSL/TLS termination
- Name-based virtual hosting
- Load balancing
- Custom rules for traffic routing
Example Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port:
                  number: 80
  tls:
    - hosts:
        - myapp.example.com
      secretName: tls-secret
How does Kubernetes handle service discovery?
Kubernetes provides service discovery primarily through DNS:
- Each Service gets a DNS entry of the form <service-name>.<namespace>.svc.cluster.local
- Pods can access services using just the service name within the same namespace
- Environment variables are also injected into pods with service details
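A quick way to verify DNS-based discovery from inside the cluster (a sketch assuming the my-app-service Service from earlier, in the default namespace):
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup my-app-service.default.svc.cluster.local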
Storage and Configuration
What is a Persistent Volume in Kubernetes?
A Persistent Volume (PV) is a cluster-level storage resource that persists beyond the lifecycle of any individual pod. PVs are provisioned by administrators or dynamically via Storage Classes.
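A minimal statically provisioned PV, as a sketch (hostPath is only suitable for single-node test clusters):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  hostPath:
    path: /mnt/data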
What is a PVC in Kubernetes?
A Persistent Volume Claim (PVC) is a request for storage by a user or application. It can specify size, access mode, and storage class requirements. The system will find and bind an appropriate PV to fulfill the claim.
Example PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard
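A pod consumes the claim by referencing it as a volume; a sketch assuming a MongoDB container:
apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  containers:
    - name: mongodb
      image: mongo:6
      volumeMounts:
        - name: data
          mountPath: /data/db
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mongodb-data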
Explain the concept of a ConfigMap
A ConfigMap stores non-sensitive configuration data as key-value pairs, separately from application code. This enables:
- Environment-specific configuration
- Configuration updates without rebuilding containers
- Separation of configuration from code
ConfigMaps can be mounted as volumes or injected as environment variables.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  # keep credentials out of ConfigMaps; store them in a Secret instead
  DATABASE_URL: 'postgres://postgres:5432/db'
  APP_ENV: 'production'
  config.json: |
    {
      "apiVersion": "v1",
      "endpoint": "api.example.com",
      "timeout": 30
    }
Using the ConfigMap:
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: myapp:latest
      envFrom:
        - configMapRef:
            name: app-config
      volumeMounts:
        - name: config-volume
          mountPath: /app/config
  volumes:
    - name: config-volume
      configMap:
        name: app-config
        items:
          - key: config.json
            path: config.json
What are Kubernetes Secrets?
Secrets are used to store sensitive information like API keys, passwords, and tokens. They’re base64 encoded (not encrypted by default) and can be:
- Mounted as volumes
- Used as environment variables
- Used in ImagePullSecrets for private registries
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4= # base64 encoded "admin"
  password: cGFzc3dvcmQxMjM= # base64 encoded "password123"
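You can also create the same Secret imperatively, letting kubectl handle the base64 encoding:
kubectl create secret generic db-credentials \
  --from-literal=username=admin \
  --from-literal=password=password123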
Application Management
How do you scale applications in Kubernetes?
Applications can be scaled:
Horizontally (scaling out):
kubectl scale deployment my-app --replicas=5
Or with a HorizontalPodAutoscaler:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
Vertically (scaling up): By modifying resource requests and limits in the Deployment specification.
What is the purpose of a HorizontalPodAutoscaler (HPA)?
An HPA automatically scales pod replicas based on metrics like:
- CPU/memory utilization
- Custom metrics from applications
- External metrics from monitoring systems
This ensures applications can handle varying traffic loads efficiently.
Explain Kubernetes Rolling Updates and Rollbacks
Rolling Updates: Gradually replace old Pods with new ones to ensure zero downtime. The deployment controller creates a new ReplicaSet with the updated template and scales it up while scaling down the old one.
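The pace of a rolling update can be tuned in the Deployment spec; a sketch of commonly used settings:
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1 # at most one extra pod above the desired replica count
      maxUnavailable: 0 # never dip below the desired replica count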
Update a deployment:
kubectl set image deployment/my-app my-container=my-image:v2
Rollbacks: Revert to a previous version if issues arise:
kubectl rollout undo deployment/my-app
Or to a specific revision:
kubectl rollout undo deployment/my-app --to-revision=2
View rollout history:
kubectl rollout history deployment/my-app
What are blue-green and canary deployments?
Blue-Green Deployment: Maintain two identical environments (blue = current, green = new). Once the green environment is tested and ready, traffic is switched from blue to green.
Canary Deployment: Gradually route a small percentage of traffic to the new version. If successful, gradually increase traffic until the new version handles 100%.
Example of a basic canary deployment using two deployments:
# Stable version (roughly 90% of traffic)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: my-app
      version: stable
  template:
    metadata:
      labels:
        app: my-app
        version: stable
    spec:
      containers:
        - name: my-app
          image: my-app:v1
---
# Canary version (roughly 10% of traffic)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
      version: canary
  template:
    metadata:
      labels:
        app: my-app
        version: canary
    spec:
      containers:
        - name: my-app
          image: my-app:v2
---
# Service routing to both deployments (selects only on the shared app label)
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
Health Checks and Reliability
What is the purpose of a readiness probe?
A readiness probe checks if a pod is ready to receive traffic. If it fails, Kubernetes excludes the pod from service load balancing until it passes. This prevents premature traffic routing to initializing pods.
Example readiness probe:
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app-container
      image: myapp:1.0
      readinessProbe:
        httpGet:
          path: /health
          port: 8080
        initialDelaySeconds: 15
        periodSeconds: 10
        failureThreshold: 3
What is a liveness probe?
A liveness probe checks if a container is still running correctly. If it fails, Kubernetes will restart the container to recover from issues like application deadlocks.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3
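Probes also support exec and tcpSocket handlers in addition to httpGet; a minimal exec-based sketch:
livenessProbe:
  exec:
    command: ['cat', '/tmp/healthy'] # healthy as long as the file exists
  initialDelaySeconds: 5
  periodSeconds: 5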
What is the purpose of a Pod Disruption Budget (PDB)?
A PDB defines the minimum number of pods that must remain available during voluntary disruptions (like node maintenance). This ensures application availability during cluster maintenance.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: app-pdb
spec:
  minAvailable: 2 # or use maxUnavailable: 1
  selector:
    matchLabels:
      app: my-app
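You can check how much disruption headroom a budget currently allows with:
kubectl get pdb app-pdb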
Security and Access Control
How do you secure access to the Kubernetes API server?
The API server can be secured through:
- Authentication mechanisms:
  - X.509 client certificates
  - Bearer tokens
  - OpenID Connect tokens
  - Service account tokens
  - Authentication webhooks
- Authorization methods:
  - RBAC (Role-Based Access Control)
  - ABAC (Attribute-Based Access Control)
  - Node authorization
  - Webhook authorization
Explain RBAC in Kubernetes
Role-Based Access Control defines permissions through:
- Roles/ClusterRoles: Define permissions for resources
- RoleBindings/ClusterRoleBindings: Associate roles with users or groups
Example RBAC configuration:
# Role for pod viewing in the default namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: ['']
    resources: ['pods']
    verbs: ['get', 'watch', 'list']
---
# Binding the role to a user
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User
    name: jane
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
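You can verify what a subject is allowed to do with kubectl auth can-i:
kubectl auth can-i list pods --as jane --namespace default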
How do you handle secrets in DevOps workflows?
Best practices for managing Kubernetes secrets:
- Use a secrets management solution (HashiCorp Vault, AWS Secrets Manager, etc.)
- Encrypt secrets at rest (enable etcd encryption)
- Use RBAC to restrict access to secrets
- Consider external secret operators for integration with external secret stores
- Rotate secrets regularly
- Avoid exposing secrets in logs or environment variables when possible
Advanced Topics
What are Labels and Selectors in Kubernetes?
Labels are key-value pairs attached to resources, while selectors are used to filter and group resources based on labels.
Example use cases:
- Service routing to specific pods
- Deployment targeting specific pods
- Node selection for pod scheduling
- Resource organization and queries
# Pod with labels
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
  labels:
    app: myapp
    tier: frontend
    environment: production
spec:
  containers:
    - name: app-container
      image: myapp:1.0
---
# Service selecting pods with those labels
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  selector:
    app: myapp
    tier: frontend
  ports:
    - port: 80
      targetPort: 8080
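Selectors are equally useful on the command line; both equality-based and set-based queries work:
kubectl get pods -l app=myapp,tier=frontend
kubectl get pods -l 'environment in (production, staging)'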
Explain pod anti-affinity in Kubernetes
Pod anti-affinity defines rules to prevent pods from co-locating on the same node. This improves fault tolerance by distributing pods across nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - redis
              topologyKey: 'kubernetes.io/hostname'
      containers:
        - name: redis
          image: redis:6.2
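If strict spreading would leave pods unschedulable (e.g., more replicas than nodes), the preferred variant expresses the same intent as a soft rule; a sketch:
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: redis
          topologyKey: 'kubernetes.io/hostname'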
How do you troubleshoot a Pod that is stuck in the “Pending” state?
When troubleshooting a pending pod:
- Check events with kubectl describe pod <pod-name>
- Look for resource constraints (insufficient CPU/memory)
- Check if nodes have taints that prevent scheduling
- Verify PVC binding status if pods use persistent storage
- Check node capacity and conditions
- Examine scheduler logs for detailed information
Common issues include:
- Insufficient cluster resources
- PVC not binding to a PV
- Node selectors or affinity rules that can’t be satisfied
- Pod quota limits reached
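A typical first pass over these checks might look like:
kubectl describe pod <pod-name>
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl get pvc
kubectl describe nodes | grep -A 5 'Allocated resources'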
What is GitOps, and how does it improve deployments?
GitOps is a deployment methodology that uses Git as the single source of truth for declarative infrastructure and applications. It improves deployments by:
- Providing version control for infrastructure changes
- Enabling automatic reconciliation between Git state and cluster state
- Creating an audit trail of all changes
- Supporting rollbacks through Git history
- Improving collaboration through pull requests and code reviews
Tools like ArgoCD and Flux implement GitOps for Kubernetes.
Conclusion
Mastering Kubernetes requires understanding its architecture, components, and how they work together. This guide has covered the most common interview questions, but the technology is constantly evolving. Keep learning and experimenting with new features and best practices.
Remember that interviewers are often more interested in your problem-solving approach and understanding of fundamental concepts than memorized answers. Be prepared to discuss real-world scenarios, challenges you’ve faced, and how you’ve applied Kubernetes principles to solve them.
Cheers,
Sim