DevOps Interview Questions

Mixed

Prepare for DevOps interviews with questions on CI/CD pipelines, Docker, Kubernetes, monitoring, infrastructure as code, and cloud operations.

15 questions
4 easy
6 medium
5 hard

Continuous Integration (CI) is the practice of automatically building and testing code every time a developer pushes changes, catching issues early. Continuous Delivery (CD) extends this by automatically deploying validated changes to staging or production environments. Together, CI/CD reduces manual errors, shortens feedback loops, and enables teams to release reliable software faster and more frequently.

ci-cd, basics
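As a sketch, a minimal CI pipeline (shown here in GitHub Actions syntax; the workflow name and test commands are illustrative assumptions) runs the build and tests on every push:

```yaml
# .github/workflows/ci.yml -- triggered on every push
name: ci
on: [push]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci     # install exact locked dependencies
      - run: npm test   # any failing test fails the pipeline
```

A CD stage would typically follow as a separate job that deploys only after this job succeeds on the main branch.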

A Docker image is a read-only template that contains the application code, runtime, libraries, and dependencies needed to run an application. A container is a running instance of an image, with its own writable filesystem layer and isolated process space. You can run multiple containers from the same image, each with its own state. Images are built from Dockerfiles and stored in registries.

docker, containers
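A minimal Dockerfile sketch showing how an image layers runtime, dependencies, and code (base image and file names are illustrative):

```dockerfile
# Image = base layer + dependencies + application code
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

`docker build -t myapp:1.0 .` produces the read-only image; `docker run -d --name web1 myapp:1.0` and `docker run -d --name web2 myapp:1.0` then start two independent containers from that one image, each with its own writable layer.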

Kubernetes has a control plane (API server for all communication, etcd for cluster state storage, scheduler for pod placement, controller manager for maintaining desired state) and worker nodes (kubelet agent, container runtime, kube-proxy for networking). Pods are the smallest deployable unit containing one or more containers. Services provide stable networking, and Deployments manage pod replicas and rolling updates.

kubernetes, architecture
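These pieces come together in a typical manifest pair, sketched below: a Deployment declares the desired replica count for the pods, and a Service gives them a stable address (names and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                     # controller manager keeps 3 pods running
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
        - name: web
          image: nginx:1.27
          ports: [{containerPort: 80}]
---
apiVersion: v1
kind: Service                     # stable virtual IP in front of the pods
metadata:
  name: web
spec:
  selector: {app: web}
  ports: [{port: 80}]
```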

Infrastructure as Code manages and provisions infrastructure through machine-readable configuration files rather than manual processes. Tools like Terraform, CloudFormation, and Pulumi allow you to version control infrastructure, ensure consistency across environments, enable reproducible deployments, and support peer review of infrastructure changes. IaC eliminates configuration drift and makes disaster recovery straightforward by recreating environments from code.

iac, automation
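A small Terraform sketch of the idea: the resource lives in version control and is reviewed like application code (the bucket name and tags are hypothetical):

```hcl
terraform {
  required_providers {
    aws = { source = "hashicorp/aws" }
  }
}

# Declarative resource: "terraform apply" converges real
# infrastructure to this definition, eliminating drift.
resource "aws_s3_bucket" "artifacts" {
  bucket = "example-ci-artifacts"   # hypothetical bucket name
  tags   = { env = "staging" }
}
```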

Docker Compose defines and runs multi-container applications on a single host using a YAML file, ideal for local development and simple deployments. Kubernetes orchestrates containers across multiple hosts (a cluster), providing auto-scaling, self-healing, service discovery, rolling updates, and load balancing. Docker Compose is simpler but lacks production-grade features; Kubernetes is more complex but designed for large-scale, highly available deployments.

docker, kubernetes
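A Compose file illustrating the single-host case (service names, ports, and images are illustrative):

```yaml
# docker-compose.yml -- a web app and its database on one host
services:
  web:
    build: .
    ports: ["8000:8000"]
    depends_on: [db]
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # dev only; use a secret store in real setups
    volumes: ["dbdata:/var/lib/postgresql/data"]
volumes:
  dbdata:
```

`docker compose up` starts both services together; everything beyond this single host (scaling, self-healing, cluster scheduling) is where Kubernetes takes over.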

Blue-green deployment maintains two identical production environments: blue (current) and green (new version). Traffic is routed to the blue environment while the green is deployed and tested. Once verified, traffic is switched to green, making it the new production. If issues arise, traffic can be instantly switched back to blue. This provides zero-downtime deployments and fast rollback, at the cost of running two full environments simultaneously.

deployment, strategies
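In Kubernetes, one common sketch of the traffic switch is a Service whose selector is repointed from the blue Deployment's pods to the green ones (the `color` label convention is an illustrative assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
    color: blue        # flip to "green" to cut all traffic over
  ports: [{port: 80}]
```

`kubectl patch service web -p '{"spec":{"selector":{"app":"web","color":"green"}}}'` switches traffic instantly; patching the selector back to `blue` is the rollback.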

A canary deployment gradually rolls out a new version to a small percentage of users or servers while the majority continues using the old version. Metrics like error rates, latency, and business KPIs are monitored during the rollout. If the canary performs well, traffic is progressively shifted; if problems are detected, it is rolled back with minimal user impact. This approach limits the blast radius of potential issues.

deployment, strategies
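A crude canary sketch in plain Kubernetes runs a small second Deployment behind the same Service, so it receives a proportional slice of traffic; precise percentage-based shifting usually needs an ingress or service mesh (names and images are illustrative):

```yaml
# Alongside a 9-replica stable Deployment, 1 canary replica
# receives roughly 10% of traffic via the shared Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-canary
spec:
  replicas: 1
  selector:
    matchLabels: {app: web, track: canary}
  template:
    metadata:
      labels: {app: web, track: canary}   # Service selects on app=web only
    spec:
      containers:
        - name: web
          image: myapp:2.0                # new version under observation
```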

The four golden signals are latency (response time), traffic (request volume), errors (failure rate), and saturation (resource utilization). Additionally, monitor CPU, memory, disk I/O, network throughput, and application-specific metrics like queue depth and cache hit rates. Use tools like Prometheus for metrics collection, Grafana for visualization, and set up alerting with escalation policies for critical thresholds.

monitoring, observability
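For instance, the errors signal can be captured as a Prometheus alerting rule like the sketch below (the `http_requests_total` metric name and 5% threshold are illustrative assumptions):

```yaml
# Prometheus rule file: alert when the 5xx error rate stays above 5%
groups:
  - name: golden-signals
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m                 # must hold for 10 minutes before firing
        labels:
          severity: critical
        annotations:
          summary: "5xx error rate above 5% for 10 minutes"
```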

Container orchestration automates the deployment, scaling, networking, and management of containerized applications across clusters of machines. Without orchestration, managing hundreds or thousands of containers manually becomes impractical. Orchestrators like Kubernetes handle service discovery, load balancing, storage orchestration, automated rollouts and rollbacks, self-healing (restarting failed containers), and secret and configuration management.

containers, orchestration
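Self-healing, concretely, can be a liveness probe: the kubelet restarts any container whose probe keeps failing. A pod-spec fragment as a sketch (the `/healthz` endpoint and port are assumptions about the application):

```yaml
# Fragment of a pod spec: restart the container after 3 failed health checks
containers:
  - name: web
    image: myapp:1.0
    livenessProbe:
      httpGet: {path: /healthz, port: 8080}
      initialDelaySeconds: 10
      periodSeconds: 5
      failureThreshold: 3
```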

GitOps uses Git as the single source of truth for declarative infrastructure and application configurations. Changes to the desired state are made through pull requests, and automated agents (like ArgoCD or Flux) continuously reconcile the actual system state with the state defined in Git. This provides a full audit trail, peer review for infrastructure changes, easy rollback via git revert, and consistent deployment processes.

gitops, automation
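An Argo CD Application sketch showing the reconciliation loop in configuration form (the repository URL, path, and namespaces are hypothetical):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: web
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/infra.git   # Git as source of truth
    targetRevision: main
    path: apps/web
  destination:
    server: https://kubernetes.default.svc
    namespace: web
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```

With this in place, deploying is a merged pull request, and `git revert` is the rollback.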

Horizontal Pod Autoscaler (HPA) adds or removes pod replicas based on CPU, memory, or custom metrics, distributing load across more instances. Vertical Pod Autoscaler (VPA) adjusts the CPU and memory requests/limits of existing pods, giving each pod more or fewer resources. HPA is preferred for stateless applications that scale out easily, while VPA is useful for applications that cannot be easily replicated or need optimized resource allocation.

kubernetes, scaling
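An HPA sketch targeting the CPU metric (the Deployment name and thresholds are illustrative):

```yaml
# Scale the "web" Deployment between 2 and 10 replicas,
# aiming for ~70% average CPU utilization across pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```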

A service mesh is an infrastructure layer that handles service-to-service communication through sidecar proxies deployed alongside each service instance. It provides features like mutual TLS encryption, traffic management, circuit breaking, retries, observability, and access control without requiring application code changes. Service meshes like Istio and Linkerd are beneficial in complex microservice architectures but add operational complexity and resource overhead.

networking, microservices
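As one example of mesh-level traffic management, an Istio VirtualService can split traffic by weight without touching application code (host and subset names are illustrative, and a matching DestinationRule defining the subsets is assumed):

```yaml
# Istio sketch: send 90% of traffic to subset v1, 10% to v2
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web
spec:
  hosts: [web]
  http:
    - route:
        - destination: {host: web, subset: v1}
          weight: 90
        - destination: {host: web, subset: v2}
          weight: 10
```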

Never store secrets in source code or environment variables in plaintext. Use dedicated tools like HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault that provide encryption, access control, audit logging, and automatic rotation. In Kubernetes, use sealed secrets or external secrets operators to sync secrets from a vault. CI/CD pipelines should inject secrets at runtime and mask them in logs.

security, secrets
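Runtime injection in a pipeline can look like this GitHub Actions fragment, as a sketch: the value comes from the platform's secret store, never from the repository (`DEPLOY_TOKEN` and `deploy.sh` are hypothetical names):

```yaml
# CI step fragment: secret injected at runtime and masked in logs
steps:
  - run: ./deploy.sh
    env:
      DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}   # masked automatically in log output
```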

A Deployment manages stateless applications with interchangeable pods, supporting rolling updates and scaling. A StatefulSet manages stateful applications where pods need stable network identities, persistent storage, and ordered deployment and scaling (like databases). A DaemonSet ensures exactly one pod runs on every (or selected) node, used for cluster-wide services like log collectors, monitoring agents, and network plugins.

kubernetes, workloads
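A DaemonSet sketch for the log-collector case: one agent pod per node, mounting the node's log directory (the image and paths are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels: {app: log-agent}
  template:
    metadata:
      labels: {app: log-agent}
    spec:
      containers:
        - name: agent
          image: fluent/fluent-bit:3.0
          volumeMounts:
            - {name: varlog, mountPath: /var/log, readOnly: true}
      volumes:
        - name: varlog
          hostPath: {path: /var/log}   # read logs directly from the node
```

Note there is no `replicas` field: the scheduler places exactly one pod on each (matching) node, including nodes added later.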

Monitoring tracks predefined metrics and alerts when known thresholds are exceeded, answering questions you anticipated. Observability is the ability to understand a system's internal state from its external outputs (logs, metrics, and traces), enabling you to diagnose novel issues you did not anticipate. The three pillars of observability are metrics (quantitative measurements), logs (discrete events), and distributed traces (request flow across services).

observability, monitoring