Develop on EKC
Kubernetes is built to deploy, scale, and manage containers. Its design centers on a declarative approach: you define the desired state of your application, and Kubernetes continuously works to keep the actual state matching it. The architecture uses layers to make complex systems easier to handle.
Kubernetes architecture
The diagram below shows the key parts of Kubernetes and how they relate to each other.
The arrows show the cardinality of the relationships. For example, the arrow from Container to Pod means that a Container can only be part of a single Pod, but a Pod can have multiple Containers. Furthermore, a Volume can be mounted by multiple Containers (in the same Pod), and a Container can mount multiple Volumes.
Note that the groupings are not part of the architecture; they are only there to make the diagram easier to read.
Comparison with Docker
In Docker, you build and run containers. In Kubernetes, you deploy and manage containers. Kubernetes is a container orchestrator, which means it can manage containers across multiple hosts. It can also manage other resources, such as storage and networking.
Important concepts
As you delve into Kubernetes, understanding some key concepts can significantly ease your journey. Here is a quick rundown of the terms and ideas you will encounter frequently.
- Pods: In Kubernetes, the Pod is the smallest unit you can deploy. Unlike traditional container setups where each container runs in isolation, a Pod can host one or more containers that work together closely (a multi-container Pod is sketched after this list).
- Declarative Configuration: Kubernetes uses YAML or JSON files for configuration. You define what you want your application's state to be, and Kubernetes figures out how to reach and maintain it. This declarative approach is core to how Kubernetes operates (see the Deployment sketch after this list).
- Self-Healing: If a container fails, Kubernetes can restart it. If a Pod goes down, Kubernetes replaces it and reschedules it on another Node. It is a system built to recover from failure automatically.
- Scaling: Kubernetes offers auto-scaling based on metrics such as CPU usage. You can also manually scale your applications up or down. This flexibility in scaling helps manage workloads effectively (see the autoscaler sketch after this list).
- Service Discovery: Networking is simplified in Kubernetes. It automatically routes traffic to the appropriate containers, even if they are moved around or rescheduled, making it easier to connect the different parts of an application (see the Service sketch after this list).
- Immutable Infrastructure: Rather than updating existing containers in place, Kubernetes replaces them with new ones. This approach simplifies rolling out updates and rolling them back, making the system more reliable.
- Stateful and Stateless Applications: While stateless applications are easy to manage in Kubernetes, stateful applications require specific resources such as StatefulSets and Persistent Volumes (see the StatefulSet sketch after this list).
- Init Containers: Specialized containers that run before the main containers in a Pod. They are used for setup tasks that must finish before the app container starts, such as setting up a database on a Persistent Volume (see the init container sketch after this list).
- ConfigMaps and Secrets: These resources allow you to manage configuration data and sensitive information separately from the container image, improving security and flexibility. They can be easily edited in Rancher (see the ConfigMap and Secret sketch after this list).
- Readiness and Liveness Probes: Health checks for your applications. A readiness probe checks whether an app is ready to receive traffic, and a liveness probe checks whether an app is alive and running as expected (see the probe sketch after this list).
- Logging and Monitoring: Although not part of Kubernetes itself, it is crucial to understand how to collect logs and metrics. Effective logging and monitoring can help you debug issues and optimize performance.
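The sketch below shows a minimal multi-container Pod in which two containers share a Volume, matching the cardinality rules described in the architecture section. The names (web-with-sidecar, shared-data) and the nginx and busybox images are illustrative assumptions rather than a real workload.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar          # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.25           # assumed example image
      volumeMounts:
        - name: shared-data       # the same Volume is mounted by both containers
          mountPath: /usr/share/nginx/html
    - name: log-sidecar
      image: busybox:1.36         # assumed example image
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
  volumes:
    - name: shared-data
      emptyDir: {}                # ephemeral Volume shared within the Pod
```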
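A minimal Deployment sketch illustrating the declarative model: the manifest describes the desired state (three replicas of one container image), and Kubernetes keeps the actual state in line with it. The hello-web name and the image are assumptions for illustration.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web                 # illustrative name
spec:
  replicas: 3                     # desired state: three Pods at all times
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.25       # assumed example image
          ports:
            - containerPort: 80
```

Applying this file (for example with kubectl apply -f deployment.yaml) describes what should run, not how. If a Pod crashes or its Node disappears, the Deployment's controller creates a replacement (self-healing), and changing the image field causes the Pods to be replaced with new ones rather than updated in place (immutable infrastructure).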
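A HorizontalPodAutoscaler sketch that scales the hypothetical hello-web Deployment above on CPU usage. The 2-10 replica range is an assumption, and CPU-based autoscaling also relies on a metrics source such as metrics-server being installed in the cluster.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-web               # the Deployment to scale (assumed name)
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80  # add Pods when average CPU use exceeds 80%
```

Manual scaling is also available, for example with kubectl scale deployment hello-web --replicas=5.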
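A Service sketch that gives the hello-web Pods a stable name and virtual IP. Traffic sent to the Service is routed to whichever Pods currently match the label selector, even after they are rescheduled; names and ports are assumptions.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-web                 # reachable inside the cluster as http://hello-web
spec:
  selector:
    app: hello-web                # matches the labels on the Deployment's Pods
  ports:
    - port: 80                    # port the Service exposes
      targetPort: 80              # port the containers listen on
```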
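A compact StatefulSet sketch showing how a stateful workload gets a stable identity and its own Persistent Volume through volumeClaimTemplates. The postgres image, credentials, storage size, and names are illustrative assumptions.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db                 # headless Service for stable network identity (assumed to exist)
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:16      # assumed example image
          env:
            - name: POSTGRES_PASSWORD
              value: change-me    # illustrative only; a real setup would reference a Secret
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata   # keep data in a subdirectory of the volume
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:           # one PersistentVolumeClaim is created per Pod replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```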
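A Pod sketch with an init container that must run to completion before the main container starts; here it only writes a marker file to a shared Volume, standing in for a real setup task. Images and commands are assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
    - name: init-data
      image: busybox:1.36
      command: ["sh", "-c", "echo initialized > /work/ready"]  # setup step; must succeed first
      volumeMounts:
        - name: work
          mountPath: /work
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "cat /work/ready && sleep 3600"]   # starts only after the init container succeeds
      volumeMounts:
        - name: work
          mountPath: /work
  volumes:
    - name: work
      emptyDir: {}
```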
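A sketch of a ConfigMap and a Secret together with a Pod that consumes them as environment variables, keeping configuration and credentials out of the container image. All keys and values are assumptions.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"               # plain, non-sensitive configuration
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:
  DB_PASSWORD: "change-me"        # sensitive value, stored by the API server in encoded form
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-config
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "env && sleep 3600"]
      envFrom:
        - configMapRef:
            name: app-config      # every key in the ConfigMap becomes an environment variable
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secret
              key: DB_PASSWORD
```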
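Finally, a Pod sketch with both probe types on one container: the readiness probe gates traffic from Services, and the liveness probe triggers a container restart when it keeps failing. Paths, ports, and timings are assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
    - name: web
      image: nginx:1.25           # assumed example image
      ports:
        - containerPort: 80
      readinessProbe:             # Pod receives Service traffic only while this succeeds
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      livenessProbe:              # container is restarted if this keeps failing
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 15
        periodSeconds: 20
```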