This guide covers the entire process of installing and configuring a Kubernetes cluster in an enterprise environment, with a particular focus on security and best practices.
1. Architecture and prerequisites
Target architecture
- 1 master node
- 2 worker nodes
- Calico overlay network
- OS: Debian 12 (Bookworm)
- Environment: VirtualBox
Minimum recommended hardware configuration
Master node
- CPU: 2 vCPU minimum
- RAM: 4 GB minimum
- Disk: 50 GB minimum
Worker nodes
- CPU: 4 vCPU minimum
- RAM: 8 GB minimum
- Disk: 100 GB minimum
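Since the target environment is VirtualBox, the VMs can also be provisioned from the command line. The following is a minimal sketch using VBoxManage; the VM name, sizes, and host-only network are illustrative and should be adapted to your setup.
# Create a host-only network for inter-node communication (typically vboxnet0)
VBoxManage hostonlyif create
# Create and size the master VM (repeat with larger values for the workers)
VBoxManage createvm --name k8s-master --ostype Debian_64 --register
VBoxManage modifyvm k8s-master --cpus 2 --memory 4096
# NIC1 via NAT for internet access, NIC2 on the host-only network for node-to-node traffic
VBoxManage modifyvm k8s-master --nic1 nat --nic2 hostonly --hostonlyadapter2 vboxnet0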
Initial system configuration (on all nodes)
# Initial system update
sudo apt update && sudo apt upgrade -y
sudo apt install -y apt-transport-https ca-certificates curl gnupg lsb-release systemd-timesyncd
# Enabling the NTP service
sudo systemctl enable systemd-timesyncd
sudo systemctl start systemd-timesyncd
# Disable swap (required by Kubernetes)
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
# Loading the necessary kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# Setting up the network for Kubernetes
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
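Before moving on, it is worth confirming that the modules are loaded and the sysctl values have been applied.
# Quick verification of the kernel prerequisites
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward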
2. Installation of basic components
On all nodes, install containerd.
sudo apt install -y containerd
# Generation of the default configuration file
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
# Enable the systemd cgroup driver (to match the kubelet's cgroup driver)
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
# Add the Kubernetes signing key and the official package repository
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
# Install the Kubernetes packages and pin their version
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
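A quick check confirms that containerd is running and that the Kubernetes tools are installed at the expected version.
# Verify the container runtime and the Kubernetes tooling
sudo systemctl is-active containerd
kubeadm version
kubectl version --client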
3. Cluster initialization
Initialize the cluster on the master node, specifying the Calico pod network CIDR and the control-plane endpoint (replace MASTER_IP with the master's IP address).
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --control-plane-endpoint="MASTER_IP:6443" --upload-certs
# Configure kubectl for the non-root user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Installing Calico
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
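The Calico pods take a minute or two to start; in the standard manifest the calico-node pods carry the k8s-app=calico-node label, and the control-plane node should report Ready once the CNI is up.
# Wait for the Calico pods to become Ready, then check the node status
kubectl get pods -n kube-system -l k8s-app=calico-node -w
kubectl get nodes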
On each worker node, run the kubeadm join command that was printed when the master was initialised.
sudo kubeadm join MASTER_IP:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
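If the original join command has been lost or the token has expired (tokens are valid for 24 hours by default), a new one can be generated on the master.
# Regenerate a join command with a fresh token
sudo kubeadm token create --print-join-command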
4. Post-installation configuration
On the master node, check that the cluster is operating correctly.
kubectl get nodes
kubectl get pods -A
To set up a local storage solution, deploy the local-path-provisioner and mark it as the default StorageClass.
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
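To confirm that the default StorageClass works, a small throwaway PVC can be created and then deleted; the PVC name below is illustrative. Note that local-path volumes stay Pending until a pod actually mounts them (WaitForFirstConsumer binding mode).
# Create a test PVC against the default StorageClass
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc test-pvc
kubectl delete pvc test-pvc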
5. Securing the cluster
On the master, create a limited-access user by generating and signing a certificate.
# Generation of the key and certificate request (CSR)
openssl genrsa -out mbogning.key 2048
openssl req -new -key mbogning.key -out mbogning.csr -subj "/CN=mbogning/O=team1"
# Signing the certificate with the cluster CA
sudo openssl x509 -req -in mbogning.csr \
-CA /etc/kubernetes/pki/ca.crt \
-CAkey /etc/kubernetes/pki/ca.key \
-CAcreateserial \
-out mbogning.crt -days 365
# Create a dedicated namespace (e.g. production) and define restricted roles
kubectl create namespace production
# Creation of a role limiting actions on pods
kubectl create role pod-reader --verb=get,list,watch --resource=pods --namespace=production
kubectl create rolebinding pod-reader-binding --role=pod-reader --user=mbogning --namespace=production
# Create a kubeconfig context for the user
kubectl config set-credentials mbogning \
--client-certificate=mbogning.crt \
--client-key=mbogning.key
kubectl config set-context mbogning-context \
--cluster=kubernetes \
--namespace=production \
--user=mbogning
# Implementation of Network Policies
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF
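The restricted account can be verified with kubectl auth can-i, using impersonation from the admin account: the user should be able to list pods in the production namespace, but not create deployments.
# Verify the RBAC rules for the new user
kubectl auth can-i list pods --namespace=production --as=mbogning
kubectl auth can-i create deployments --namespace=production --as=mbogning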
6. Monitoring and logging
On the master node, install Helm, which makes it easy to install and manage applications on Kubernetes.
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
helm version
Setting up monitoring with Prometheus and Grafana
Add the dedicated Helm repository and deploy the monitoring stack in the monitoring namespace.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
kubectl create namespace monitoring
helm install prometheus prometheus-community/kube-prometheus-stack \
--namespace monitoring \
--set prometheusOperator.createCustomResource=true \
--set grafana.enabled=true
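The Grafana admin password is generated by the chart and stored in a secret; with the release named prometheus as above, the secret should be called prometheus-grafana (following the <release>-grafana convention).
# Retrieve the generated Grafana admin password
kubectl get secret -n monitoring prometheus-grafana -o jsonpath="{.data.admin-password}" | base64 --decode; echo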
Setting up logging with Loki
For a lightweight logging solution integrated with Grafana, install Loki and Promtail.
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
kubectl create namespace logging
helm install loki grafana/loki-stack \
--namespace logging \
--set promtail.enabled=true \
--set loki.persistence.enabled=true \
--set loki.persistence.size=10Gi \
--set grafana.enabled=false
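To confirm that Loki is accepting traffic, its readiness endpoint can be queried through a temporary port-forward; the service name loki is the one created by the loki-stack chart.
# In one terminal, forward the Loki service port
kubectl port-forward -n logging svc/loki 3100:3100
# In another terminal, query the readiness endpoint (should return "ready")
curl http://localhost:3100/ready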
Configuring Grafana to use Loki as a Data Source
Create a ConfigMap to integrate Loki into Grafana.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: loki-datasource
  namespace: monitoring
  labels:
    grafana_datasource: "1"
data:
  loki-datasource.yaml: |-
    apiVersion: 1
    datasources:
    - name: Loki
      type: loki
      url: http://loki.logging.svc.cluster.local:3100
      access: proxy
      isDefault: false
EOF
Restart the Grafana deployment to apply the changes.
kubectl rollout restart deployment -n monitoring prometheus-grafana
Access to monitoring interfaces
Temporary method (Port-Forward)
For quick access, use the following command.
# Access to Grafana
kubectl port-forward -n monitoring svc/prometheus-grafana 3000:80
# Access to Prometheus
kubectl port-forward -n monitoring svc/prometheus-kube-prometheus-prometheus 9090:9090
Then go to:
- Grafana: http://localhost:3000 (default credentials: admin/prom-operator)
- Prometheus: http://localhost:9090
Permanent access via Ingress and HTTPS
1. Installing the NGINX Ingress Controller
For a production solution, deploy the NGINX Ingress controller and cert-manager to manage TLS certificates.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--create-namespace \
--set controller.service.type=NodePort \
--set controller.service.nodePorts.http=30080 \
--set controller.service.nodePorts.https=30443
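Check that the controller is running and that its service exposes the expected NodePorts (30080/30443).
# Verify the ingress controller and its NodePort service
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx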
2. Installing cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--set installCRDs=true
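cert-manager runs three deployments (controller, webhook and cainjector); all of them should be Running before any issuers are created.
# Verify the cert-manager components
kubectl get pods -n cert-manager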
3. Creating a Let’s Encrypt ClusterIssuer
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@votredomaine.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
EOF
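The issuer should report Ready once its ACME account has been registered with Let's Encrypt.
# Check the ClusterIssuer status
kubectl get clusterissuer letsencrypt-prod
kubectl describe clusterissuer letsencrypt-prod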
4. Deployment of Ingress Rules for Grafana and Prometheus
- Ingress for Grafana
cat <<-EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - grafana.cluster.local
    secretName: grafana-tls
  rules:
  - host: grafana.cluster.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prometheus-grafana
            port:
              number: 80
EOF
- Ingress for Prometheus
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus-ingress
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - prometheus.cluster.local
    secretName: prometheus-tls
  rules:
  - host: prometheus.cluster.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prometheus-kube-prometheus-prometheus
            port:
              number: 9090
EOF
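Once both Ingress objects are applied, cert-manager requests the TLS certificates; the resources can be inspected as follows.
# Verify the Ingress rules and the certificates issued by cert-manager
kubectl get ingress -n monitoring
kubectl get certificate -n monitoring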
DNS configuration
Modify the /etc/hosts file on the machines that will access the cluster.
Add the following entry: 192.168.X.X grafana.cluster.local prometheus.cluster.local
Replace 192.168.X.X with the IP address of the node where the Ingress controller is exposed.
Access Grafana at:
https://grafana.cluster.local:30443
Log in using the default credentials (admin/prom-operator).
Access Prometheus at:
https://prometheus.cluster.local:30443
7. Maintenance and backup
On the master node, install the etcd-client tool and set up regular backups of the cluster state.
sudo apt install -y etcd-client
sudo mkdir -p /backup
sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
snapshot save /backup/etcd-snapshot-$(date +%Y%m%d).db
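Since the goal is a regular backup rather than a one-off snapshot, the same command can be scheduled from cron. A minimal sketch follows; the schedule, the /etc/cron.d file name and the retention policy are illustrative (percent signs must be escaped in cron entries).
# Example: daily etcd snapshot at 02:00
echo '0 2 * * * root ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save /backup/etcd-snapshot-$(date +\%Y\%m\%d).db' | sudo tee /etc/cron.d/etcd-backup
# Verify the integrity of an existing snapshot
sudo ETCDCTL_API=3 etcdctl --write-out=table snapshot status /backup/etcd-snapshot-$(date +%Y%m%d).db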
8. Validation and testing
Run the following commands to check health and configuration.
kubectl get nodes
kubectl get pods -A
kubectl get componentstatuses   # deprecated since v1.19, kept for a quick control-plane overview
Verification of Monitoring and Logging Services
kubectl get pods -n monitoring
kubectl get pods -n logging
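A quick end-to-end smoke test is to deploy a throwaway workload and remove it once it is Running; the deployment name is illustrative.
# Deploy, verify, and clean up a test workload
kubectl create deployment smoke-test --image=nginx
kubectl rollout status deployment/smoke-test
kubectl delete deployment smoke-test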
Documentation
- Official Kubernetes documentation
- Calico documentation
- Grafana and Prometheus documentation
- CIS Kubernetes Benchmark guide
Conclusion
This comprehensive guide has enabled you to:
- Deploy a robust Kubernetes cluster under Debian 12 with a master/worker architecture suited to enterprise environments.
- Ensure inter-node communication thanks to a careful VirtualBox configuration and the use of Calico.
- Secure the cluster by setting up restricted access via certificates, RBAC and network policies.
- Implement a monitoring and logging solution integrated with Prometheus, Grafana and Loki, as well as secure access to interfaces via Ingress and HTTPS.
- Plan maintenance with regular Etcd backups.