4 Different Ways to Deploy Private Kubernetes Clusters

Private Kubernetes clusters are becoming more popular because they offer better security, control, and compliance compared to public cloud options.

This means companies can keep sensitive data in-house and retain fine-grained control over their infrastructure.

However, setting up and managing these private clusters can be challenging, especially in large, high-traffic environments.

In this blog, we will help you understand the best ways to set up your own private Kubernetes cluster.

How to Deploy a Private Kubernetes Cluster for Enhanced Control

1. DIY Approach

If you prefer full control over your Kubernetes environment, a DIY approach to deploying private Kubernetes clusters offers the greatest flexibility. The process involves infrastructure provisioning, component installation, networking configuration, and security considerations.

 Infrastructure Provisioning Options

  • Bare-Metal Servers:

– Set up physical servers with necessary hardware and network configurations.

– Install a base operating system (e.g., Ubuntu, CentOS).

– Ensure SSH access and network connectivity.

– Set up a firewall and configure iptables rules (see the ufw sketch after this list).

– Use tools like PXE for network booting to streamline OS installations.

  • Virtual Machines:

   – Use a hypervisor like VMware, VirtualBox, or cloud providers (AWS, GCP, Azure) to create virtual machines.

   – Allocate CPU, memory, and storage resources to each VM.

   – Install a base operating system and ensure SSH access.

   – Consider using Infrastructure as Code (IaC) tools like Terraform to manage VM provisioning.
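As a rough illustration of the firewall step above, here is a ufw sketch opening the default ports a kubeadm control-plane node uses (adjust for your CNI plugin; worker nodes instead need 10250/tcp and the NodePort range 30000-32767/tcp):

  sudo ufw allow 22/tcp          # SSH access
  sudo ufw allow 6443/tcp        # Kubernetes API server
  sudo ufw allow 2379:2380/tcp   # etcd server client API
  sudo ufw allow 10250/tcp       # kubelet API
  sudo ufw allow 10259/tcp       # kube-scheduler
  sudo ufw allow 10257/tcp       # kube-controller-manager
  sudo ufw enable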

 Kubernetes Component Installation:

  1. Install Dependencies:

   – Disable swap on all nodes, since the kubelet requires swap to be disabled:

  sudo swapoff -a

   – Install a container runtime. Note that since Kubernetes 1.24 the kubelet needs a CRI-compatible runtime such as containerd (the dockershim was removed), so check your target version's runtime requirements; on Ubuntu, installing Docker also pulls in containerd:

  sudo apt-get update && sudo apt-get install -y docker.io

   – Install kubeadm, kubelet, and kubectl. The legacy apt.kubernetes.io repository has been decommissioned; current packages are published at pkgs.k8s.io (v1.30 below is an example; substitute your target minor version):

  sudo apt-get update
  sudo apt-get install -y apt-transport-https ca-certificates curl gpg
  sudo mkdir -p -m 755 /etc/apt/keyrings
  curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
  echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
  sudo apt-get update
  sudo apt-get install -y kubelet kubeadm kubectl
  sudo apt-mark hold kubelet kubeadm kubectl
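A quick sanity check that the tooling installed correctly:

  kubeadm version
  kubectl version --client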

  2. Initialize the Control Plane:

   – On the master node, initialize the Kubernetes control plane:

  sudo kubeadm init --pod-network-cidr=10.244.0.0/16

   – Configure kubectl for the root user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

– Save the `kubeadm join` command output as it is required to join worker nodes.

   – Consider setting up a non-root user for Kubernetes administration as a security best practice.
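If the join command from `kubeadm init` gets lost, it can be regenerated at any time on the control-plane node:

  sudo kubeadm token create --print-join-command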

  3. Join Worker Nodes:

   – On each worker node, join the cluster using the command provided by `kubeadm init`:

  sudo kubeadm join <master-node-ip>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
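Back on the control-plane node, confirm that each worker has registered (nodes report Ready only after the CNI plugin from the next section is installed):

  kubectl get nodes -o wide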

Networking Configuration:

  • Install a CNI Plugin:

   – For Flannel (the project has moved from the coreos organization to flannel-io, and the manifest is published with each release):

  kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

   – For Calico (pin a release that suits you; v3.28.0 below is only an example):

  kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml

   – Verify the CNI plugin installation:

  kubectl get pods --all-namespaces

  • Configure Service Mesh (Optional):

   – Install Istio or Linkerd for advanced traffic management and observability.
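As a minimal sketch following Istio's own quick start, the demo profile can be installed with istioctl (the download script fetches the latest release):

  curl -L https://istio.io/downloadIstio | sh -
  cd istio-*/ && export PATH=$PWD/bin:$PATH
  istioctl install --set profile=demo -y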

 Security Considerations:

  • Enable RBAC:

   – RBAC has been enabled by default since Kubernetes 1.6.

   – Create roles and role bindings to control access to resources:

  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    namespace: default
    name: pod-reader
  rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
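A Role grants nothing until it is bound to a subject. A minimal RoleBinding for the pod-reader role above (the user name jane is a placeholder):

  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: read-pods
    namespace: default
  subjects:
  - kind: User
    name: jane # placeholder subject; replace with a real user, group, or service account
    apiGroup: rbac.authorization.k8s.io
  roleRef:
    kind: Role
    name: pod-reader
    apiGroup: rbac.authorization.k8s.io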

  • Pod Security Policies:

   – Define and enforce security policies for pod deployments:

  apiVersion: policy/v1beta1
  kind: PodSecurityPolicy
  metadata:
    name: restricted
  spec:
    privileged: false
    seLinux:
      rule: RunAsAny
    runAsUser:
      rule: MustRunAsNonRoot
    fsGroup:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
      - 'configMap'
      - 'emptyDir'
      - 'persistentVolumeClaim'
      - 'projected'
      - 'secret'
      - 'downwardAPI'
      - 'gitRepo'

Note: Pod Security Policies (PSPs) were deprecated in Kubernetes 1.21 and removed in Kubernetes 1.25. Use Pod Security Admission or Open Policy Agent (OPA) Gatekeeper as alternatives.
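Pod Security Admission, for example, is driven by namespace labels; enforcing the restricted profile on a namespace is a one-liner:

  kubectl label namespace default pod-security.kubernetes.io/enforce=restricted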

2. Deploying Private Clusters Through Managed Kubernetes Services on Public Cloud Platforms

Managed Kubernetes Services on public cloud platforms let you run your applications while the provider operates the control plane behind the scenes.

These services keep you in control of your workloads while the cloud provider handles upgrades, availability, and much of the operational heavy lifting.

Here’s how you can deploy private K8s clusters on Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), and Azure Kubernetes Service (AKS).

Google Kubernetes Engine (GKE)

Google Kubernetes Engine (GKE) is Google Cloud's managed Kubernetes service.

Key Features:

  • Auto-scaling: GKE can automatically add or remove nodes as demand changes.
  • Logging and Monitoring: GKE integrates with Google Cloud's operations suite for logs and metrics.
  • Security: GKE supports private nodes, shielded VMs, and Workload Identity.

Example Deployment:

Here’s how you can set up a GKE cluster using the command line:

# Set variables
PROJECT_ID=my-gcp-project
CLUSTER_NAME=my-gke-cluster
ZONE=us-central1-a

# Authenticate gcloud
gcloud auth login

# Set project
gcloud config set project $PROJECT_ID

# Create a private GKE cluster
gcloud container clusters create $CLUSTER_NAME \
  --zone $ZONE \
  --num-nodes 3 \
  --enable-autoscaling --min-nodes=1 --max-nodes=5 \
  --enable-ip-alias \
  --enable-private-nodes --master-ipv4-cidr 172.16.0.0/28
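Once the cluster is up, fetch credentials so kubectl can talk to it and confirm the nodes are ready:

gcloud container clusters get-credentials $CLUSTER_NAME --zone $ZONE
kubectl get nodes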

Amazon Elastic Kubernetes Service (EKS)

Amazon Elastic Kubernetes Service (EKS) is AWS's managed Kubernetes service.

Key Features:

  • AWS Integration: EKS works natively with other AWS services such as IAM, VPC, and CloudWatch.
  • Managed Control Plane: AWS operates and scales the Kubernetes control plane for you.
  • EKS Fargate: Lets you run pods without provisioning or managing worker nodes.

Example Deployment:

Here’s how you can set up an EKS cluster:

# Install AWS CLI and eksctl
pip install awscli --upgrade
curl --silent --location "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin

# Configure AWS CLI
aws configure

# Create EKS cluster with worker nodes in private subnets
eksctl create cluster \
  --name my-eks-cluster \
  --region us-west-2 \
  --nodegroup-name standard-workers \
  --node-type t3.medium \
  --nodes 3 \
  --nodes-min 1 \
  --nodes-max 4 \
  --node-private-networking \
  --managed
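eksctl writes the new cluster's credentials into your kubeconfig by default; you can also fetch them explicitly with the AWS CLI and verify the nodes:

aws eks update-kubeconfig --region us-west-2 --name my-eks-cluster
kubectl get nodes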

Azure Kubernetes Service (AKS)

Azure Kubernetes Service (AKS) is Microsoft Azure's managed Kubernetes service.

Key Features:

  • Azure Active Directory Integration: AKS uses Azure AD (now Microsoft Entra ID) to manage user access.
  • Developer-Friendly: Works well with tools like Azure DevOps and GitHub.
  • Security: Supports private clusters, Azure Policy, and Kubernetes network policies.

Example Deployment:

Here’s how you can set up an AKS cluster:

# Install Azure CLI
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# Login to Azure
az login

# Set variables
RESOURCE_GROUP=myResourceGroup
CLUSTER_NAME=myAKSCluster

# Create resource group
az group create --name $RESOURCE_GROUP --location eastus

# Create a private AKS cluster
az aks create \
  --resource-group $RESOURCE_GROUP \
  --name $CLUSTER_NAME \
  --node-count 3 \
  --enable-addons monitoring \
  --generate-ssh-keys \
  --enable-aad \
  --enable-private-cluster
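Then pull the cluster credentials into your kubeconfig and verify:

az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME
kubectl get nodes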

3. Bare-Metal Deployment for Private Kubernetes Clusters

Infrastructure Considerations:

  1. Hardware Selection:
    • Pick servers with enough CPU, memory, and storage for your planned workloads.
    • Make sure your hardware works with the operating system and Kubernetes.
  2. Network Fabric Design:
    • Design a strong network layout to keep latency low and throughput high.
    • Set up redundant network paths for failover and high availability.
    • Use network segmentation and VLANs for better security and traffic management.

Practical Steps:

  1. Prepare Bare-Metal Servers:
    • Set up your physical servers with the right hardware (CPU, memory, storage, network interfaces).
    • Install an operating system (e.g., Ubuntu, CentOS).
    • Ensure you can access your servers via SSH and they are network-connected.
    • Set up IP addresses and make sure hostnames can be resolved (see the /etc/hosts sketch below).
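For small clusters without internal DNS, a shared /etc/hosts file is usually enough for hostname resolution; a sketch with placeholder names and addresses:

# /etc/hosts entries replicated on every node (placeholder IPs and hostnames)
192.168.1.10  k8s-master
192.168.1.11  k8s-worker-1
192.168.1.12  k8s-worker-2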
  2. Install Required Software:

Disable Swap:

sudo swapoff -a

Install Docker:

sudo apt-get update && sudo apt-get install -y docker.io

Install Kubernetes Components:

Install kubeadm, kubelet, and kubectl:

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p -m 755 /etc/apt/keyrings
# The legacy apt.kubernetes.io repository is gone; use pkgs.k8s.io (v1.30 is an example version)
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Set Up the Kubernetes Control Plane:

  • On the main server (master node), initialize the control plane:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Configure kubectl for the root user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Join Worker Nodes:

  • On each worker node, join the cluster using the command provided by kubeadm init:
sudo kubeadm join <master-node-ip>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Install a CNI Plugin:

  • Flannel:
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
  • Calico (v3.28.0 is an example version):
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml

Enable Role-Based Access Control (RBAC):

  • Use RBAC to control who can do what in your Kubernetes cluster:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

4. Container Orchestration Platforms for Private Kubernetes Clusters

If you’re looking to deploy private Kubernetes clusters with additional functionalities, tools like OpenShift and Rancher can provide enhanced management features and capabilities. These platforms offer benefits such as integrated CI/CD pipelines, service catalogs, and multi-cluster management, making it easier to handle complex Kubernetes environments.

Recommended Tools:

  • OpenShift:
    • Service Catalogs
    • CI/CD Pipelines
    • Developer Tools
    • Enhanced Security
  • Rancher:
    • Multi-Cluster Management
    • Service Mesh Integration
    • App Catalog
    • Access Control

These tools can streamline your Kubernetes deployments and provide the necessary features to manage and scale your clusters effectively.

Here’s how you can do it on Rancher:

Rancher is an open-source platform for managing Kubernetes clusters across any infrastructure, whether it be on-premises, cloud, or hybrid environments. It provides a full-stack Kubernetes solution with a user-friendly interface and powerful tools for cluster management.

Steps:

1. Install the Rancher Control Plane:

    • Deploy Rancher on a single node or a highly available Kubernetes cluster.
    • Use the Rancher Helm chart to install Rancher:
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system
helm install rancher rancher-latest/rancher --namespace cattle-system --set hostname=<rancher.yourdomain.com>
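Note that recent Rancher releases expect cert-manager to be present in the cluster before Rancher is installed (when Rancher manages its own TLS certificates); a typical sequence, with the chart version left unpinned for brevity, looks like:

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true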

2. Configure Nodes and Join Them to the Cluster:

  • Access the Rancher UI via the installed hostname.
  • Create a new cluster from the Rancher UI and select the infrastructure provider (e.g., AWS, Azure, GCP, or custom nodes).
  • For custom nodes, use the provided Docker command to install Rancher agents and join nodes to the cluster:
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes \
  -v /var/run:/var/run \
  rancher/rancher-agent:v2.x.x \
  --server https://<rancher-server> --token <cluster-token> \
  --ca-checksum <checksum> --address <node-ip> --internal-address <internal-ip> \
  --etcd --controlplane --worker

3. Manage the Cluster:

  • Use the Rancher UI to deploy applications, manage workloads, and configure Kubernetes resources.
  • Utilize built-in tools for monitoring, logging, and security.
  • Rancher integrates with CI/CD pipelines and provides automated application deployments.

Here’s how you can do it on OpenShift:

OpenShift is Red Hat's enterprise Kubernetes platform. It layers developer tooling, integrated CI/CD, and hardened security defaults on top of upstream Kubernetes.

1. Install the OpenShift Control Plane:

  • Provision infrastructure for OpenShift nodes (e.g., using Ansible or the OpenShift Installer).
  • Use the OpenShift installer to deploy the cluster:
openshift-install create cluster --dir=<installation-directory>

2. Configure Nodes and Join Them to the Cluster:

  • OpenShift automatically configures nodes during the installation process.
  • Define the number and types of nodes (e.g., masters, workers) in the install configuration file (install-config.yaml); a trimmed sketch follows this list.
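For reference, a heavily trimmed sketch of what install-config.yaml might contain for an AWS deployment (fields vary by platform; the pull secret and SSH key are placeholders obtained from the Red Hat console and your own keypair):

apiVersion: v1
baseDomain: example.com            # placeholder base domain
metadata:
  name: my-openshift-cluster
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 3
platform:
  aws:
    region: us-west-2
pullSecret: '<pull-secret>'
sshKey: '<ssh-public-key>'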

3. Manage the Cluster:

  • Access the OpenShift web console for a user-friendly interface to manage applications, monitor cluster health, and configure resources.
  • Use oc CLI for command-line management of the cluster:

oc login --server=https://<api-server>:6443

You can then deploy applications using OpenShift Templates or Operators, as sketched below.
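For instance, oc new-app can build and deploy straight from a Git repository (using the sample Node.js app from the OpenShift documentation) and expose it via a route:

oc new-app https://github.com/sclorg/nodejs-ex --name sample-node-app
oc expose service sample-node-app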

 
