Kubeadm setup on an offline server

The following steps can be used to set up kubeadm on an offline server running Ubuntu Server 22.04.2 LTS.

Two Ubuntu servers are required: one connected to the internet (the online machine) and one with no access to external networks, including the internet (the offline machine).

This document describes the steps to install containerd, Kubernetes, Calico, Linkerd, Helm, NGINX, LumenVox, the external services, and the MRCP-API on the offline server.



Validate the prerequisites for Kubernetes:

swap disabled
ip_tables, br_netfilter, and overlay kernel modules loaded
firewall disabled
AppArmor disabled
/etc/sysctl.d/k8s.conf in place
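
These items are only listed here as a checklist; a minimal sketch of the commands commonly used to satisfy them on Ubuntu 22.04 is shown below (the sysctl values are the typical Kubernetes ones, adjust to your environment):

sudo swapoff -a                                  # also comment out any swap entry in /etc/fstab
sudo modprobe overlay && sudo modprobe br_netfilter && sudo modprobe ip_tables
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
ip_tables
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
sudo ufw disable
sudo systemctl disable --now apparmor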



Containerd and Docker Installation:

If you can’t use Docker’s apt repository to install Docker Engine, you can download the deb file for your release and install it manually. You need to download a new file each time you want to upgrade Docker Engine.

Online machine:

Go to https://download.docker.com/linux/ubuntu/dists/ 

Select your Ubuntu version in the list.

Go to pool/stable/ and select the applicable architecture (amd64, armhf, arm64, or s390x).

Download the following deb files for the Docker Engine, CLI, containerd, and Docker Compose packages:

containerd.io_<version>_<arch>.deb
docker-ce_<version>_<arch>.deb
docker-ce-cli_<version>_<arch>.deb
docker-buildx-plugin_<version>_<arch>.deb
docker-compose-plugin_<version>_<arch>.deb

Copy the package files to the offline machine.

Offline machine:
Install the containerd .deb package. Update the path in the following example to the location where you copied the package files.

sudo dpkg -i ./containerd.io_<version>_<arch>.deb 

containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd
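
Optionally, confirm containerd is running before continuing:

sudo systemctl status containerd --no-pager
sudo ctr version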
 


After this, Docker can be installed for the external services and the MRCP-API.

Install Docker on the offline machine:
sudo dpkg -i ./docker-ce_<version>_<arch>.deb \
  ./docker-ce-cli_<version>_<arch>.deb \
  ./docker-buildx-plugin_<version>_<arch>.deb \
  ./docker-compose-plugin_<version>_<arch>.deb
sudo usermod -aG docker $USER
newgrp docker
sudo systemctl start docker
sudo systemctl enable docker


Kubernetes:


Online machine:

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
apt-get download kubeadm kubelet kubectl kubernetes-cni cri-tools conntrack ebtables socat

Copy the .deb files to the offline machine.

Offline machine:

sudo dpkg -i conntrack_1.4.6-2build2_amd64.deb
sudo dpkg -i cri-tools_1.28.0-1.1_amd64.deb
sudo dpkg -i ebtables_2.0.11-4build2_amd64.deb
sudo dpkg -i socat_1.7.4.1-3ubuntu4_amd64.deb
sudo dpkg -i kubernetes-cni_1.2.0-2.1_amd64.deb
sudo dpkg -i kubelet_1.28.12-1.1_amd64.deb
sudo dpkg -i kubectl_1.28.12-1.1_amd64.deb
sudo dpkg -i kubeadm_1.28.12-1.1_amd64.deb
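
A quick version check confirms the packages were installed correctly (optional):

kubeadm version
kubectl version --client
kubelet --version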

Online machine:

kubeadm config images list
example output:

registry.k8s.io/kube-apiserver:v1.28.12
registry.k8s.io/kube-controller-manager:v1.28.12
registry.k8s.io/kube-scheduler:v1.28.12
registry.k8s.io/kube-proxy:v1.28.12
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.10.1


Pull and save each of these images on the online machine, for example:

docker pull registry.k8s.io/kube-apiserver:v1.28.12
docker save registry.k8s.io/kube-apiserver:v1.28.12 > kube-apiserver_v1.28.12.tar
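
Rather than repeating the pull/save pair for each image, a short loop can be used on the online machine (a sketch; it derives the tar file names from the image names, so they may differ slightly from the names shown elsewhere in this document):

for image in $(kubeadm config images list --kubernetes-version v1.28.12); do
  docker pull "$image"
  docker save "$image" > "$(basename "$image" | tr ':' '_').tar"
done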

Copy these 7 tar files to the offline machine.

Offline machine:

sudo ctr -n k8s.io images import etcd_3.5.12-0.tar

and repeat for the other six tar files (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, pause, and coredns).
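
Alternatively, import all of the archives in one pass (a sketch, assuming the tar files are in the current directory):

for f in *.tar; do
  sudo ctr -n k8s.io images import "$f"
done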

confirm they are loaded:

sudo ctr -n k8s.io images ls

Initialize control plane:

sudo kubeadm init --apiserver-advertise-address=<offline_machine-ip> --kubernetes-version=v1.28.12
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Untaint the control-plane node so that workloads can be scheduled on it (replace appletester with your node's hostname):

kubectl taint node appletester node-role.kubernetes.io/control-plane-
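
Optionally, verify the taint was removed and check the node status (the node will not report Ready until Calico is installed in the next step):

kubectl get nodes
kubectl describe node appletester | grep -i taint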


Calico:


Online machine

curl https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml > calico.yaml

Open the yaml file, find the cni image version, then pull and save it:

docker pull docker.io/calico/cni:v3.28.0
docker save  docker.io/calico/cni:v3.28.0 > cni_v3.28.0.tar

do the same for the kube-controllers and node images:

docker pull docker.io/calico/kube-controllers:v3.28.0
docker pull docker.io/calico/node:v3.28.0
docker save docker.io/calico/kube-controllers:v3.28.0 > kube-controllers_v3.28.0.tar
docker save docker.io/calico/node:v3.28.0 > node_v3.28.0.tar

Copy these 3 tar files to the offline machine.

Offline machine:

sudo ctr -n k8s.io images import cni_v3.28.0.tar
sudo ctr -n k8s.io images import kube-controllers_v3.28.0.tar
sudo ctr -n k8s.io images import node_v3.28.0.tar
sudo ctr -n k8s.io images ls | grep calico

Install calico:

kubectl apply -f calico.yaml
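
Optionally, confirm the Calico pods start and the node reaches the Ready state:

kubectl get pods -n kube-system | grep calico
kubectl get nodes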


Linkerd edge-24.8.2:

 

Online machine:

Pull linkerd images:

docker pull cr.l5d.io/linkerd/controller:edge-24.8.2
docker pull cr.l5d.io/linkerd/metrics-api:edge-24.8.2
docker pull cr.l5d.io/linkerd/policy-controller:edge-24.8.2
docker pull cr.l5d.io/linkerd/proxy:edge-24.8.2
docker pull cr.l5d.io/linkerd/tap:edge-24.8.2
docker pull cr.l5d.io/linkerd/web:edge-24.8.2
docker pull docker.io/prom/prometheus:v2.48.1

Save each of the images to a tar file, for example:

 docker save cr.l5d.io/linkerd/controller:edge-24.8.2 > controller_edge-24.8.2.tar
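
The remaining images can be saved with a short loop (a sketch; the prometheus image is handled separately because it comes from a different registry):

for image in controller metrics-api policy-controller proxy tap web; do
  docker save "cr.l5d.io/linkerd/${image}:edge-24.8.2" > "${image}_edge-24.8.2.tar"
done
docker save docker.io/prom/prometheus:v2.48.1 > prometheus_v2.48.1.tar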

Copy the 7 tar files to the offline machine.

Offline machine:

Load each of the 7 images into the containerd k8s.io namespace, for example:

sudo ctr -n k8s.io images import controller_edge-24.8.2.tar

Online machine:
Download the CLI binary for your architecture and validate its sha256 checksum (the arm64 binary is shown here; use the one that matches the offline machine):
https://github.com/linkerd/linkerd2/releases/download/edge-24.8.2/linkerd2-cli-edge-24.8.2-linux-arm64

download Linkerd installer:

curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install > linkerd_cli_installer

Edit the Linkerd installer script so that it works offline (remove the download URLs and checksum validation) and save it as linkerd_cli_installer_offline.

Then make it executable and run it:

chmod +x linkerd_cli_installer_offline
./linkerd_cli_installer_offline

Here is an example of the Linkerd installer script modified for the offline server. Some lines were removed from the original file. Importantly, dstfile must be set to the copied CLI binary (the path and architecture must match the binary you downloaded), for example:

dstfile="/home/sysadmin/containerd-k8s-offline/linkerd/linkerd2-cli-edge-24.8.2-linux-arm64"


#!/bin/sh
set -eu
LINKERD2_VERSION=${LINKERD2_VERSION:-edge-24.8.2}
INSTALLROOT=${INSTALLROOT:-"${HOME}/.linkerd2"}
happyexit() {
  echo ""
  echo "Add the linkerd CLI to your path with:"
  echo ""
  echo "  export PATH=\$PATH:${INSTALLROOT}/bin"
  echo ""
  echo "Now run:"
  echo ""
  echo "  linkerd check --pre                     # validate that Linkerd can be installed"
  echo "  linkerd install --crds | kubectl apply -f - # install the Linkerd CRDs"
  echo "  linkerd install | kubectl apply -f -    # install the control plane into the 'linkerd' namespace"
  echo "  linkerd check                           # validate everything worked!"
  echo ""
  echo "You can also obtain observability features by installing the viz extension:"
  echo ""
  echo "  linkerd viz install | kubectl apply -f -  # install the viz extension into the 'linkerd-viz' namespace"
  echo "  linkerd viz check                         # validate the extension works!"
  echo "  linkerd viz dashboard                     # launch the dashboard"
  echo ""
  echo "Looking for more? Visit https://linkerd.io/2/tasks"
  echo ""
  exit 0
}
OS=$(uname -s)
arch=$(uname -m)
cli_arch=""
case $OS in
  CYGWIN* | MINGW64*)
    OS=windows.exe
    ;;
  Darwin)
    case $arch in
      x86_64)
         cli_arch=""
        ;;
      arm64)
        cli_arch=$arch
        ;;
      *)
        echo "There is no linkerd $OS support for $arch. Please open an issue with your platform details."
        exit 1
        ;;
    esac
    ;;
  Linux)
    case $arch in
      x86_64)
        cli_arch=amd64
        ;;
      armv8)
        cli_arch=arm64
        ;;
      aarch64*)
        cli_arch=arm64
        ;;
      armv*)
        cli_arch=arm
        ;;
      amd64|arm64)
        cli_arch=$arch
        ;;
      *)
        echo "There is no linkerd $OS support for $arch. Please open an issue with your platform details."
        exit 1
        ;;
    esac
    ;;
  *)
    echo "There is no linkerd support for $OS/$arch. Please open an issue with your platform details."
    exit 1
    ;;
esac
dstfile="/home/sysadmin/containerd-k8s-offline/linkerd/linkerd2-cli-edge-24.8.2-linux-amd64"
(
  mkdir -p "${INSTALLROOT}/bin"
  chmod +x "${dstfile}"
  rm -f "${INSTALLROOT}/bin/linkerd"
  ln -s "${dstfile}" "${INSTALLROOT}/bin/linkerd"
)
echo "Linkerd ${LINKERD2_VERSION} was successfully installed 🎉"
echo ""
happyexit
Finally, install Linkerd:
linkerd check --pre                     
linkerd install --crds | kubectl apply -f -
linkerd install --set proxyInit.runAsRoot=true | kubectl apply -f -    
linkerd check                           
linkerd viz install | kubectl apply -f -
linkerd viz check  

Linkerd is now installed and ready. Since there is no internet connection, it is normal to see up to 7 linkerd heartbeat pods in an error state.
These pods try to reach an external Linkerd server over HTTPS to check whether a new version of Linkerd is available; the failures do not affect the operation of Linkerd or the cluster.


Helm

Online machine:

Download the Helm binary:

curl -O https://get.helm.sh/helm-v3.15.3-linux-amd64.tar.gz

Helm also needs to be installed on the online machine, to fetch the NGINX and LumenVox Helm charts.

Decompress the tar file:

tar -zxvf helm-v3.15.3-linux-amd64.tar.gz

Move the helm binary to /usr/local/bin:

sudo mv linux-amd64/helm /usr/local/bin/helm

Helm is now installed on the online machine.

Copy the tar file to the offline machine and repeat the same process to install Helm there.

Offline machine:

Decompress the tar file:

tar -zxvf helm-v3.15.3-linux-amd64.tar.gz

Move the helm binary to /usr/local/bin:

sudo mv linux-amd64/helm /usr/local/bin/helm
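
Optionally, confirm the binary works on the offline machine:

helm version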


Docker Private Registry

A private Docker registry needs to be configured to store the container images used by the NGINX and LumenVox Helm charts.
In this sample configuration, the private registry was set up on the online machine and assigned the endpoint http://my-docker-registry.com:5000.


Online machine:

Deploy a registry server:
the following command creates a registry container listening on port 5000:

docker run -d -p 5000:5000 --restart=always --name registry registry:2

In this sample configuration, an insecure (plain-HTTP) registry was configured by following Docker's insecure-registry instructions:

Step 1:

The following lines must be added to the /etc/docker/daemon.json file (create the file if it does not exist). Be sure to use your private Docker registry name and port:

{
  "insecure-registries" : ["http://my-docker-registry.com:5000"]
}

 

Step 2:

 Save the file and restart docker:

sudo systemctl restart docker

 

Step 3:

Edit the /etc/hosts file to include an entry for the private docker registry, for example:

10.0.0.121  my-docker-registry.com

Offline machine:

Steps 1, 2, and 3 must be performed on the offline machine, too.
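
Optionally, confirm the offline machine can reach the registry (the repository list will be empty until images are pushed):

curl http://my-docker-registry.com:5000/v2/_catalog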



Setup Containerd to use the Private Docker Registry

Containerd by default uses docker.io as its main registry. To allow access to a private Docker registry, the private registry must be defined as a registry mirror and its HTTP URL must be configured as the endpoint.

The example below shows the changes made to /etc/containerd/config.toml (lines 154-155 were added):

144 [plugins."io.containerd.grpc.v1.cri".registry]
145   config_path = ""
146
147 [plugins."io.containerd.grpc.v1.cri".registry.auths]
148
149 [plugins."io.containerd.grpc.v1.cri".registry.configs]
150
151 [plugins."io.containerd.grpc.v1.cri".registry.headers]
152
153 [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
154   [plugins."io.containerd.grpc.v1.cri".registry.mirrors."my-docker-registry.com:5000"]
155     endpoint = ["http://my-docker-registry.com:5000"]

Save the file and restart containerd:

sudo systemctl restart containerd


Nginx - Helm

Online machine:
Add the ingress-nginx repo to Helm:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

Pull/fetch the ingress-nginx Helm chart, decompressed:

helm fetch ingress-nginx/ingress-nginx --untar

Copy the decompressed folder (ingress-nginx) to the offline machine.

Pull the Docker images for NGINX on the online machine, too:

docker pull registry.k8s.io/ingress-nginx/controller:v1.11.1
docker pull registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1

Tag them, and push them to the private registry (in the example below, the original versioning was kept):

docker tag registry.k8s.io/ingress-nginx/controller:v1.11.1 my-docker-registry.com:5000/controller:v1.11.1
docker push my-docker-registry.com:5000/controller:v1.11.1

docker tag registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1 my-docker-registry.com:5000/kube-webhook-certgen:v1.4.1
docker push my-docker-registry.com:5000/kube-webhook-certgen:v1.4.1

Delete the images from the online machine's local Docker image store, both the originals and the tagged copies:

docker image rm registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
docker image rm my-docker-registry.com:5000/kube-webhook-certgen:v1.4.1
docker image rm registry.k8s.io/ingress-nginx/controller:v1.11.1
docker image rm my-docker-registry.com:5000/controller:v1.11.1

confirm the images are now in the private registry:

curl my-docker-registry.com:5000/v2/_catalog

the output should look something like this:
{"repositories":["controller","kube-webhook-certgen"]}
which means the controller and kube-webhook-certgen images are now in the private registry.

confirm the tags added to these images using the following command:

curl my-docker-registry.com:5000/v2/controller/tags/list


output:
{"name":"controller","tags":["v1.11.1"]}

$ curl my-docker-registry.com:5000/v2/kube-webhook-certgen/tags/list
{"name":"kube-webhook-certgen","tags":["v1.4.1"]}

Edit the chart's yaml files to use the private Docker registry configured in the previous steps instead of the default registry.

ingress-nginx/values.yaml file edits:

In values.yaml, make sure the correct repository, tag, and digest are defined for both the controller and the kube-webhook-certgen images. Also set hostNetwork: true.

 


values.yaml:

controller:
  name: controller
  image:
    ## Keep false as default for now!
    chroot: false
    ## registry: registry.k8s.io
    ## image: ingress-nginx/controller
    ## for backwards compatibility consider setting the full image url via the repository value below
    ## use either current default registry/image or repository format or installing chart by providing the values.yaml will fail
    repository: "my-docker-registry.com:5000/controller"
    tag: "v1.11.1"
    digest: sha256:e6439a12b52076965928e83b7b56aae6731231677b01e81818bce7fa5c60161a
    digestChroot: sha256:e84ef3b44c8efeefd8b0aa08770a886bfea1f04c53b61b4ba9a7204e9f1a7edc
    pullPolicy: IfNotPresent
    # www-data -> uid 101
    runAsUser: 101
.
.
.
.
    patchWebhookJob:
      securityContext:
        allowPrivilegeEscalation: false
      resources: {}
    patch:
      enabled: true
      image:
        ## registry: registry.k8s.io
        ## image: ingress-nginx/kube-webhook-certgen
        ## for backwards compatibility consider setting the full image url via the repository value below
        ## use either current default registry/image or repository format or installing chart by providing the values.yaml will fail
        repository: "my-docker-registry.com:5000/kube-webhook-certgen"
        tag: "v1.4.1"
        digest: sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366
        pullPolicy: IfNotPresent
.
.
.
.
  ## Use host ports 80 and 443
  ## Disabled by default
  hostNetwork: true


The following lines were commented out in the ingress-nginx/templates/_helpers.tpl file to avoid validating the ingress-nginx controller's minimum release version:

#{{/*
#Check the ingress controller version tag is at most three versions behind the last release
#*/}}
#{{- define "isControllerTagValid" -}}
#{{- if not (semverCompare ">=0.27.0-0" .Values.controller.image.tag) -}}
#{{- fail "Controller container image tag should be 0.27.0 or higher" -}}
#{{- end -}}
#{{- end -}}
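
With the edits in place, the chart can be installed on the offline machine from the local ingress-nginx directory (a sketch; the release name and namespace shown here are assumptions, adjust them as needed):

helm install ingress-nginx ./ingress-nginx -n ingress-nginx --create-namespace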

LUMENVOX and EXTERNAL SERVICES IMAGE LOAD TO PRIVATE REGISTRY:

LumenVox container images must be added to the private Docker registry using the same procedure described earlier for ingress-nginx:

  1. docker pull <image_original_name>:<original_tag>

  2. docker tag <image_original_name>:<original_tag> private-registry-name/<private_image_name>:<private_tag>

  3. docker push private-registry-name/<private_image_name>:<private_tag>
    (optional steps to remove images from host once they have been pushed to private registry):

  4. docker image rm <image_original_name>:<original_tag>

  5. docker image rm private-registry-name/<private_image_name>:<private_tag>

For the purposes of the apple test server, both the image names and the tags were kept unchanged, as seen in the example below for the lumenvox-api container:

  1. docker pull lumenvox/lumenvox-api:5.3

  2. docker tag lumenvox/lumenvox-api:5.3 my-docker-registry.com:5000/lumenvox-api:5.3

  3. docker push my-docker-registry.com:5000/lumenvox-api:5.3
    (optional steps to remove images from host once they have been pushed to private registry):

  4. docker image rm lumenvox/lumenvox-api:5.3

  5. docker image rm my-docker-registry.com:5000/lumenvox-api:5.3

Then repeat these five steps for each of the 19 LumenVox images listed below (a loop that automates this is sketched after the list):

LumenVox stack images:

lumenvox/lumenvox-api:5.3
lumenvox/session:5.3
lumenvox/reporting-api:5.3
lumenvox/archive:5.3
lumenvox/deployment-portal:5.3
lumenvox/admin-portal:5.3
lumenvox/tts:5.3
lumenvox/license:5.3
lumenvox/vad:5.3
lumenvox/cloud-init-tools:5.3
lumenvox/configuration:5.3
lumenvox/storage:5.3
lumenvox/grammar:5.3
lumenvox/itn:5.3
lumenvox/management-api:5.3
lumenvox/deployment:5.3
lumenvox/asr:5.3
lumenvox/resource:5.3
lumenvox/cloud-logging-sidecar:5.3
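
A loop like the following can automate the pull/tag/push cycle for the whole list (a sketch, assuming the my-docker-registry.com:5000 registry name and the 5.3 tag used throughout this document):

REGISTRY=my-docker-registry.com:5000
for name in lumenvox-api session reporting-api archive deployment-portal admin-portal tts license vad \
    cloud-init-tools configuration storage grammar itn management-api deployment asr resource \
    cloud-logging-sidecar; do
  docker pull "lumenvox/${name}:5.3"
  docker tag  "lumenvox/${name}:5.3" "${REGISTRY}/${name}:5.3"
  docker push "${REGISTRY}/${name}:5.3"
done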

Repeat the same procedure for the external services and mrcp-api images, if required:
MRCP-API:
lumenvox/mrcp-api:5.3

MRCP-CLIENT:
lumenvox/simple_mrcp_client:latest

EXTERNAL-SERVICES:
bitnami/rabbitmq:3.9.16-debian-10-r0
bitnami/redis:7.0.13-debian-11-r26
bitnami/mongodb:5.0.16-debian-11-r2
bitnami/postgresql:13

LumenVox DIAG-TOOLS

lumenvox/diag-tools:jammy-4.2.0

Confirm the images have been pushed to the private docker registry, using the following command:

curl my-docker-registry.com:5000/v2/_catalog

Example output showing all the LumenVox container images, as well as the mrcp-api, external-services, and ingress-nginx images:

sysadmin@appletester:~/backup-appletester/containerd-k8s-offline/lumenvox/images$ curl my-docker-registry.com:5000/v2/_catalog
{"repositories":["admin-portal","archive","asr","grammar","binary-storage","cloud-init-tools","cloud-logging-sidecar","configuration","controller","deployment","deployment-portal","kube-webhook-certgen","license","lumenvox-api","management-api","mongodb","mrcp-api","postgresql","rabbitmq","redis","itn","reporting-api","resource","session","tts","vad"]}

Also, confirm the offline machine can access it, by running the same command on the offline machine.



LUMENVOX - HELM INSTALL

Online machine:

Add the lumenvox helm repository:

helm repo add lumenvox https://lumenvox.github.io/helm-charts
helm repo update

Fetch the LumenVox Helm charts. The following command creates a lumenvox folder in the current directory containing all the charts:

helm fetch lumenvox/lumenvox --untar

Edit the lumenvox/values.yaml file to include the private Docker registry and the right tag for pulling images from it. The tag must match the tag that was used when the images were pushed to the private registry; in this example the tag is ":5.3":

  image:
    repository: "my-docker-registry.com:5000"
    pullPolicy: IfNotPresent
    tag: ":5.3"

Copy the lumenvox directory including the edited values.yaml file to the offline machine:

Offline machine:

At this point the rest of the LumenVox installation is almost the same as on an online server using Helm.
Make sure you create the lumenvox namespace, add the TLS secret and the external-services secrets, and set the required options, such as languages and TTS voices, in the values.yaml file.

The main difference is that the Helm charts are not installed from the Helm repository; the installation is done locally, from the directory that contains the lumenvox folder copied from the online machine:

helm install lumenvox lumenvox -n lumenvox -f values.yaml
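
The rollout can then be watched until all pods reach the Running state (optional):

kubectl get pods -n lumenvox -w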


External Services and MRCP-API:

Online machine:

Clone the git repositories for the external services and mrcp-api, and copy the folders and their contents to the offline machine.

Offline machine:

External Services:

Edit the docker-compose.yaml file to pull images from the private Docker registry instead of the docker.io/bitnami repository, for example:

  mongodb:
    # https://github.com/bitnami/bitnami-docker-mongodb
    image: my-docker-registry.com:5000/mongodb:5.0.16-debian-11-r2

  postgresql:
    # https://github.com/bitnami/bitnami-docker-postgresql
    image: my-docker-registry.com:5000/postgresql:13

  rabbitmq:
    # https://github.com/bitnami/bitnami-docker-rabbitmq
    image: my-docker-registry.com:5000/rabbitmq:3.9.16-debian-10-r0

  redis:
    # https://github.com/bitnami/bitnami-docker-redis
    image: my-docker-registry.com:5000/redis:7.0.13-debian-11-r26

Configure the rest of the .env and docker-compose files as usual, and bring up the containers using:

docker compose up -d

MRCP-API:

Copy the TLS certificate to a certs folder in the mrcp-api/docker directory, then edit the .env file to pull the mrcp-api container image from the private Docker registry instead of Docker Hub:

DOCKER_REGISTRY=my-docker-registry.com:5000/

PRODUCT_VERSION=5.3

Configure the rest of the .env and docker-compose files as usual, and bring up the containers using:

docker compose up -d

Licensing

For more information on licensing setup, go to: https://hub.docker.com/r/lumenvox/license-reporter-tool


Copyright (C) 2001-2024, Ai Software, LLC d/b/a LumenVox