Kubeadm setup on an offline server
Introduction
Modern software deployment typically relies on active internet connectivity. The containerized version of LumenVox is built on standard open-source tools, including Kubernetes and Docker, which normally require an internet connection to pull the images and dependencies needed for a cluster setup.
However, many enterprise environments operate under strict security policies that block external internet access. To support these air-gapped environments, an alternative installation strategy is required to deploy a functional LumenVox server.
Scope of Installation
This document provides a comprehensive procedure for installing the following components in an isolated network:
Runtimes & Orchestration: Docker, Containerd, and Kubernetes
Networking & Service Mesh: Calico, Linkerd, and Ingress-nginx
Package Management: Helm
Infrastructure Services: Docker Private Registry and External Services (MongoDB, PostgreSQL, RabbitMQ, and Redis)
LumenVox Stack: LumenVox, MRCP-API, and MRCP-Client
Environment Requirements
To facilitate this offline installation, the procedure utilizes a two-server approach:
Online Server: A Linux system connected to the internet to download and stage all required assets.
Offline Server: A secured Linux system with no external network access where the production environment is installed.
While this guide is compatible with Red Hat or Ubuntu, the examples provided are based on Red Hat Enterprise Linux (RHEL) 9.5.
Server Information
| Server Type | Server Name | Server IP |
|---|---|---|
| Online Server | rhel-online | 172.18.2.75 |
| Offline Server | rhel-offline | 172.18.2.76 |
Please ensure that you have curl and rsync installed on both servers.
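If either tool is missing, it can be installed from the standard repositories while connectivity is available, for example on the online server:
sudo dnf install -y curl rsync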
Online Server Preparation
System Prerequisites
Kubernetes requires specific system settings to manage container networking and memory efficiently.
Disable Swap Space
sudo swapoff -a
sudo sed -i '/swap/ s/^\(.*\)$/#\1/g' /etc/fstab
Configure Kernel Modules
The following kernel modules are required for the Kubernetes pod network (Calico) to function correctly.
sudo tee /etc/modules-load.d/k8s.conf <<EOF
ip_tables
overlay
br_netfilter
EOF
sudo modprobe ip_tables
sudo modprobe overlay
sudo modprobe br_netfilter
Network & Security Settings
Adjust the system’s network filtering and disable the firewall to allow inter-pod communication within the cluster.
# Enable bridged traffic and IP forwarding
sudo tee /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl settings without reboot
sudo sysctl --system
# Disable Firewall and set SELinux to Permissive
sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
Online Asset Staging
Docker and Containerd
Create a lumenvox directory to store the files. In this example, we are saving the files in /lumenvox.
mkdir /lumenvox && cd /lumenvox
mkdir docker-offline && cd docker-offline
# Add Docker Repository
sudo dnf config-manager --add-repo https://download.docker.com/linux/rhel/docker-ce.repo
# Install Docker
sudo dnf install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Start and Enable Docker
sudo systemctl enable --now docker
sudo usermod -aG docker $USER
newgrp docker
# Download all packages and dependencies
sudo dnf download --resolve --alldeps \
  docker-ce \
  docker-ce-cli \
  containerd.io \
  docker-buildx-plugin \
  docker-compose-plugin
Kubernetes
Before installing Kubernetes, you must point your package manager to the official community repositories hosted at pkgs.k8s.io.
Note: This configuration specifically targets v1.31. Replace v1.31 in the URL if you require a different version.
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/repodata/repomd.xml.key
EOF
Download RPMs and Dependencies
Use the following command to download the Kubernetes binaries and their necessary support tools into a local directory without installing them:
# Create and enter the staging directory
mkdir -p /lumenvox/k8s-offline && cd /lumenvox/k8s-offline
# Download packages and all required dependencies
sudo dnf install --downloadonly --downloaddir=. \
  kubeadm kubelet kubectl cri-tools socat conntrack ebtables ethtool
Download the required images for Kubernetes v1.31 and save them as .tar archives.
mkdir -p /lumenvox/k8s-images && cd /lumenvox/k8s-images
docker pull registry.k8s.io/kube-apiserver:v1.31.14
docker save registry.k8s.io/kube-apiserver:v1.31.14 > kube-apiserver:v1.31.14.tar
docker pull registry.k8s.io/kube-controller-manager:v1.31.14
docker save registry.k8s.io/kube-controller-manager:v1.31.14 > kube-controller-manager:v1.31.14.tar
docker pull registry.k8s.io/kube-scheduler:v1.31.14
docker save registry.k8s.io/kube-scheduler:v1.31.14 > kube-scheduler:v1.31.14.tar
docker pull registry.k8s.io/kube-proxy:v1.31.14
docker save registry.k8s.io/kube-proxy:v1.31.14 > kube-proxy:v1.31.14.tar
docker pull registry.k8s.io/coredns/coredns:v1.11.3
docker save registry.k8s.io/coredns/coredns:v1.11.3 > coredns:v1.11.3.tar
docker pull registry.k8s.io/pause:3.10
docker save registry.k8s.io/pause:3.10 > pause:3.10.tar
docker pull registry.k8s.io/etcd:3.5.24-0
docker save registry.k8s.io/etcd:3.5.24-0 > etcd:3.5.24-0.tar
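As an optional alternative to the individual pull/save commands above, a short loop produces the same .tar archives. This is a minimal sketch assuming the image versions listed above:
#!/bin/bash
# Pull and save the Kubernetes control-plane images listed above.
IMAGES=(
  registry.k8s.io/kube-apiserver:v1.31.14
  registry.k8s.io/kube-controller-manager:v1.31.14
  registry.k8s.io/kube-scheduler:v1.31.14
  registry.k8s.io/kube-proxy:v1.31.14
  registry.k8s.io/coredns/coredns:v1.11.3
  registry.k8s.io/pause:3.10
  registry.k8s.io/etcd:3.5.24-0
)
for IMAGE in "${IMAGES[@]}"; do
  docker pull "$IMAGE"
  # Derive the archive name from the last path component, e.g. kube-apiserver:v1.31.14.tar
  docker save "$IMAGE" > "$(basename "$IMAGE").tar"
done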
Downloading the required images for Calico
mkdir -p /lumenvox/calico-offline && cd /lumenvox/calico-offline
# Downloading the installation manifest
curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml
# List the image information
grep image: calico.yaml | awk '{print $2}' | sort -u
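# The output should list the three Calico images pulled below, e.g.:
#   docker.io/calico/cni:v3.27.0
#   docker.io/calico/kube-controllers:v3.27.0
#   docker.io/calico/node:v3.27.0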
# Pull the required images and save them as .tar archive
docker pull docker.io/calico/cni:v3.27.0
docker save calico/cni:v3.27.0 > cni:v3.27.0.tar
docker pull docker.io/calico/kube-controllers:v3.27.0
docker save calico/kube-controllers:v3.27.0 > kube-controllers:v3.27.0.tar
docker pull docker.io/calico/node:v3.27.0
docker save calico/node:v3.27.0 > node:v3.27.0.tar
Downloading crictl
mkdir -p /lumenvox/crictl-offline && cd /lumenvox/crictl-offline
curl -LO https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.31.0/crictl-v1.31.0-linux-amd64.tar.gz
sudo tar zxvf crictl-v1.31.0-linux-amd64.tar.gz -C /usr/local/bin
Downloading linkerd
mkdir -p /lumenvox/linkerd-offline && cd /lumenvox/linkerd-offline
curl -O https://assets.lumenvox.com/kubeadm/linkerd.tar
tar -xvf linkerd.tar
Downloading helm
mkdir -p /lumenvox/helm-offline && cd /lumenvox/helm-offline
curl -O https://get.helm.sh/helm-v3.19.2-linux-amd64.tar.gz
tar -zxvf helm-v3.19.2-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm repo add lumenvox https://lumenvox.github.io/helm-charts
helm repo update
helm fetch lumenvox/lumenvox --untar
cd /lumenvox/helm-offline/lumenvox
vi values.yaml
cd /lumenvox
curl -O https://raw.githubusercontent.com/lumenvox/containers-quick-start/master/values.yaml
Below is a sample shell script to pull and save the LumenVox v6.3 images and the external services images. Save the script as “download_lv_images.sh” and execute it to pull the LumenVox images. This will save the images to a single lv_images-offline.tar.gz file.
#!/bin/bash
IMAGES=(
"lumenvox/lumenvox-api:6.3"
"lumenvox/session:6.3"
"lumenvox/reporting-api:6.3"
"lumenvox/archive:6.3"
"lumenvox/deployment-portal:6.3"
"lumenvox/admin-portal:6.3"
"lumenvox/neural-tts:6.3"
"lumenvox/license:6.3"
"lumenvox/vad:6.3"
"lumenvox/cloud-init-tools:6.3"
"lumenvox/configuration:6.3"
"lumenvox/storage:6.3"
"lumenvox/grammar:6.3"
"lumenvox/itn:6.3"
"lumenvox/management-api:6.3"
"lumenvox/deployment:6.3"
"lumenvox/asr:6.3"
"lumenvox/resource:6.3"
"lumenvox/cloud-logging-sidecar:6.3"
"lumenvox/mrcp-api:6.3"
"lumenvox/simple_mrcp_client:latest"
"lumenvox/diag-tools:jammy-4.2.0"
"lumenvox/license-reporter-tool:latest"
"docker.io/rabbitmq:4.1.1"
"docker.io/redis:8.0.3"
"docker.io/mongo:8.0.17"
"docker.io/postgres:17.4-alpine3.21"
)
SAVE_DIR="/lumenvox/lv_images-offline"
mkdir -p "$SAVE_DIR"
for IMAGE in "${IMAGES[@]}"; do
echo "----------------------------------------"
echo "Processing: $IMAGE"
if docker pull "$IMAGE"; then
# Sanitize filename: replace / and : with _
FILE_NAME=$(echo "$IMAGE" | tr '/:' '_')
echo "Saving to $SAVE_DIR/${FILE_NAME}.tar"
docker save -o "$SAVE_DIR/${FILE_NAME}.tar" "$IMAGE"
else
echo "ERROR: Failed to pull $IMAGE. Skipping..."
fi
done
echo "----------------------------------------"
echo "Compressing all images into one bundle..."
tar czvf lv_images-offline.tar.gz -C /lumenvox lv_images-offline
echo "Done! Final bundle: lv_images-offline.tar.gz"Below is a sample shell script to pull and save the LumenVox model files. Save the script as “download_lv_models.sh” and execute the script to pull the LumenVox model files. This will save the files to the /lumenvox/lv_models-offline folder.
#!/bin/bash
# Directory to save files
DOWNLOAD_DIR="/lumenvox/lv_models-offline"
mkdir -p "$DOWNLOAD_DIR"
# List of URLs to download
URLS=(
"https://assets.lumenvox.com/model-files/asr/asr_encoder_model_en.manifest"
"https://assets.lumenvox.com/model-files/asr/asr_decoder_model_en_us.manifest"
"https://assets.lumenvox.com/model-files/asr/asr_decoder_model_en_gb.manifest"
"https://assets.lumenvox.com/model-files/asr/asr_lang_model_en_us.manifest"
"https://assets.lumenvox.com/model-files/asr/asr_lib_model_en_us.manifest"
"https://assets.lumenvox.com/model-files/asr/multilingual_confidence_model.manifest"
"https://assets.lumenvox.com/model-files/asr/dist_package_model_asr.manifest"
"https://assets.lumenvox.com/model-files/tts/tts_base_lang_data.manifest"
"https://assets.lumenvox.com/model-files/dnn/backend_dnn_model_p.manifest"
"https://assets.lumenvox.com/model-files/dnn/dist_package_model_en.manifest"
"https://assets.lumenvox.com/model-files/neural_tts/neural_tts_en_us_megan-4.0.0.manifest"
"https://assets.lumenvox.com/model-files/neural_tts/dist_package_model_neural_tts.manifest"
"https://assets.lumenvox.com/model-files/itn/itn_dnn_model_en.manifest"
"https://assets.lumenvox.com/model-files/asr/4.1.0/multilingual_confidence_model-4.1.0.tar.gz"
"https://assets.lumenvox.com/model-files/tts/tts_base_lang_data.tar.gz"
"https://assets.lumenvox.com/model-files/dnn/1.0.0/backend_dnn_model_p-1.0.0.tar.gz"
"https://assets.lumenvox.com/model-files/asr/asr_lang_model_en_us.tar.gz"
"https://assets.lumenvox.com/model-files/asr/4.1.1/asr_encoder_model_en-4.1.1.tar.gz"
"https://assets.lumenvox.com/model-files/neural_tts/4.0.0/neural_tts_en_us_megan-4.0.0.tar.gz"
"https://assets.lumenvox.com/model-files/asr/4.1.0/asr_decoder_model_en_us-4.1.0.tar.gz"
"https://assets.lumenvox.com/model-files/asr/4.1.0/asr_decoder_model_en_gb-4.1.0.tar.gz"
"https://assets.lumenvox.com/model-files/neural_tts/2.0.0/dist_package_model_neural_tts-2.0.0.tar.gz"
"https://assets.lumenvox.com/model-files/asr/4.2.0/dist_package_model_asr-4.2.0.tar.gz"
"https://assets.lumenvox.com/model-files/dnn/1.0.3/dist_package_model_en-1.0.3.tar.gz"
"https://assets.lumenvox.com/model-files/asr/1.0.0/asr_lib_model_en_us-1.0.0.tar.gz"
"https://assets.lumenvox.com/model-files/itn/3.0.1/itn_dnn_model_en-3.0.1.tar.gz"
)
# Download each file
for URL in "${URLS[@]}"; do
FILE_NAME=$(basename "$URL")
echo "Downloading $FILE_NAME..."
curl -fLo "${DOWNLOAD_DIR}/${FILE_NAME}" "$URL" || echo "Failed to download $URL"
done
echo "✅ All downloads attempted. Files are in: $DOWNLOAD_DIR"Downloading the external services, mrcp-api and simple_mrcp_client
mkdir -p /lumenvox/services-offline && cd /lumenvox/services-offline
git clone https://github.com/lumenvox/mrcp-api.git
git clone https://github.com/lumenvox/mrcp-client.git
git clone https://github.com/lumenvox/external-services.git
cd external-services
curl -O https://raw.githubusercontent.com/lumenvox/external-services/master/docker-compose.yaml
curl -O https://raw.githubusercontent.com/lumenvox/external-services/master/rabbitmq.conf
curl -O https://raw.githubusercontent.com/lumenvox/external-services/master/.env
Downloading ingress-nginx
mkdir -p /lumenvox/ingress-nginx-offline && cd /lumenvox/ingress-nginx-offline
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm fetch ingress-nginx/ingress-nginx --untar
docker pull registry.k8s.io/ingress-nginx/controller:v1.12.4
docker save registry.k8s.io/ingress-nginx/controller:v1.12.4 > controller:v1.12.4.tar
docker pull registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.2
docker save registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.2 > kube-webhook-certgen:v1.5.2.tar
Docker Private Registry
A Docker private registry is a container image server that your organization controls. Instead of pulling and pushing images to a public service like Docker Hub, you store them in your own registry, allowing only authorized users and systems to access them. We will use docker compose to set up a Docker private registry to store the images and install the LumenVox and Ingress-Nginx Helm charts.
cd /lumenvox/docker-offline
docker pull registry:2
docker save registry:2 -o registry.tar
mkdir registry
cd registry
mkdir data
Create a docker-compose.yaml file that persists the registry data to the ./data folder.
sudo tee /lumenvox/docker-offline/registry/docker-compose.yaml <<EOF
services:
  registry:
    image: registry:2
    container_name: private-registry
    ports:
      - "5000:5000"
    volumes:
      - ./data:/var/lib/registry
    restart: always
EOF
docker compose up -d
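The registry should now respond locally. As a quick check, a freshly created registry returns an empty repository list:
curl http://localhost:5000/v2/_catalog
# Expected response: {"repositories":[]}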
sudo tee /etc/docker/daemon.json <<EOF
{
"insecure-registries" : ["my-docker-registry.com:5000"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
Add the registry hostname to the /etc/hosts file. Replace 172.18.2.75 with the actual IP address of the online server.
172.18.2.75 my-docker-registry.com
cd /lumenvox/ingress-nginx-offline
docker tag registry.k8s.io/ingress-nginx/controller:v1.12.4 my-docker-registry.com:5000/controller:v1.12.4
docker push my-docker-registry.com:5000/controller:v1.12.4
docker tag registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.2 my-docker-registry.com:5000/kube-webhook-certgen:v1.5.2
docker push my-docker-registry.com:5000/kube-webhook-certgen:v1.5.2
curl my-docker-registry.com:5000/v2/_catalog
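The catalog should now include the two images that were just pushed, with a response similar to:
{"repositories":["controller","kube-webhook-certgen"]}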
cd /lumenvox

Below is a sample shell script to load the saved LumenVox images, retag them, and push them to the private Docker registry.
#!/bin/bash
# Registry config
REGISTRY="my-docker-registry.com:5000"
IMAGE_DIR="/lumenvox/lv_images-offline"
# Ensure the registry string doesn't end with a slash for clean concatenation
REGISTRY="${REGISTRY%/}"
for TAR in "$IMAGE_DIR"/*.tar; do
echo "----------------------------------------------------------"
echo "Processing $TAR..."
# Capture the full name:tag from the docker load output
IMAGE_FULL_NAME=$(docker load -i "$TAR" | awk '/Loaded image:/ { print $3 }')
if [ -z "$IMAGE_FULL_NAME" ]; then
echo "Error: Failed to extract image name from $TAR"
continue
fi
echo "Found image: $IMAGE_FULL_NAME"
# 1. Remove 'docker.io/' prefix if it exists
CLEAN_NAME="${IMAGE_FULL_NAME#docker.io/}"
# 2. Remove 'lumenvox/' prefix if it exists
# This turns 'lumenvox/mrcp-api:6.3' into 'mrcp-api:6.3'
CLEAN_NAME="${CLEAN_NAME#lumenvox/}"
TARGET_IMAGE="${REGISTRY}/${CLEAN_NAME}"
echo "Tagging as: $TARGET_IMAGE"
docker tag "$IMAGE_FULL_NAME" "$TARGET_IMAGE"
echo "Pushing: $TARGET_IMAGE"
docker push "$TARGET_IMAGE"
done
echo "----------------------------------------------------------"
echo "Done."curl my-docker-registry.com:5000/v2/_catalog
Offline Server Preparation
System Prerequisites
Kubernetes requires specific system settings to manage container networking and memory efficiently.
Disable Swap Space
sudo swapoff -a
sudo sed -i '/swap/ s/^\(.*\)$/#\1/g' /etc/fstab
Configure Kernel Modules
The following kernel modules are required for the Kubernetes pod network (Calico) to function correctly.
sudo tee /etc/modules-load.d/k8s.conf <<EOF
ip_tables
overlay
br_netfilter
EOF
sudo modprobe ip_tables
sudo modprobe overlay
sudo modprobe br_netfilter
Network & Security SettingsAdjust the system’s network filtering and disable the firewall to allow inter-pod communication within the cluster.
# Enable bridged traffic and IP forwarding
sudo tee /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl settings without reboot
sudo sysctl --system
# Disable Firewall and set SELinux to Permissive
sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
Update the /etc/hosts file so the offline server can resolve the private registry:
sudo vi /etc/hosts

172.18.2.76 rhel-offline
172.18.2.75 my-docker-registry.com
We use rsync to copy the staged folders and files between the servers; it must be installed on both. Alternatively, scp can be used to copy the files.
sudo mkdir /lumenvox
rsync -avzP user@remote_host:/path/to/remote/folder /path/to/local/destination/
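For example, to pull the entire staging directory from the online server used in this guide (adjust the username for your environment):
rsync -avzP user@172.18.2.75:/lumenvox/ /lumenvox/
# scp alternative:
# scp -r user@172.18.2.75:/lumenvox/* /lumenvox/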
Docker and Containerd
Installing Docker and Containerd
cd /lumenvox/docker-offline
sudo dnf install *.rpm --disablerepo=* --skip-broken
sudo systemctl enable --now docker
sudo systemctl enable --now containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
sudo sed -i "s|sandbox = 'registry.k8s.io/pause:3.10.1'|sandbox = 'registry.k8s.io/pause:3.10'|g" /etc/containerd/config.toml
sudo sed -i "s|config_path = '/etc/containerd/certs.d:/etc/docker/certs.d'|config_path = '/etc/containerd/certs.d'|g" /etc/containerd/config.toml
sudo mkdir -p /etc/containerd/certs.d/my-docker-registry.com:5000
# Create the hosts.toml file
cat <<EOF | sudo tee /etc/containerd/certs.d/my-docker-registry.com:5000/hosts.toml
server = "http://my-docker-registry.com:5000"

[host."http://my-docker-registry.com:5000"]
  capabilities = ["pull", "resolve"]
  skip_verify = true
EOF
sudo systemctl restart containerd
sudo usermod -aG docker $USER
newgrp docker
sudo tee /etc/docker/daemon.json <<EOF
{
"insecure-registries": ["my-docker-registry.com:5000"]
}
EOF
sudo systemctl restart docker

To list the contents of the private Docker registry on the online server:
curl my-docker-registry.com:5000/v2/_catalog
Kubernetes
cd /lumenvox/k8s-offline/
sudo dnf install *.rpm --disablerepo=* --skip-broken
sudo systemctl enable --now kubelet
sudo systemctl enable --now containerd
cd /lumenvox/k8s-images
sudo ctr -n k8s.io images import coredns:v1.11.3.tar
sudo ctr -n k8s.io images import etcd:3.5.24-0.tar
sudo ctr -n k8s.io images import kube-apiserver:v1.31.14.tar
sudo ctr -n k8s.io images import kube-controller-manager:v1.31.14.tar
sudo ctr -n k8s.io images import kube-proxy:v1.31.14.tar
sudo ctr -n k8s.io images import kube-scheduler:v1.31.14.tar
sudo ctr -n k8s.io images import pause:3.10.tar
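As an optional check, confirm the images landed in the k8s.io namespace that kubelet uses:
sudo ctr -n k8s.io images ls | grep registry.k8s.io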
sudo kubeadm init --apiserver-advertise-address=172.18.2.76 --kubernetes-version=v1.31.14
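Note: the stock calico.yaml manifest defaults to the 192.168.0.0/16 pod network. If your environment needs the pod CIDR recorded at cluster creation, a variant of the command above (a suggestion, not a required flag in this procedure) would be:
sudo kubeadm init --apiserver-advertise-address=172.18.2.76 --kubernetes-version=v1.31.14 --pod-network-cidr=192.168.0.0/16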
Set up the kubectl CLI for the user
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get node
# Remove the control-plane taint so workloads can be scheduled on this single node
kubectl taint node <node-name> node-role.kubernetes.io/control-plane-
The NotReady status is perfectly normal at this stage. This is because the Container Network Interface (Calico) has not been installed yet.
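At this stage, kubectl get node typically returns output similar to the following (illustrative):
NAME           STATUS     ROLES           AGE   VERSION
rhel-offline   NotReady   control-plane   1m    v1.31.14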
Calico
cd /lumenvox/calico-offline
sudo ctr -n k8s.io images import kube-controllers:v3.27.0.tar
sudo ctr -n k8s.io images import node:v3.27.0.tar
sudo ctr -n k8s.io images import cni:v3.27.0.tar
kubectl apply -f calico.yaml

The node should now be in Ready status.
Crictl
cd /lumenvox/crictl-offline
sudo tar zxvf crictl-v1.31.0-linux-amd64.tar.gz -C /usr/local/bin
sudo chmod +x /usr/local/bin/crictl
Linkerd
cd /lumenvox/linkerd-offline
sudo chmod +x linkerd_cli_installer_offline.sh
sudo ctr -n k8s.io images import controller:edge-24.8.2.tar
sudo ctr -n k8s.io images import metrics-api:edge-24.8.2.tar
sudo ctr -n k8s.io images import policy-controller:edge-24.8.2.tar
sudo ctr -n k8s.io images import prometheus:v2.48.1.tar
sudo ctr -n k8s.io images import proxy:edge-24.8.2.tar
sudo ctr -n k8s.io images import proxy-init:v2.4.1.tar
sudo ctr -n k8s.io images import tap:edge-24.8.2.tar
sudo ctr -n k8s.io images import web:edge-24.8.2.tar
./linkerd_cli_installer_offline.sh
export PATH=$PATH:~/.linkerd2/bin
linkerd check --pre
linkerd install --crds | kubectl apply -f -
linkerd install | kubectl apply -f -
linkerd check
linkerd viz install | kubectl apply -f -
kubectl delete cronjob linkerd-heartbeat -n linkerd
Helm
cd /lumenvox/helm-offline
tar -zxvf helm-v3.19.2-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
Create the lumenvox namespace
kubectl create ns lumenvox
# Copy the external-services folder to the home directory
cp -r /lumenvox/services-offline/external-services/ ~
cd ~/external-services
vi docker-compose.yaml

Update the image values to pull from the private registry:
### mongodb
image: my-docker-registry.com:5000/mongo:8.0.17
### postgresql
image: my-docker-registry.com:5000/postgres:17.4-alpine3.21
### rabbitmq
image: my-docker-registry.com:5000/rabbitmq:4.1.1
### redis
image: my-docker-registry.com:5000/redis:8.0.3

Edit the .env file with the appropriate passwords.
vi .env

#-------------------------#
# MongoDB Configuration
#-------------------------#
MONGO_INITDB_ROOT_USERNAME=lvuser
MONGO_INITDB_ROOT_PASSWORD=mongo1234
#-------------------------#
# PostgreSQL Configuration
#-------------------------#
# Password for the root 'postgres' user
#POSTGRESQL__POSTGRES_PASSWORD=postgresroot1234
# Credentials for new user
POSTGRES_USER=lvuser
POSTGRES_PASSWORD=postgres1234
#-------------------------#
# RabbitMQ Configuration
#-------------------------#
RABBITMQ_USERNAME=lvuser
RABBITMQ_PASSWORD=rabbit1234
#-------------------------#
# Redis Configuration
#-------------------------#
REDIS_PASSWORD=redis1234
#-------------------------#
# Restart Configuration
#-------------------------#
RESTART_POLICY=always
docker compose up -d
docker ps
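The kubectl secret commands below reference the password variables defined in the .env file. A minimal sketch to export them into the current shell first, assuming the external-services folder copied to the home directory above:
set -a
. ~/external-services/.env
set +a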
kubectl create secret generic mongodb-existing-secret --from-literal=mongodb-root-password=$MONGO_INITDB_ROOT_PASSWORD -n lumenvox
kubectl create secret generic postgres-existing-secret --from-literal=postgresql-password=$POSTGRES_PASSWORD -n lumenvox
kubectl create secret generic rabbitmq-existing-secret --from-literal=rabbitmq-password=$RABBITMQ_PASSWORD -n lumenvox
kubectl create secret generic redis-existing-secret --from-literal=redis-password=$REDIS_PASSWORD -n lumenvox
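To confirm the four secrets were created:
kubectl get secrets -n lumenvox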
Edit the values.yaml file downloaded from the quick-start repository:
cd /lumenvox
vi values.yaml
Create a Self-Signed Certificate Key
openssl genrsa -out server.key 2048
Make sure the subjectAltName matches the hostnameSuffix in the values.yaml file.
openssl req -new -x509 -sha256 -key server.key -out server.crt -days 3650 \
  -addext "subjectAltName = DNS:lumenvox-api.rhel-offline.testmachine.com, \
  DNS:biometric-api.rhel-offline.testmachine.com, \
  DNS:management-api.rhel-offline.testmachine.com, \
  DNS:reporting-api.rhel-offline.testmachine.com, \
  DNS:admin-portal.rhel-offline.testmachine.com, \
  DNS:deployment-portal.rhel-offline.testmachine.com"
cd /lumenvox
kubectl create secret tls speech-tls-secret --key server.key --cert server.crt -n lumenvox
cd /lumenvox
helm install lumenvox helm-offline/lumenvox -n lumenvox -f values.yaml
watch kubectl get po -A
Loading the model files
helm uninstall lumenvox -n lumenvox
helm install lumenvox helm-offline/lumenvox -n lumenvox -f values.yaml
Copy the .manifest files to /data/lang/manifests:
cd /lumenvox/lv_models-offline
cp -p *.manifest /data/lang/manifests/
kubectl rollout restart deployment -n lumenvox
Ingress-Nginx
Configure ingress-nginx to pull from the private Docker registry.
cd /lumenvox/ingress-nginx-offline
vi ingress-nginx/values.yaml

Search for “controller” in the file and set the image, repository, tag and digest values as shown below:
image: "controller"
repository: "my-docker-registry.com:5000/controller"
tag: "v1.12.4"
digest: null
digestChroot: null
Search for “kube-webhook” in the file and set the image, repository, tag and digest values as shown below:
image: kube-webhook-certgen
repository: "my-docker-registry.com:5000/kube-webhook-certgen"
tag: v1.5.2
digest: null
Search for “hostNetwork” in the file and set it to “true”.
hostNetwork: true
Installing ingress-nginx
helm upgrade --install ingress-nginx ./ingress-nginx -n ingress-nginx --create-namespace --set controller.hostNetwork=true --version 4.12.1 -f ./ingress-nginx/values.yaml
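As a quick sanity check (an optional step), confirm the ingress-nginx controller pod is running before continuing:
kubectl get pods -n ingress-nginx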
Copy the mrcp-api folder to the home directory, configure Docker to pull the mrcp-api image from the private Docker registry, and set all other values as you would for a regular installation.
cd /lumenvox/services-offline/
cp -r mrcp-api ~
cd ~/mrcp-api/docker/
vi .env
docker compose up -d
Copy the server.crt certificate to the mrcp-api certs folder:
cd ~/mrcp-api/docker
docker compose down
sudo cp /lumenvox/server.crt certs
docker compose up -d
Copy the mrcp-client folder to the home directory and configure Docker to pull the simple_mrcp_client image from the private Docker registry.
cd /lumenvox/services-offline/
cp -r mrcp-client ~
cd ~/mrcp-client
vi docker-compose.yml
docker compose up -d
Creating a deployment - Please reference the “Access the Admin Portal to Create a Deployment” section in the Setup via quick start (kubeadm) guide: https://lumenvox.capacity.com/article/863702/setup-via-quick-start--kubeadm-
Licensing the server - Please reference the Setting up the license reporter tool guide to license a server in an air-gapped environment: https://lumenvox.capacity.com/article/631630/setting-up-the-license-reporter-tool
