Installing LumenVox in Air-Gapped Environments: Ubuntu
Introduction
Typically, modern software deployment relies on active internet connectivity. The containerized version of LumenVox is designed to utilize standard open-source tools, including Kubernetes and Docker, which typically require an internet connection to download and pull necessary images and dependencies for a cluster setup.
However, many enterprise environments operate under strict security policies that block external internet access. To support these air-gapped environments, an alternative installation strategy is required to deploy a functional LumenVox server.
Scope of Installation
This document provides a comprehensive procedure for installing the following components in an isolated network:
Runtimes & Orchestration: Docker, Containerd, and Kubernetes
Networking & Service Mesh: Calico, Linkerd, and Ingress-nginx
Package Management: Helm
Infrastructure Services: Docker Private Registry and External Services (MongoDB, PostgreSQL, RabbitMQ, and Redis)
LumenVox Stack: LumenVox, MRCP-API, and MRCP-Client
Environment Requirements
To facilitate this offline installation, the procedure utilizes a two-server approach:
Online Server: A Linux system connected to the internet to download and stage all required assets.
Offline Server: A secured Linux system with no external network access where the production environment is installed.
While this guide is compatible with Red Hat or Ubuntu, the examples provided are based on Ubuntu 24.04.4 LTS.
Server Information
| Server Type | Server Name | Server IP |
|---|---|---|
| Online Server | ubuntu-online | 172.18.2.71 |
| Offline Server | ubuntu-offline | 172.18.2.72 |
Please ensure that you have curl and rsync installed on both servers.
Online Server Preparation
System Prerequisites
Kubernetes requires specific system settings to manage container networking and memory efficiently.
Disable Swap Space
sudo swapoff -a
sudo sed -i '/swap/ s/^\(.*\)$/#\1/g' /etc/fstab
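A quick sanity check that the change took effect (a sketch; `check_swap_disabled` is a helper introduced here, pointed at /etc/fstab by default so it can also be run against a test file):

```shell
# Check an fstab-style file for active (uncommented) swap entries.
check_swap_disabled() {
  if grep -v '^#' "$1" 2>/dev/null | grep -q swap; then
    echo "WARNING: active swap entry still present in $1"
  else
    echo "OK: no active swap entries in $1"
  fi
}
check_swap_disabled /etc/fstab
```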
The following modules are required for the Kubernetes pod network (calico) to function correctly.
sudo tee /etc/modules-load.d/k8s.conf <<EOF
ip_tables
overlay
br_netfilter
EOF
sudo modprobe ip_tables
sudo modprobe overlay
sudo modprobe br_netfilter
Adjust the system’s network filtering and disable the firewall to allow inter-pod communication within the cluster.
# Enable bridged traffic and IP forwarding
sudo tee /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl settings without reboot
sudo sysctl --system
# Disable Firewall and AppArmor
sudo systemctl stop ufw
sudo systemctl disable ufw
sudo systemctl stop apparmor
sudo systemctl disable apparmor
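Once applied, the values can be confirmed straight from /proc (the two bridge entries only appear after the br_netfilter module from the previous step is loaded):

```shell
# Each should print "... = 1" once the settings are applied.
for f in /proc/sys/net/ipv4/ip_forward \
         /proc/sys/net/bridge/bridge-nf-call-iptables \
         /proc/sys/net/bridge/bridge-nf-call-ip6tables; do
  if [ -r "$f" ]; then
    echo "$f = $(cat "$f")"
  else
    echo "$f missing (is br_netfilter loaded?)"
  fi
done
```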
Docker and Containerd
Create a lumenvox directory to store the files. In this example, we are saving the files in /lumenvox.
mkdir /lumenvox && cd /lumenvox
mkdir docker-offline && cd docker-offline
# Add GPG key
sudo apt update
sudo apt install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Set up the repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Run Docker without sudo
sudo usermod -aG docker $USER
newgrp docker
The Docker core components can be downloaded from https://download.docker.com/linux/ubuntu/dists
You must download the deb file for your release. Versions change frequently; if these 404, browse the URL above and update the filenames.
cd docker-offline
# Define the base URL for Ubuntu 24.04 (Noble)
BASE_URL="https://download.docker.com/linux/ubuntu/dists/noble/pool/stable/amd64/"
curl -LO "${BASE_URL}containerd.io_1.7.25-1_amd64.deb"
curl -LO "${BASE_URL}docker-ce_27.5.1-1~ubuntu.24.04~noble_amd64.deb"
curl -LO "${BASE_URL}docker-ce-cli_27.5.1-1~ubuntu.24.04~noble_amd64.deb"
curl -LO "${BASE_URL}docker-buildx-plugin_0.20.0-1~ubuntu.24.04~noble_amd64.deb"
curl -LO "${BASE_URL}docker-compose-plugin_2.32.4-1~ubuntu.24.04~noble_amd64.deb"
Kubernetes
Note: This configuration specifically targets v1.33. Replace v1.33 in the URL if you require a different version.
# Add the Kubernetes repository
sudo apt install -y apt-transport-https ca-certificates curl
sudo curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Use the following command to download the Kubernetes binaries and their necessary support tools into a local directory without installing them:
# Create and enter the staging directory
mkdir -p /lumenvox/k8s-offline && cd /lumenvox/k8s-offline
# Download packages and all required dependencies
sudo apt update
apt-get download kubelet kubeadm kubectl kubernetes-cni conntrack socat ebtables
Note: To identify the required images, run the kubeadm config images list command on a running Kubernetes system.
mkdir -p /lumenvox/k8s-images && cd /lumenvox/k8s-images
docker pull registry.k8s.io/kube-apiserver:v1.33.8
docker save registry.k8s.io/kube-apiserver:v1.33.8 > kube-apiserver:v1.33.8.tar
docker pull registry.k8s.io/kube-controller-manager:v1.33.8
docker save registry.k8s.io/kube-controller-manager:v1.33.8 > kube-controller-manager:v1.33.8.tar
docker pull registry.k8s.io/kube-scheduler:v1.33.8
docker save registry.k8s.io/kube-scheduler:v1.33.8 > kube-scheduler:v1.33.8.tar
docker pull registry.k8s.io/kube-proxy:v1.33.8
docker save registry.k8s.io/kube-proxy:v1.33.8 > kube-proxy:v1.33.8.tar
docker pull registry.k8s.io/coredns/coredns:v1.12.0
docker save registry.k8s.io/coredns/coredns:v1.12.0 > coredns:v1.12.0.tar
docker pull registry.k8s.io/pause:3.10
docker save registry.k8s.io/pause:3.10 > pause:3.10.tar
docker pull registry.k8s.io/etcd:3.5.24-0
docker save registry.k8s.io/etcd:3.5.24-0 > etcd:3.5.24-0.tar
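The pull-and-save pairs above can also be generated mechanically from the output of kubeadm config images list (a sketch; gen_save_cmds is a helper introduced here, and the generated commands should be reviewed before running them):

```shell
# Turn an image list (one reference per line, as printed by
# `kubeadm config images list`) into docker pull / docker save pairs.
gen_save_cmds() {
  while read -r img; do
    [ -n "$img" ] || continue
    # Keep the doc's naming convention: <name>:<tag>.tar
    tarfile="$(basename "$img").tar"
    echo "docker pull $img"
    echo "docker save $img > $tarfile"
  done
}
kubeadm config images list 2>/dev/null | gen_save_cmds
```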
Download the essential container images for the Calico CNI. These components are critical for establishing the Kubernetes pod network and managing inter-service communication and security firewalls.
mkdir -p /lumenvox/calico-offline && cd /lumenvox/calico-offline
# Download the installation manifest
curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml
# List the image information
grep image: calico.yaml | awk '{print $2}' | sort -u
# Pull the required images and save them as .tar archives
docker pull docker.io/calico/cni:v3.27.0
docker save calico/cni:v3.27.0 > cni:v3.27.0.tar
docker pull docker.io/calico/kube-controllers:v3.27.0
docker save calico/kube-controllers:v3.27.0 > kube-controllers:v3.27.0.tar
docker pull docker.io/calico/node:v3.27.0
docker save calico/node:v3.27.0 > node:v3.27.0.tar
Crictl
Download and install the crictl utility. This tool is required for inspecting and managing your container runtime environment during the installation process.
mkdir -p /lumenvox/crictl-offline && cd /lumenvox/crictl-offline
curl -LO https://pkgs.k8s.io/core:/stable:/v1.33/deb/amd64/cri-tools_1.33.0-1.1_amd64.deb
Linkerd
In a Kubernetes environment, Linkerd is used to manage the complex networking between microservices.
mkdir -p /lumenvox/linkerd-offline && cd /lumenvox/linkerd-offline
curl -O https://assets.lumenvox.com/kubeadm/linkerd.tar
tar -xvf linkerd.tar
Helm
Download and install the Helm binary. Helm is the required package manager used to install, upgrade, and configure the LumenVox Kubernetes charts.
mkdir -p /lumenvox/helm-offline && cd /lumenvox/helm-offline
curl -O https://get.helm.sh/helm-v3.19.2-linux-amd64.tar.gz
tar -zxvf helm-v3.19.2-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
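Helm publishes a .sha256sum file next to each release archive on get.helm.sh; verifying it before staging the file avoids carrying a corrupt download into the air gap (a sketch; `verify_sha256` is a helper introduced here):

```shell
# Compare a file's SHA-256 digest against an expected hex value.
verify_sha256() {
  actual=$(sha256sum "$1" 2>/dev/null | awk '{print $1}')
  if [ -n "$actual" ] && [ "$actual" = "$2" ]; then
    echo "OK: $1"
  else
    echo "FAIL: $1"
  fi
}
# Fetch the published checksum file and check the archive against it:
curl -sO --max-time 10 https://get.helm.sh/helm-v3.19.2-linux-amd64.tar.gz.sha256sum
expected=$(awk '{print $1}' helm-v3.19.2-linux-amd64.tar.gz.sha256sum 2>/dev/null)
verify_sha256 helm-v3.19.2-linux-amd64.tar.gz "$expected"
```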
Download the LumenVox Helm charts. The following command will create a lumenvox folder in the current directory containing all of the Helm charts:
helm fetch lumenvox/lumenvox --untar
Edit the lumenvox/values.yaml file and update the image repository settings to point to your private Docker registry. Ensure that the image tag matches the one used during the initial push. Note: In this example for v7.0, the tag should be set to :7.0.
cd /lumenvox/helm-offline/lumenvox
vi values.yaml
Download the values.yaml file from GitHub:
cd /lumenvox
curl -O https://raw.githubusercontent.com/lumenvox/containers-quick-start/master/values.yaml
Use the sample script below to pull and archive the LumenVox v7.0 and external service images. Save the script as download_lv_images.sh in the /lumenvox directory. When executed, it will pull the necessary images and compress them into a single portable file: lv_images-offline.tar.gz.
#!/bin/bash
IMAGES=(
"lumenvox/admin-portal:7.0"
"lumenvox/archive:7.0"
"lumenvox/asr:7.0"
"lumenvox/cloud-init-tools:7.0"
"lumenvox/configuration:7.0"
"lumenvox/deployment:7.0"
"lumenvox/deployment-portal:7.0"
"lumenvox/file-store:7.0"
"lumenvox/grammar:7.0"
"lumenvox/itn:7.0"
"lumenvox/license:7.0"
"lumenvox/lumenvox-api:7.0"
"lumenvox/management-api:7.0"
"lumenvox/neural-tts:7.0"
"lumenvox/reporting-api:7.0"
"lumenvox/resource:7.0"
"lumenvox/session:7.0"
"lumenvox/storage:7.0"
"lumenvox/vad:7.0"
"lumenvox/cloud-logging-sidecar:7.0"
"lumenvox/mrcp-api:7.0"
"lumenvox/simple_mrcp_client:latest"
"lumenvox/diag-tools:jammy-4.2.0"
"lumenvox/license-reporter-tool:latest"
"docker.io/rabbitmq:4.1.8-management"
"docker.io/redis:8.2.4-alpine"
"docker.io/mongo:8.2"
"docker.io/postgres:17.5"
)
SAVE_DIR="/lumenvox/lv_images-offline"
mkdir -p "$SAVE_DIR"
for IMAGE in "${IMAGES[@]}"; do
echo "----------------------------------------"
echo "Processing: $IMAGE"
if docker pull "$IMAGE"; then
# Sanitize filename: replace / and : with _
FILE_NAME=$(echo "$IMAGE" | tr '/:' '_')
echo "Saving to $SAVE_DIR/${FILE_NAME}.tar"
docker save -o "$SAVE_DIR/${FILE_NAME}.tar" "$IMAGE"
else
echo "ERROR: Failed to pull $IMAGE. Skipping..."
fi
done
echo "----------------------------------------"
echo "Compressing all images into one bundle..."
tar czvf lv_images-offline.tar.gz -C /lumenvox lv_images-offline
echo "Done! Final bundle: lv_images-offline.tar.gz"
LumenVox Model Files
Use the sample script below to download the LumenVox model files. Save this script as download_lv_models.sh in the /lumenvox directory. Upon execution, the script will download the required files and save them to the /lumenvox/lv_models-offline directory.
#!/bin/bash
# Directory to save files
DOWNLOAD_DIR="/lumenvox/lv_models-offline"
mkdir -p "$DOWNLOAD_DIR"
# List of URLs to download
URLS=(
"https://assets.lumenvox.com/model-files/asr/asr_decoder_model_en_gb-7.0.0.manifest"
"https://assets.lumenvox.com/model-files/asr/asr_decoder_model_en_us-7.0.0.manifest"
"https://assets.lumenvox.com/model-files/asr/asr_encoder_model_en-7.0.0.manifest"
"https://assets.lumenvox.com/model-files/asr/asr_lang_model_en_us.manifest"
"https://assets.lumenvox.com/model-files/asr/asr_lib_model_en_us.manifest"
"https://assets.lumenvox.com/model-files/dnn/backend_dnn_model_7-7.0.0.manifest"
"https://assets.lumenvox.com/model-files/dnn/backend_dnn_model_p.manifest"
"https://assets.lumenvox.com/model-files/asr/dist_package_model_asr-7.0.0.manifest"
"https://assets.lumenvox.com/model-files/dnn/dist_package_model_en.manifest"
"https://assets.lumenvox.com/model-files/dnn/dist_package_model_itn-7.0.3.manifest"
"https://assets.lumenvox.com/model-files/neural_tts/dist_package_model_neural_tts.manifest"
"https://assets.lumenvox.com/model-files/itn/itn_dnn_model_en.manifest"
"https://assets.lumenvox.com/model-files/asr/multilingual_confidence_model.manifest"
"https://assets.lumenvox.com/model-files/neural_tts/neural_tts_en_us_aurora-8.manifest"
"https://assets.lumenvox.com/model-files/neural_tts/neural_tts_en_us_caspian-8.manifest"
"https://assets.lumenvox.com/model-files/nlu/nlu_model_en.manifest"
"https://assets.lumenvox.com/model-files/tts/tts_base_lang_data.manifest"
"https://assets.lumenvox.com/model-files/asr/1.0.0/asr_lib_model_en_us-1.0.0.tar.gz"
"https://assets.lumenvox.com/model-files/asr/4.1.0/multilingual_confidence_model-4.1.0.tar.gz"
"https://assets.lumenvox.com/model-files/asr/7.0.0/asr_decoder_model_en_gb-7.0.0.tar.gz"
"https://assets.lumenvox.com/model-files/asr/7.0.0/asr_decoder_model_en_us-7.0.0.tar.gz"
"https://assets.lumenvox.com/model-files/asr/7.0.0/asr_encoder_model_en-7.0.0.tar.gz"
"https://assets.lumenvox.com/model-files/asr/7.0.0/dist_package_model_asr-7.0.0.tar.gz"
"https://assets.lumenvox.com/model-files/asr/asr_lang_model_en_us.tar.gz"
"https://assets.lumenvox.com/model-files/dnn/1.0.0/backend_dnn_model_p-1.0.0.tar.gz"
"https://assets.lumenvox.com/model-files/dnn/1.0.3/dist_package_model_en-1.0.3.tar.gz"
"https://assets.lumenvox.com/model-files/dnn/7.0.0/backend_dnn_model_7-7.0.0.tar.gz"
"https://assets.lumenvox.com/model-files/dnn/7.0.3/dist_package_model_itn-7.0.3.tar.gz"
"https://assets.lumenvox.com/model-files/itn/3.0.1/itn_dnn_model_en-3.0.1.tar.gz"
"https://assets.lumenvox.com/model-files/neural_tts/2.0.0/dist_package_model_neural_tts-2.0.0.tar.gz"
"https://assets.lumenvox.com/model-files/neural_tts/8.1.0/neural_tts_en_us_aurora-8.1.0.tar.gz"
"https://assets.lumenvox.com/model-files/neural_tts/8.1.0/neural_tts_en_us_caspian-8.1.0.tar.gz"
"https://assets.lumenvox.com/model-files/nlu/1.0.4/nlu_model_en-1.0.4.tar.gz"
"https://assets.lumenvox.com/model-files/tts/tts_base_lang_data.tar.gz"
)
# Download each file
for URL in "${URLS[@]}"; do
FILE_NAME=$(basename "$URL")
echo "Downloading $FILE_NAME..."
curl -fLo "${DOWNLOAD_DIR}/${FILE_NAME}" "$URL" || echo "Failed to download $URL"
done
echo "✅ All downloads attempted. Files are in: $DOWNLOAD_DIR"
Media Server and External Services
Download the external services, mrcp-api, and mrcp-client repositories:
mkdir -p /lumenvox/services-offline && cd /lumenvox/services-offline
git clone https://github.com/lumenvox/mrcp-api.git
git clone https://github.com/lumenvox/mrcp-client.git
git clone https://github.com/lumenvox/external-services.git
cd external-services
curl -O https://raw.githubusercontent.com/lumenvox/external-services/master/docker-compose.yaml
curl -O https://raw.githubusercontent.com/lumenvox/external-services/master/rabbitmq.conf
curl -O https://raw.githubusercontent.com/lumenvox/external-services/master/.env
Ingress-Nginx
mkdir -p /lumenvox/ingress-nginx-offline && cd /lumenvox/ingress-nginx-offline
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm fetch ingress-nginx/ingress-nginx --untar
docker pull registry.k8s.io/ingress-nginx/controller:v1.14.1
docker save registry.k8s.io/ingress-nginx/controller:v1.14.1 > controller:v1.14.1.tar
docker pull registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.2
docker save registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.2 > kube-webhook-certgen:v1.5.2.tar
Docker Private Registry
A Docker private registry is a container image server that your organization controls. Instead of pulling and pushing images to a public service like Docker Hub, you store them in your own registry, allowing only authorized users and systems to access them. We will use docker compose to set up a Docker private registry to store the LumenVox and Ingress-Nginx container images.
cd /lumenvox/docker-offline
docker pull registry:2
docker save registry:2 -o registry.tar
mkdir registry
cd registry
mkdir data
Create a docker-compose.yaml file that persists the registry data in the ./data folder:
sudo tee /lumenvox/docker-offline/registry/docker-compose.yaml <<EOF
services:
  registry:
    image: registry:2
    container_name: private-registry
    ports:
      - "5000:5000"
    volumes:
      - ./data:/var/lib/registry
    restart: always
EOF
docker compose up -d
By default, Docker refuses to push to a registry that does not use HTTPS. The following lines must be added to the /etc/docker/daemon.json file; if the file does not exist, it must be created. Be sure to use your private Docker registry name and port:
sudo tee /etc/docker/daemon.json <<EOF
{
"insecure-registries" : ["my-docker-registry.com:5000"]
}
EOF
Reload and restart Docker:
sudo systemctl daemon-reload
sudo systemctl restart docker
Add the following entry to the /etc/hosts file. Replace 172.18.2.71 with the actual IP address of the online server.
172.18.2.71 my-docker-registry.com
cd /lumenvox/ingress-nginx-offline
docker tag registry.k8s.io/ingress-nginx/controller:v1.14.1 my-docker-registry.com:5000/controller:v1.14.1
docker push my-docker-registry.com:5000/controller:v1.14.1
docker tag registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.2 my-docker-registry.com:5000/kube-webhook-certgen:v1.5.2
docker push my-docker-registry.com:5000/kube-webhook-certgen:v1.5.2
Confirm the images are now in the private registry:
curl my-docker-registry.com:5000/v2/_catalog
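For scripting against the registry, the catalog response can be flattened into one repository per line without extra tooling (a sketch; `list_repos` is a helper introduced here, avoiding jq since it may not be staged for the offline server):

```shell
# Flatten {"repositories":["a","b"]} into one repository name per line.
list_repos() {
  echo "$1" | tr -d '{}[]"' | sed 's/^repositories://' | tr ',' '\n'
}
json=$(curl -s --max-time 5 my-docker-registry.com:5000/v2/_catalog) || json='{"repositories":[]}'
list_repos "$json"
```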
Use the sample shell script below to tag and push the LumenVox and external-service images to your local registry. Save this content as load_push_local_registry.sh in the /lumenvox directory, then execute the script to load the saved images and push them to the private registry.
#!/bin/bash
# Registry config
REGISTRY="my-docker-registry.com:5000"
IMAGE_DIR="/lumenvox/lv_images-offline"
# Ensure the registry string doesn't end with a slash for clean concatenation
REGISTRY="${REGISTRY%/}"
for TAR in "$IMAGE_DIR"/*.tar; do
echo "----------------------------------------------------------"
echo "Processing $TAR..."
# Capture the full name:tag from the docker load output
IMAGE_FULL_NAME=$(docker load -i "$TAR" | awk '/Loaded image:/ { print $3 }')
if [ -z "$IMAGE_FULL_NAME" ]; then
echo "Error: Failed to extract image name from $TAR"
continue
fi
echo "Found image: $IMAGE_FULL_NAME"
# 1. Remove 'docker.io/' prefix if it exists
CLEAN_NAME="${IMAGE_FULL_NAME#docker.io/}"
# 2. Remove 'lumenvox/' prefix if it exists
# This turns 'lumenvox/mrcp-api:7.0' into 'mrcp-api:7.0'
CLEAN_NAME="${CLEAN_NAME#lumenvox/}"
TARGET_IMAGE="${REGISTRY}/${CLEAN_NAME}"
echo "Tagging as: $TARGET_IMAGE"
docker tag "$IMAGE_FULL_NAME" "$TARGET_IMAGE"
echo "Pushing: $TARGET_IMAGE"
docker push "$TARGET_IMAGE"
done
echo "----------------------------------------------------------"
echo "Done."
Confirm the images are now in the private registry:
curl my-docker-registry.com:5000/v2/_catalog
Offline Server Preparation
System Prerequisites
Kubernetes requires specific system settings to manage container networking and memory efficiently.
Disable Swap Space
sudo swapoff -a
sudo sed -i '/swap/ s/^\(.*\)$/#\1/g' /etc/fstab
The following modules are required for the Kubernetes pod network (calico) to function correctly.
sudo tee /etc/modules-load.d/k8s.conf <<EOF
ip_tables
overlay
br_netfilter
EOF
sudo modprobe ip_tables
sudo modprobe overlay
sudo modprobe br_netfilter
Adjust the system’s network filtering and disable the firewall to allow inter-pod communication within the cluster.
# Enable bridged traffic and IP forwarding
sudo tee /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl settings without reboot
sudo sysctl --system
# Disable Firewall and AppArmor
sudo systemctl stop ufw
sudo systemctl disable ufw
sudo systemctl stop apparmor
sudo systemctl disable apparmor
Edit the /etc/hosts file: add entries for the offline server and the private registry (hosted on the online server) to ensure proper name resolution across your environment.
sudo vi /etc/hosts
172.18.2.72 ubuntu-offline
172.18.2.71 my-docker-registry.com
We use rsync to synchronize folders and files between servers. Please ensure rsync is installed on both the source and destination machines. Alternatively, scp may be used if rsync is unavailable in your environment.
sudo mkdir /lumenvox
rsync -avzP user@remote_host:/path/to/remote/folder /path/to/local/destination/
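As a concrete example, pulling the entire staging tree from the online server in the Server Information table might look like this (a sketch; `user` is a placeholder account, and the command is composed into a variable first so it can be reviewed before running):

```shell
# Values from the Server Information table; adjust to your environment.
ONLINE_IP="172.18.2.71"
REMOTE_USER="user"        # placeholder account on the online server
SRC_DIR="/lumenvox/"
DEST_DIR="/lumenvox/"
cmd="rsync -avzP ${REMOTE_USER}@${ONLINE_IP}:${SRC_DIR} ${DEST_DIR}"
echo "$cmd"               # review, then run (append --dry-run to preview)
```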
Docker and Containerd
Installing Docker and Containerd
cd /lumenvox/docker-offline
sudo dpkg -i *.deb
sudo systemctl enable --now docker
sudo systemctl enable --now containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
sudo sed -i 's|sandbox_image = "registry.k8s.io/pause:3.8"|sandbox_image = "registry.k8s.io/pause:3.10"|g' /etc/containerd/config.toml
sudo sed -i '/\[plugins."io.containerd.grpc.v1.cri".registry\]/,/\[/ s|config_path = .*|config_path = "/etc/containerd/certs.d"|' /etc/containerd/config.toml
sudo mkdir -p /etc/containerd/certs.d/my-docker-registry.com:5000
# Create the hosts.toml file
cat <<EOF | sudo tee /etc/containerd/certs.d/my-docker-registry.com:5000/hosts.toml
server = "http://my-docker-registry.com:5000"
[host."http://my-docker-registry.com:5000"]
  capabilities = ["pull", "resolve"]
  skip_verify = true
EOF
sudo systemctl restart containerd
sudo usermod -aG docker $USER
newgrp docker
Configure Insecure Registries
sudo tee /etc/docker/daemon.json <<EOF
{
"insecure-registries": ["my-docker-registry.com:5000"]
}
EOF
sudo systemctl restart docker
To list the contents of the private Docker registry on the online server:
curl my-docker-registry.com:5000/v2/_catalog
Crictl
cd /lumenvox/crictl-offline
sudo dpkg -i cri-tools_1.33.0-1.1_amd64.deb
sudo crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock
Kubernetes
cd /lumenvox/k8s-offline/
sudo dpkg -i *.deb
sudo systemctl enable --now kubelet
cd /lumenvox/k8s-images
sudo ctr -n k8s.io images import coredns:v1.12.0.tar
sudo ctr -n k8s.io images import etcd:3.5.24-0.tar
sudo ctr -n k8s.io images import kube-apiserver:v1.33.8.tar
sudo ctr -n k8s.io images import kube-controller-manager:v1.33.8.tar
sudo ctr -n k8s.io images import kube-proxy:v1.33.8.tar
sudo ctr -n k8s.io images import kube-scheduler:v1.33.8.tar
sudo ctr -n k8s.io images import pause:3.10.tar
Initialize the Control Plane
sudo kubeadm init --apiserver-advertise-address=172.18.2.72 --kubernetes-version=v1.33.8
If the control plane initializes successfully, kubeadm prints a confirmation message ending with the join command. Initialization can take up to 5 minutes.
Set up the kubectl CLI for the user:
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get node
# Remove the control-plane taint so workloads can schedule on this node
kubectl taint node <node-name> node-role.kubernetes.io/control-plane-
The NotReady status is perfectly normal at this stage. This is because the Container Network Interface (Calico) has not been installed yet.
Calico
cd /lumenvox/calico-offline
sudo ctr -n k8s.io images import kube-controllers:v3.27.0.tar
sudo ctr -n k8s.io images import node:v3.27.0.tar
sudo ctr -n k8s.io images import cni:v3.27.0.tar
kubectl apply -f calico.yaml
kubectl get node
Linkerd
cd /lumenvox/linkerd-offline
sudo chmod +x linkerd_cli_installer_offline.sh
sudo ctr -n k8s.io images import controller:edge-24.8.2.tar
sudo ctr -n k8s.io images import metrics-api:edge-24.8.2.tar
sudo ctr -n k8s.io images import policy-controller:edge-24.8.2.tar
sudo ctr -n k8s.io images import prometheus:v2.48.1.tar
sudo ctr -n k8s.io images import proxy:edge-24.8.2.tar
sudo ctr -n k8s.io images import proxy-init:v2.4.1.tar
sudo ctr -n k8s.io images import tap:edge-24.8.2.tar
sudo ctr -n k8s.io images import web:edge-24.8.2.tar
./linkerd_cli_installer_offline.sh
export PATH=$PATH:~/.linkerd2/bin
linkerd check --pre
linkerd install --crds | kubectl apply -f -
linkerd install | kubectl apply -f -
linkerd check
linkerd viz install | kubectl apply -f -
kubectl delete cronjob linkerd-heartbeat -n linkerd
Helm
cd /lumenvox/helm-offline
tar -zxvf helm-v3.19.2-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
Create the lumenvox namespace
kubectl create ns lumenvox
External Services
# Copy the external-services folder to the home directory
cp -r /lumenvox/services-offline/external-services/ ~
cd ~/external-services
Edit the docker-compose.yaml file and update the image: tags to point to your private registry instead of the default repository.
vi docker-compose.yaml
### mongodb
image: my-docker-registry.com:5000/mongo:8.2
### postgresql
image: my-docker-registry.com:5000/postgres:17.5
### rabbitmq
image: my-docker-registry.com:5000/rabbitmq:4.1.8-management
### redis
image: my-docker-registry.com:5000/redis:8.2.4-alpine
Edit the .env file and update the password fields with your specific credentials.
vi .env
#-------------------------#
# MongoDB Configuration
#-------------------------#
MONGO_INITDB_ROOT_USERNAME=lvuser
MONGO_INITDB_ROOT_PASSWORD=mongo1234
#-------------------------#
# PostgreSQL Configuration
#-------------------------#
# Password for the root 'postgres' user
#POSTGRESQL__POSTGRES_PASSWORD=postgresroot1234
# Credentials for new user
POSTGRES_USER=lvuser
POSTGRES_PASSWORD=postgres1234
#-------------------------#
# RabbitMQ Configuration
#-------------------------#
RABBITMQ_USERNAME=lvuser
RABBITMQ_PASSWORD=rabbit1234
#-------------------------#
# Redis Configuration
#-------------------------#
REDIS_PASSWORD=redis1234
#-------------------------#
# Restart Configuration
#-------------------------#
RESTART_POLICY=always
docker compose up -d
Check if the external services are running:
docker ps
Follow the steps below to create your Kubernetes secrets. Before running the commands, ensure you replace all $PASSWORD placeholders with the actual values defined in your .env file.
kubectl create secret generic mongodb-existing-secret --from-literal=mongodb-root-password=$MONGO_INITDB_ROOT_PASSWORD -n lumenvox
kubectl create secret generic postgres-existing-secret --from-literal=postgresql-password=$POSTGRES_PASSWORD -n lumenvox
kubectl create secret generic rabbitmq-existing-secret --from-literal=rabbitmq-password=$RABBITMQ_PASSWORD -n lumenvox
kubectl create secret generic redis-existing-secret --from-literal=redis-password=$REDIS_PASSWORD -n lumenvox
Edit the values.yaml with the clusterGUID, the IP address of the external services, and the ASR language(s) and TTS voice(s) to install.
cd /lumenvox
vi values.yaml
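Stepping back to the secret-creation commands above: they can be driven directly from the .env file rather than pasting each password by hand (a sketch; `gen_secret_cmds` is a helper introduced here that prints the commands for review instead of executing them):

```shell
# Print the four secret-creation commands with values taken from a .env file.
gen_secret_cmds() {
  # Source the .env so the password variables are populated.
  set -a; . "$1" 2>/dev/null; set +a
  echo "kubectl create secret generic mongodb-existing-secret --from-literal=mongodb-root-password=$MONGO_INITDB_ROOT_PASSWORD -n lumenvox"
  echo "kubectl create secret generic postgres-existing-secret --from-literal=postgresql-password=$POSTGRES_PASSWORD -n lumenvox"
  echo "kubectl create secret generic rabbitmq-existing-secret --from-literal=rabbitmq-password=$RABBITMQ_PASSWORD -n lumenvox"
  echo "kubectl create secret generic redis-existing-secret --from-literal=redis-password=$REDIS_PASSWORD -n lumenvox"
}
# Review the output, then pipe it to `sh` to create the secrets:
gen_secret_cmds ~/external-services/.env
```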
You must create a self-signed TLS certificate key pair. This pair will be used to create the required Kubernetes Secrets that enable secure communication for the speech components.
Create Self-Signed Certificate key
openssl genrsa -out server.key 2048
Make sure the subjectAltName matches the hostnameSuffix in the values.yaml file.
openssl req -new -x509 -sha256 -key server.key -out server.crt -days 3650 \
  -addext "subjectAltName = DNS:lumenvox-api.ubuntu12.testmachine.com, \
DNS:biometric-api.lumenvox-api.ubuntu12.testmachine.com, \
DNS:management-api.lumenvox-api.ubuntu12.testmachine.com, \
DNS:reporting-api.lumenvox-api.ubuntu12.testmachine.com, \
DNS:admin-portal.lumenvox-api.ubuntu12.testmachine.com, \
DNS:deployment-portal.lumenvox-api.ubuntu12.testmachine.com"
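Before creating the secret, it is worth confirming that the certificate and key actually pair up and that the requested SANs took effect (a sketch using standard openssl subcommands, run from the directory containing server.key and server.crt):

```shell
# The public key embedded in the certificate must match the private key.
if [ -r server.crt ] && [ -r server.key ]; then
  cert_digest=$(openssl x509 -in server.crt -noout -pubkey | openssl sha256)
  key_digest=$(openssl rsa -in server.key -pubout 2>/dev/null | openssl sha256)
  if [ "$cert_digest" = "$key_digest" ]; then
    echo "certificate and key match"
  else
    echo "certificate/key MISMATCH"
  fi
  # Confirm the requested subjectAltName entries are present:
  openssl x509 -in server.crt -noout -ext subjectAltName
else
  echo "server.crt / server.key not found in the current directory"
fi
```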
cd /lumenvox
kubectl create secret tls speech-tls-secret --key server.key --cert server.crt -n lumenvox
cd /lumenvox
helm install lumenvox helm-offline/lumenvox -n lumenvox -f values.yaml
watch kubectl get po -A
Wait for the resource pods to come online. The itn-en, asr-en, and neural-tts-en-us pods will fail at this point; this is expected because the model files have not been loaded yet.
Loading the model files
helm uninstall lumenvox -n lumenvox
helm install lumenvox helm-offline/lumenvox -n lumenvox -f values.yaml
This is needed for the persistent volume job to set the appropriate permissions on the /data directory.
Copy the .manifest files to /data/lang/manifests:
cd /lumenvox/lv_models-offline
cp -p *.manifest /data/lang/manifests/
Copy the .tar.gz files to /data/lang/downloads:
cd /lumenvox/lv_models-offline
cp -p *.tar.gz /data/lang/downloads/
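After both copies, a pairing check helps catch a manifest whose archive never made it across (a sketch; `check_model_pairs` is a helper introduced here that matches on the model name prefix before any version suffix):

```shell
# For every staged manifest, warn if no matching .tar.gz archive
# (same name prefix, any version) exists in the downloads directory.
check_model_pairs() {
  manifests_dir=$1
  downloads_dir=$2
  for m in "$manifests_dir"/*.manifest; do
    [ -e "$m" ] || continue
    base=$(basename "$m" .manifest)
    prefix=${base%-[0-9]*}    # drop a trailing -x.y.z version, if any
    if ls "$downloads_dir/$prefix"*.tar.gz >/dev/null 2>&1; then
      echo "OK: $base"
    else
      echo "MISSING: $base"
    fi
  done
}
check_model_pairs /data/lang/manifests /data/lang/downloads
```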
kubectl rollout restart deployment -n lumenvox
All pods should now be running.
Ingress-Nginx
Configure ingress-nginx to pull from the private Docker registry.
cd /lumenvox/ingress-nginx-offline
vi ingress-nginx/values.yaml
Search for "controller" in the file and set the image, repository, tag, digest and digestChroot values as shown below:
image: "controller"
repository: "my-docker-registry.com:5000/controller"
tag: "v1.14.1"
digest: null
digestChroot: null
Search for “kube-webhook” in the file and set the image, repository, tag and digest values as shown below:
image: "kube-webhook-certgen"
repository: "my-docker-registry.com:5000/kube-webhook-certgen"
tag: "v1.5.2"
digest: null
Search for “hostNetwork” in the file and set it to “true”.
hostNetwork: true
Installing ingress-nginx
helm upgrade --install ingress-nginx ./ingress-nginx -n ingress-nginx --create-namespace --set controller.hostNetwork=true --version 4.14.1 -f ./ingress-nginx/values.yaml
Move the mrcp-api to your home directory then configure Docker to pull the mrcp-api image from the specified private registry.
cd /lumenvox/services-offline/
cp -r mrcp-api ~
cd ~/mrcp-api/docker/
vi .env
docker compose up -d
Copy the server.crt certificate to the mrcp-api certs directory:
cd ~/mrcp-api/docker
docker compose down
sudo cp /lumenvox/server.crt certs
docker compose up -d
Move the mrcp-client to your home directory, then configure Docker to pull the simple_mrcp_client image from the specified private registry.
cd /lumenvox/services-offline/
cp -r mrcp-client ~
cd ~/mrcp-client
vi .env
docker compose up -d
Creating a deployment - Please reference the Access the Admin Portal to Create a Deployment section in the Setup via quick start (kubeadm) guide - https://lumenvox.capacity.com/article/863702/setup-via-quick-start--kubeadm-
Licensing the server - Please reference the Setting up the license reporter tool to license a server in an air-gap environment guide - https://lumenvox.capacity.com/article/631630/setting-up-the-license-reporter-tool
