Upgrade NGINX ingress versions
The ingress controller used in your Kubernetes configuration is largely under your control, so version upgrades to it are typically handled by customers. However, upgrades need to be performed in a controlled way to minimize any downtime in connectivity to your cluster.
When you first install your cluster, many things are configured, including the ingress controller. The ingress controller is responsible for routing external traffic to various components in the cluster, such as the Lumenvox API.
An example of when you might wish to upgrade your NGINX ingress controller is if a security vulnerability has been discovered.
A practical example of this can be seen here: https://github.com/advisories/GHSA-mgvx-rpfc-9mpv, where a critical-severity vulnerability was discovered that impacted controller versions older than 1.11.5.
NOTE: The versions mentioned in this article may be different than the versions you are running, so please check to make sure that you adjust your versions accordingly. Vulnerabilities and updates appear regularly, and we recommend following best practices for updating software.
Containers Quick Start
Many customers use our https://github.com/lumenvox/containers-quick-start to help configure their self-managed Kubernetes cluster, and part of this installation process is running a script containing the following in lumenvox-control-install.sh (wrapping added for readability):
```shell
#############################################
# Step 9: Install nginx ingress controller
#############################################
printf "9. Installing nginx ingress controller...\n" | $TEE -a
helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx \
    -n ingress-nginx --create-namespace --set controller.hostNetwork=true \
    --version 4.11.3 --set controller.allowSnippetAnnotations=true 1>>$MAIN_LOG 2>>$ERR_LOG
if [ $? -ne 0 ]; then
    printf "\t\tFailed to install nginx ingress controller\n" | $TEE -a
    exit 1
fi
```
You can see from the above that the version mentioned is 4.11.3; this is impacted by the vulnerability, so it needs to be updated.
Updating Ingress
Fortunately, you do not need to tear down your entire Kubernetes cluster and reinstall everything to simply upgrade the NGINX ingress controller.
For this specific update, we want to upgrade from the installed chart version (4.11.3, which contains ingress-nginx controller 1.11.3, per the ingress-nginx documentation) to the chart version containing the patch (4.11.5, which contains controller 1.11.5) by following these steps, using kubectl on your cluster:
1 - Perform a helm upgrade
This applies the change to the version, as described above
```shell
# upgrade ingress-nginx to 1.11.5/4.11.5
helm upgrade --install ingress-nginx ingress-nginx \
    --repo https://kubernetes.github.io/ingress-nginx \
    -n ingress-nginx --create-namespace \
    --set controller.hostNetwork=true \
    --version 4.11.5 \
    --set controller.allowSnippetAnnotations=true
```
You can see, towards the end of the line, that the new version (4.11.5) is specified.
Running this step will upgrade the helm chart, download and prepare any new images, and update any configmaps, etc.
Note that it may take a few seconds after running this step for the new pod to show up in step 2.
Also, please bear in mind that the parameters shown here may vary from version to version. For example, the following option is only applicable to controller versions prior to 1.12, so it is worth checking the latest scripts that we publish to our GitHub repo, and performing your own research as well.
```
--set controller.allowSnippetAnnotations=true
```
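Since the new pod can take a few seconds to appear, it may be convenient to confirm the deployed chart version and watch the namespace until the new pod shows up. The following is a sketch, assuming the release name and namespace (`ingress-nginx`) used throughout this article:

```shell
# confirm which chart version is currently deployed for the release
helm list -n ingress-nginx

# watch the ingress-nginx namespace until the new controller pod
# appears (press Ctrl-C to stop watching)
kubectl get po -n ingress-nginx -w
```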
2 - List the ingress pods
You will need a list of the running ingress pods, so that you can determine which one needs to be deleted in the next step.
```shell
# list the ingress-nginx pods
kubectl get po -n ingress-nginx
```
When the upgrade from step 1 is complete, you will see something like the following when you list your ingress pods. Here, the "new" pod is shown in a non-started (Pending) state, with the "old" pod continuing to run until it is replaced in step 3.
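As an illustration only (the pod names, ages, and hash suffixes below are hypothetical; your cluster will differ), the listing at this point might look something like:

```
NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-5c8d66c76d-abcde   1/1     Running   0          30d
ingress-nginx-controller-7f9fd49cb8-fghij   0/1     Pending   0          10s
```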
3 - Delete old pod
Once you have a list of the pods, you can identify the unique ID of the one that needs to be deleted, in order to have the new version replace it.
Use the following command to delete the old (running) version of the ingress-nginx-controller:
```shell
# the new pod won't start until we delete the old pod, so delete the old pod
kubectl delete po <old-pod-name> -n ingress-nginx
```
After running this, you should see Kubernetes replace the old (running) ingress pod with a new one (using the upgraded version). The new one should not take long to start.
When the new pod starts, check your system to ensure connectivity is working as expected. Running diagnostics may be a convenient way to do this, but you should also verify connectivity from outside the cluster.
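One quick way to confirm that the upgraded version is actually running is to inspect the controller pod's image tag. The snippet below is a sketch: the `image` value shown is a hypothetical example of what the commented `kubectl` jsonpath query might return, and the parsing shows how to pull the controller version out of it.

```shell
# On a live cluster you would fetch the image string with:
#   image=$(kubectl get po -n ingress-nginx \
#       -o jsonpath='{.items[0].spec.containers[0].image}')
# A hypothetical example of such a string (digest shortened):
image="registry.k8s.io/ingress-nginx/controller:v1.11.5@sha256:d2fbc4ec"

tag="${image##*controller:}"   # drop everything up to and including "controller:"
tag="${tag%%@*}"               # drop the digest suffix, if present
echo "$tag"                    # expect: v1.11.5
```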
Rolling Back
If you decide that the upgrade is not something you want, and you would prefer to go back to the previous version (along with any vulnerabilities it may have), you can perform a rollback operation to reverse the above steps.
To do this, you should first run the following command to get a list of the recent helm history, so that you have an understanding of where you would like to roll back to:
```shell
helm history ingress-nginx -n ingress-nginx
```
This provides a list of the helm changes to your system, and may look something like this:
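As an illustration only (the revision numbers, dates, and descriptions below are hypothetical), the history might look something like:

```
REVISION  UPDATED                   STATUS      CHART                 APP VERSION  DESCRIPTION
1         Mon Jan  6 10:00:00 2025  superseded  ingress-nginx-4.11.3  1.11.3       Install complete
2         Mon Mar 24 09:00:00 2025  deployed    ingress-nginx-4.11.5  1.11.5       Upgrade complete
```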
Assuming you would like to roll back to one of the earlier revisions shown, you can use the following command:
```shell
# The revision number to be used in the rollback command will be the
# revision that was superseded by the upgrade.
helm rollback ingress-nginx <revision> -n ingress-nginx
```
It is important to note that after such a rollback, you will again need to wait for the replacement pod to be prepared and then delete the currently running ingress pod (as described in steps 2 and 3 above) before the rolled-back version of the ingress is in place.