AKO on vSphere with Tanzu with VDS
Overview
The Avi Kubernetes Operator (AKO) is an operator that works as an Ingress controller and performs Avi-specific functions in the Kubernetes environment with the Avi Controller. It remains in sync with the necessary Kubernetes objects and calls the Avi Controller APIs to configure the virtual services.
AKO in vSphere with Tanzu
When using VDS as the networking option for vSphere with Tanzu (TKGs), AKO will automatically be deployed into the Supervisor cluster to handle L4 workloads in Avi. This para-virtualized AKO will manage the L4 workloads in both the Supervisor cluster and each of the workload clusters.
Additionally, AKO can be manually deployed through Helm into the workload cluster to support L7 Ingress workloads.
For more information, see Install NSX Advanced Load Balancer.
AKO Compatibility
When deploying AKO through helm in the workload cluster, the Avi version must be compatible with the AKO release. For more information, see Compatibility Guide for AKO.
Deployment Guide
AKO can be installed on any workload cluster through helm to handle the L7 Ingress workloads.
Deploying the Avi Controller
To deploy an Avi Controller (or a cluster of Avi Controllers), see Installing Avi Vantage for VMware vCenter.
Configuring vCenter Cloud on Avi
The point of integration between Avi and vCenter is called a cloud. For the vCenter environment, a vCenter cloud must be configured. For more information on configuring the vCenter cloud, see Installing Avi Vantage for VMware vCenter.
Configuring Avi IPAM Profile
Avi allocates virtual service IP addresses from a pool of IP addresses within the configured subnet. After creating the IPAM profile, modify the vCenter cloud and add the profile as shown below.
For more information, see IPAM Provider (Avi Vantage).
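The IPAM profile can be created from the Avi UI (Templates > Profiles > IPAM/DNS Profiles) or through the Avi REST API. Below is a minimal sketch using the REST API; the controller address, credentials, and the vip-network name are placeholders for this environment, and the exact payload fields can differ between Avi releases, so verify them against the API guide for the version in use.

# Sketch: create an internal IPAM profile that allocates VIPs from the placeholder
# network "vip-network". Controller address and credentials are placeholders.
curl -k -u 'admin:<password>' -X POST 'https://<controller-ip>/api/ipamdnsproviderprofile' \
  -H 'Content-Type: application/json' -H 'X-Avi-Version: 22.1.2' \
  -d '{
        "name": "vcenter-ipam",
        "type": "IPAMDNS_TYPE_INTERNAL",
        "internal_profile": {
          "usable_networks": [
            { "nw_ref": "/api/network/?name=vip-network" }
          ]
        }
      }'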
Configuring Avi DNS Profile
AKO uses FQDN and path-based routing, so Avi must be authoritative for the specified domain. After creating the DNS profile, modify the vCenter cloud and add the profile as shown below.
For more information, see IPAM Provider (Avi Vantage).
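As with the IPAM profile, the DNS profile can be created from the Avi UI or through the REST API. A minimal sketch, assuming a placeholder domain of tkg.example.com (again, verify the payload fields against the API guide for the version in use):

# Sketch: create an internal DNS profile so that Avi is authoritative for the
# placeholder domain "tkg.example.com".
curl -k -u 'admin:<password>' -X POST 'https://<controller-ip>/api/ipamdnsproviderprofile' \
  -H 'Content-Type: application/json' -H 'X-Avi-Version: 22.1.2' \
  -d '{
        "name": "tkg-dns",
        "type": "IPAMDNS_TYPE_INTERNAL_DNS",
        "internal_profile": {
          "dns_service_domain": [
            { "domain_name": "tkg.example.com" }
          ]
        }
      }'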
Installing Helm on the Workload Cluster
Helm is an application manager that facilitates the installation of packages in a Kubernetes environment. AKO requires Helm for installation. For more information on the install commands, see Installing Helm.
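For reference, Helm 3 can be installed on the machine used to manage the workload cluster with the official installer script (one of several supported methods; see Installing Helm):

# Download and run the official Helm 3 installer script.
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh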
Pod Routing
Kubernetes Ingress traffic can be routed to the pods in the following ways:
- ClusterIP
- NodePort
- NodePortLocal
ClusterIP
In ClusterIP mode, the Avi SEs route directly to the Pod IPs. For this to work, Avi configures static routes on the SEs for the internal Kubernetes cluster network. With this design, Avi can health check each pod individually and provide persistence at the application level. However, this design requires a dedicated SE Group per cluster, since each SE Group carries the static routes for a single Kubernetes cluster. Additionally, the Avi SEs must have a vNIC in the Kubernetes node network.
NodePort
In NodePort mode, the Avi SEs route to the Kubernetes service. No static routes are required, because the service is externally reachable through the NodePort. This design allows the SE Group to be reused, since no static routes to the Kubernetes nodes are needed. However, this design limits health monitoring and persistence, because traffic distribution to the pods is handled by kube-proxy.
NodePortLocal
In NodePortLocal mode, the Avi SEs route directly to the pods through per-pod node ports: each pod is exposed on its node through a dedicated port. No static routes are required with this design, since the node ports are externally reachable. Additionally, this design allows the reuse of SE Groups.
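The routing mode is selected per cluster through the serviceType parameter in the AKO values.yaml described in the next section, for example:

# Excerpt from the AKO values.yaml: choose how the SEs reach the pods.
L7Settings:
  serviceType: NodePort   # or ClusterIP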
Installing AKO on the Workload Cluster
AKO is installed through Helm using a values.yaml file with various parameters specific to the environment. For more information, see values.yaml. When using VDS in the TKGs environment, the following parameters must be configured:
- AKOSettings.clusterName: cluster-1. Create a unique cluster name for each cluster.
- AKOSettings.layer7Only: true. Set this to true so that Avi handles L7 while VDS continues to handle L4.
- NetworkSettings.vipNetworkList: Define the VIP network list.
- L7Settings.serviceType: Set this to either ClusterIP (default) or NodePort.
- ControllerSettings.serviceEngineGroupName: Default-Group
- ControllerSettings.controllerVersion: 22.1.2
- ControllerSettings.cloudName: vcenter-cloud
- ControllerSettings.controllerHost: ''. Set this to the Avi Controller IP or FQDN.
- ControllerSettings.tenantName: admin
- avicredentials.username: username
- avicredentials.password: password
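A values.yaml fragment combining the parameters above might look like the following; the controller address, credentials, VIP network name, and CIDR are placeholders for this environment:

AKOSettings:
  clusterName: cluster-1          # unique name per workload cluster
  layer7Only: true                # Avi handles L7; VDS keeps handling L4
NetworkSettings:
  vipNetworkList:
    - networkName: vip-network    # placeholder VIP network name
      cidr: 192.168.100.0/24      # placeholder VIP subnet
L7Settings:
  serviceType: ClusterIP          # or NodePort
ControllerSettings:
  serviceEngineGroupName: Default-Group
  controllerVersion: '22.1.2'
  cloudName: vcenter-cloud
  controllerHost: '10.0.0.10'     # placeholder Avi Controller IP or FQDN
  tenantName: admin
avicredentials:
  username: admin                 # placeholder Avi credentials
  password: 'changeme'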
After configuring the necessary parameters in the values.yaml file, install AKO using the following command:
helm install ako/ako --generate-name --version 1.10.1 -f values.yaml --namespace=avi-system
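If the avi-system namespace and the AKO chart repository have not been created yet, they can be set up first. The repository URL below is the VMware Harbor registry commonly used for AKO; verify it against the Install Avi Kubernetes Operator documentation:

kubectl create namespace avi-system
helm repo add ako https://projects.registry.vmware.com/chartrepo/ako
helm repo update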
For complete installation steps, see Install Avi Kubernetes Operator.
Validating the AKO Installation
This optional step validates that the AKO pod is running in the avi-system namespace. Use the kubectl get pods -n avi-system command to confirm that the ako-0 pod is present.
Below is an example of the output:
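(Representative output; the restart count and age will differ in your environment.)

NAME    READY   STATUS    RESTARTS   AGE
ako-0   1/1     Running   0          2m14s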
Deploying an Ingress
AKO is now installed and configured for L7 Ingress. After the first Ingress is created, the appropriate objects are created in the Avi Controller, and an SE is automatically deployed (if one is not already running) to handle the L7 load balancing.
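A minimal Ingress manifest for a first test might look like the following; the hostname, Service name, and port are placeholders and must match an existing Service in the cluster and the domain configured in the Avi DNS profile:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  namespace: default
spec:
  ingressClassName: avi            # IngressClass created by AKO (omit if avi is the default class)
  rules:
    - host: demo.tkg.example.com   # placeholder FQDN within the Avi DNS domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service # placeholder existing Service
                port:
                  number: 80

The manifest can be applied with kubectl apply -f, after which the corresponding virtual service and pool appear in the Avi Controller.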