Tanzu Kubernetes Cluster Ingress with NSX-ALB

In this blog I will show how to use NSX-ALB (Avi) as an Ingress controller and a load balancer for Tanzu Kubernetes Clusters (TKC) in a vSphere with Tanzu environment on top of NSX-T.
I am running vSphere 7U1 with NSX-T 3.1.

The main motivation for this exercise is to provide an Ingress Controller for TKCs. Today (vSphere 7.0U1), TKCs use NSX-T for Service Type LoadBalancer, but they don't ship with any Ingress Controller. NSX-ALB is an enterprise-class Ingress Controller with GSLB and WAF capabilities, and it will most probably become the default choice for ingress in vSphere with Tanzu and TKG in the future. I am using Antrea as the CNI for Pod networking.

Here is the design I am aiming for:

The reason I am going with two-armed mode is that TKC nodes cannot be reached directly in an NSX-T environment. When we create a TKC, NSX-T automatically creates a Gateway Firewall rule that blocks direct access to the TKC nodes at the Cluster T1-GW level. Having our NSX-ALB SE directly connected to the nodes segment gives us the ability to bypass the Cluster T1-GW.

To integrate with K8s, we need a component called Avi Kubernetes Operator (AKO), which will run as a pod in our TKC. AKO will be the bridge between K8s API and NSX-ALB Controller for discovery and automation.

https://avinetworks.com/docs/ako/0.9/ako-design-and-deployment/

Assumptions:
1. ESXi and vCenter 7U1 are deployed
2. NSX-T Manager, Edge, and T0-GW are deployed
3. T0-GW is already peered with the Physical Network using BGP or Static Routes
4. WCP is enabled in vCenter with NSX-T
5. A Supervisor Cluster Namespace is created and set up correctly
6. Avi Controller is deployed as per my previous blog

Deploy a TKC Cluster

First we need to create our Tanzu Kubernetes Cluster (TKC) in a vSphere 7 environment. To do that, we apply a cluster spec YAML in our Supervisor Cluster, and the Kubernetes Cluster API (CAPI) takes care of deploying the cluster. Below is just an example and should be modified based on the cluster requirements.

apiVersion: run.tanzu.vmware.com/v1alpha1      
kind: TanzuKubernetesCluster
metadata:
  name: ali-tkg-cluster-1
  namespace: ali-namespace-01
spec:
  distribution:
    fullVersion: v1.17.8+vmware.1-tkg.1.5417466 
  topology:
    controlPlane:
      count: 1
      class: guaranteed-large                  
      storageClass: gold-storage-policy         
    workers:
      count: 3
      class: guaranteed-large                  
      storageClass: gold-storage-policy         
  settings:
    network:
      cni:
        name: antrea
      services:
        cidrBlocks: ["198.51.100.0/12"]     
      pods:
        cidrBlocks: ["192.0.2.0/16"]  

This should create the cluster with the Antrea CNI on vSphere, as shown below.

NSX-ALB Controller Prerequisites

Before deploying Avi Kubernetes Operator (AKO) in our TKC, there are multiple steps we need to go through in NSX-ALB Controller.

Create IPAM & DNS Profile to Automate creating DNS entries and IP address assignment when an Ingress is created.
Templates >> Profiles >> IPAM/DNS Profiles >> Create DNS Profile
This is going to be the Sub-Domain Name which will be used for our Ingress. There is no need to configure the sub-domain in our DNS Server because we will delegate it to NSX-ALB DNS at a later step.

Create the IPAM profile,
Templates >> Profiles >> IPAM/DNS Profiles >> Create IPAM Profile
We should add the network that will be used for our VIPs (I am using an NSX-T overlay segment).

Now we need to point to the DNS/IPAM profiles in our Default-Cloud.
Go to Infrastructure >> Clouds >> Default Cloud to add the IPAM and DNS profiles.

While we are in the Default-Cloud, let's go to the Data Center tab to enable three things: DHCP, Prefer Static Routes vs Directly Connected Network, and Use Static Routes for Network Resolution of VIP.
By doing that, NSX-ALB will automatically create static routes to the internal K8s Pod subnets through the K8s node IP addresses. This works because, in our design, the NSX-ALB SE is directly connected to the nodes L2 segment. I will show the static routes later when AKO is deployed.
Even though I have NSX-T in my environment and I am using NSX-T for the SE connectivity, I am using the Default-Cloud, which is a vCenter cloud. I am doing that because AKO does not support the NSX-T cloud at the time of writing.

In the Network tab, select the management network of the SEs. I am using a VDS dPG, but an NSX-T overlay segment could be used too.

For Kubernetes environments, each K8s cluster needs its own SE Group. Let's go ahead and create one for our cluster.
Infrastructure >> Service Engine Group >> CREATE
All that is needed is to give it a name; you can leave the rest at the defaults. I named it ali-tkg-cluster-1, which is the name of my Tanzu K8s cluster.

The last thing we need to do is make sure our VIP network and a default route are configured correctly.
Infrastructure >> Networks
We can add an IP pool for the data network that will be used for the VIPs (NSX-ALB-Data in my case).

Infrastructure >> Routing
Configure a default route for the VIPs

Deploy NSX-ALB SE Manually (Optional)

In this guide, I have decided to deploy the SEs manually for better control over my lab's limited resources. If you are looking to deploy the SEs automatically once an Ingress or a Service Type LoadBalancer is created, then please skip this section.

First, I need to change my Default-Cloud access permission from Write to Read to be able to download the SE OVA and avoid any automatic SE creation.
Infrastructure >> Clouds >> Edit Default Cloud

Now we can download the SE OVA by pressing the download icon in front of the Default-Cloud.

Now we can deploy the SE OVA in vCenter. The main thing is to get our port assignments right for the management, data, and pool sides.

– Management: in my case, I am using a VDS dPG for management, but it could be an NSX-T overlay segment.
– Data Network 1: I am using this one for the VIPs. It is an NSX-T overlay segment.
– Data Network 2: this is the NSX-T overlay segment used by the TKC nodes. It is very important to get this one right to have L2 connectivity to the K8s nodes, since AKO will create static routes to reach the pods inside the nodes.

The other configuration we need to get right is the authentication token and the Controller Cluster UUID for the Avi Controller. We can get both from the NSX-ALB Controller.
Go to the Default-Cloud and press the key icon on the right.

Then enter those during the OVA setup.

Once the OVA is deployed, it should show up in the Controller. Let's go ahead and move it to our cluster's Service Engine Group.
Infrastructure >> Service Engine >> Edit

Edit the Service Engine by changing the SE Group. Make sure your Networks and IP addresses are assigned correctly. If not, please assign the IP addresses manually.

DNS Configuration

NSX-ALB handles Ingress external IP addresses a bit differently than other K8s Ingress Controllers. Typically, other controllers assign all Ingresses a single IP address, and a DNS wildcard record points to that IP address. With NSX-ALB, however, each K8s Ingress gets a separate IP address from our VIP pool, which enhances the high availability of the solution.
If you want to know more about why DNS wildcard records are not the best choice, please watch the video below,
https://youtu.be/1t0nayBmQ1g
To avoid configuring a DNS record per Ingress, we need to delegate our sub-domain to the NSX-ALB DNS service.
First, we need to configure a DNS Virtual Service (VS)
Application >> Virtual Service >> CREATE VIRTUAL SERVICE
Under Settings, we need to give it a name, a VIP address (Auto-Allocate), and an Application Profile (System-DNS).

Under Advanced, our SE Group should be picked. The SE Group does not have to be the same as the one we are using for AKO if there are other SEs deployed in other SE Groups.

It should look like below (it is OK if it has a lower health score when it is just deployed). Please take note of the IP address, as it will be used for the domain delegation.

Now we need to enable the service under Administration >> Settings >> DNS Service by pointing it to our DNS-VS.

In our main DNS server, we want to delegate our sub-domain to the NSX-ALB VS
(the sub-domain should match our DNS Profile).

Point it to the DNS-VS IP address
(I already created a DNS entry ali-avi-dns-01 >> 192.168.28.106).

By doing that, any DNS request for the sub-domain ali-avi.vmwdxb.com will be delegated to the NSX-ALB VS.
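
For reference, the equivalent delegation in a BIND-style zone file for vmwdxb.com would look roughly like this (a sketch only; adjust it to whatever DNS server you are running):

; glue record for the NSX-ALB DNS Virtual Service
ali-avi-dns-01   IN  A    192.168.28.106
; delegate the ali-avi sub-domain to it
ali-avi          IN  NS   ali-avi-dns-01.vmwdxb.com.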

Deploy AKO

Log in to our TKC using the command below

$ kubectl vsphere login --server <WCP-IP-ADDRESS> -u administrator@vsphere.local --insecure-skip-tls-verify --tanzu-kubernetes-cluster-namespace ali-namespace-01 --tanzu-kubernetes-cluster-name ali-tkg-cluster-1

Create the avi-system namespace
$ kubectl create ns avi-system

Add AKO helm repo
$ helm repo add ako https://avinetworks.github.io/avi-helm-charts/charts/stable/ako 


(optional) Search for AKO in the repo to check the version
$ helm search repo | grep ako
NAME      CHART VERSION   APP VERSION   DESCRIPTION
ako/ako   1.2.1           1.2.1         A helm chart for Avi Kubernetes Operator

Get the values.yaml base file, or simply copy it from below
$ curl -JOL https://raw.githubusercontent.com/avinetworks/avi-helm-charts/master/charts/stable/ako/values.yaml 
The file will be used to create the K8s ConfigMap for our AKO pod

Edit the file as per your environment.
The values that need to be changed are marked with inline comments below

#values.yaml
replicaCount: 1

image:
  repository: avinetworks/ako
  pullPolicy: IfNotPresent

AKOSettings:
  logLevel: "INFO" 
  fullSyncFrequency: "1800" 
  apiServerPort: 8080 
  deleteConfig: "false" 
  disableStaticRouteSync: "false" 
  clusterName: "ali-tkg-cluster-1"  #TKC Name
  cniPlugin: "" 

NetworkSettings:
  nodeNetworkList:
  - networkName: "vnet-domain-c34:f593d27f-228d-4795-af60-626f6a697dff-ali-namespace-01-al-d4350-0" #TKC Nodes Segment
    cidrs:
    - 10.244.1.48/28          #TKC Nodes CIDR
  subnetIP: "192.168.28.0"     #VIP Subnet
  subnetPrefix: "24"           
  networkName: "NSX-ALB-Data"  #VIP Network

L7Settings:
  defaultIngController: "true"
  l7ShardingScheme: "hostname"
  serviceType: ClusterIP 
  shardVSSize: "LARGE" 
  passthroughShardSize: "SMALL" 

L4Settings:
  defaultDomain: "" 

ControllerSettings:
  serviceEngineGroupName: "ali-tkg-cluster-1" 
  controllerVersion: "20.1.2" 
  cloudName: "Default-Cloud" 
  controllerIP: "192.168.10.83"

nodePortSelector: 
  key: ""
  value: ""

resources:
  limits:
    cpu: 250m
    memory: 300Mi
  requests:
    cpu: 100m
    memory: 75Mi

podSecurityContext: {}

avicredentials:
  username: "admin"
  password: "CONTROLLER-PWD"

service:
  type: ClusterIP
  port: 80

persistentVolumeClaim: ""
mountPath: "/log"
logFile: "avi.log"

Deploy AKO using helm
$ helm install ako/ako --generate-name --version 1.2.1 -f values.yaml --namespace=avi-system
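
A quick way to confirm what the chart deployed (AKO runs as a StatefulSet, and the rendered settings end up in a ConfigMap in the avi-system namespace; object names may vary slightly between AKO versions):

$ helm list -n avi-system
$ kubectl get statefulset,configmap,secret -n avi-system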


Check the AKO pod status. If you see restarts, or the status is not "Running", then there is something wrong with the values.yaml file.

$ kubectl get pods -n avi-system
NAME    READY   STATUS    RESTARTS   AGE
ako-0   1/1     Running   0          1h
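
If the pod keeps restarting, the AKO logs usually point at the offending values.yaml field (wrong Controller IP, credentials, or network name). Standard kubectl troubleshooting applies:

$ kubectl logs ako-0 -n avi-system | tail -20
$ kubectl describe pod ako-0 -n avi-system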

Now check that AKO created the static routes automatically for the pods under Infrastructure >> Routing

The static routes are created automatically because of the options we enabled earlier in our Default-Cloud (Prefer Static Routes vs Directly Connected Network and Use Static Routes for Network Resolution of VIP).
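
To cross-check the routes against the cluster, the node addresses and Pod CIDRs that AKO syncs can be read from the TKC itself with plain kubectl (a sketch, not an AKO command):

$ kubectl get nodes -o wide
$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'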

Deploy Test Application with HTTP Ingress

Create a new Namespace for our Application
$ kubectl create ns yelb

Deploy yelb application
$ kubectl apply -f https://raw.githubusercontent.com/aidrees/yelb/main/yelb-no-lb.yaml -n yelb

Deploy an Ingress (you should change the “host” to match your domain name)
$ kubectl apply -f https://raw.githubusercontent.com/aidrees/yelb/main/yelb-ingress.yaml -n yelb

#yelb-ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: yelb-ingress
spec:
  rules:
  - host: "yelb.ali-avi.vmwdxb.com"
    http:
      paths:
      - path: 
        backend:
          serviceName: yelb-ui
          servicePort: 80
$ kubectl get ingress -n yelb
NAME           HOSTS                    ADDRESS          PORTS   AGE
yelb-ingress   yelb.ali-avi.vmwdxb.com   192.168.28.101   80     14m
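
A quick sanity check from any machine that uses our main DNS server; the hostname should resolve to the ADDRESS shown above through the delegated sub-domain, so no extra DNS record is needed:

$ dig +short yelb.ali-avi.vmwdxb.com
$ curl -I http://yelb.ali-avi.vmwdxb.com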

Check NSX-ALB Application Dashboard,

Now we can access our application using "yelb.ali-avi.vmwdxb.com". Please note that I did not need to configure any specific DNS record for it because we are delegating the "ali-avi.vmwdxb.com" sub-domain to the NSX-ALB DNS Virtual Service.
One more thing to notice: NSX-ALB automatically created an HTTPS ingress as well. Give it a try and access the application over HTTPS.

We can see in the NSX-ALB Controller how our application is performing

And we can even see the logs for a specific request

Deploy Test Application with HTTPS Ingress

Let's deploy another application
$ kubectl create ns hipster
$ kubectl apply -f https://raw.githubusercontent.com/aidrees/k8s-lab/master/hipster-no-lb.yaml -n hipster


Create an HTTPS Ingress (don't forget to change the host to match your domain name and certificate)
$ kubectl apply -f https://raw.githubusercontent.com/aidrees/k8s-lab/master/ingress.yml -n hipster

apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: hipster-tls
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVYekNDQWtjQ0ZDUUVsVnVYck5OWW1XZDdRUHE0WHhndlFoZEJNQTBHQ1NxR1NJYjNEUUVCQ3dVQU1Hd3gKQ3pBSkJnTlZCQVlUQWxWVE1Rc3dDUVlEVlFRSURBSkRRVEVTTUJBR0ExVUVCd3dKVUdGc2J5QkJiSFJ2TVF3dwpDZ1lEVlFRS0RBTlBVek14RERBS0JnTlZCQXNNQTBWdVp6RWdNQjRHQTFVRUF3d1hhR2x3YzNSbGNpNWhjSEJ6CkxtTnZjbkF1Ykc5allXd3dIaGNOTWpBd01UQTBNREUxTVRVeVdoY05NakF4TWpJNU1ERTFNVFV5V2pCc01Rc3cKQ1FZRFZRUUdFd0pWVXpFTE1Ba0dBMVVFQ0F3Q1EwRXhFakFRQmdOVkJBY01DVkJoYkc4Z1FXeDBiekVNTUFvRwpBMVVFQ2d3RFQxTXpNUXd3Q2dZRFZRUUxEQU5GYm1jeElEQWVCZ05WQkFNTUYyaHBjSE4wWlhJdVlYQndjeTVqCmIzSndMbXh2WTJGc01JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBMFViUmoyVVgKeFhPMVFpQXA1MFZzWXFIUWtpWkk3aCs2b2tEWkVuWUE1TEtxMTV4M0RMTjkxZkFTWUhsR0MwcEYvSXFEb3NVYgpHeFJkbGtlbFZuL29EeUt6Q0ZmeS9SeFNkVmRXNm8vTXovS1ovcStkUklSTWJMbU01SXVwbXJIOEZwdFR4TElwCmM4dWtZby9pNHZhcFlOY1ZpaWhFOVJ1T2cxTWFoSCtBVEpJOEorR0o5aVhwdUZCOEswaStqYU93OTNrZGFQSXMKd243RzBNT2NrSCt6QXBZOTFJSDhSN3pkOWZXa0U0ZHJWMzM2QVFoUWUyTi9Ia0w0b05SZythOG90Y2RyVDdyRwpHbStTRjUvOTZNd3pUdUZnZWJIbHhybWQ3emFUUGJLYkwvaHlhRi9CSEtBWFJhOFpYdTNNdVJzdW41bW0yZTZvCkp3MVJ4aE50ZFN1SmxRSURBUUFCTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElDQVFDUzg2TkVwY1lSS2h6NmQ5UkoKaWJ0TXRBQ0RxdHFKNkF0SWJsbFYyYjlSVEYvZ2ZKaVNBTEJnNnJQRXpPbndVVmV3bE9sTXRtZGtyekdCY042TApUelAzSE1RQjQxNy8ycnVHUXczaFdoNEpRdXpLOHZFTW1YOFlsSkpzSk0yS0svSitnRHNKendIM2ZSTWhDWjBHCnltYVVtbE5rN1RzQVRIWk1CcDV0aUZmL0dIN01jeVVSWkNxak4ydHREN2REUEhmb3JvMzM1VjBuMzNZSkxQZHEKRzl1bDNlUnNvcFRTZ2VFWWh0ZVRBMkpFdktkN0xMOXF6dG9raXp0eTNlcmZ1c2NrVk0wS3FkeUhkcTBPUlpNMgpySHhZU280TUJJb0phTjZ6Z282bTFQMmNoL2wyNXU5NUVkWlA1cnJJOWduL3JEbmJwdU44QlY0R3k1ZEYyRDY0CmY0SlplZVVzc281VG9ETzlvODBWY3pValhPMC9BamtmWTdPNEE3UlNiOXpIQldwbURrajBWTGdITEN6ekFXcUUKMjA4WHkrcXV1UXRSV2VhMFVsSTcvM09zdlJHLzlyYVFEYXZoUlhNWmZXZVBPdm1jZzdYK25zN29RTWhZNkoxWApKK09VYXdPWG9oc0NlVlhDcFVEZVFCN1pxbTNBamE5WjBVOXNJdG1Ud3paU3NSbE4rdi9xdkZ1UUkvbDVBQWxFCkhlaWw1ck1lQzdwTjBkYi9qU1JtZldxMEtXL1NCU3hKaVoyeFMrMVBNMEk4Uk44RURONFlaaXg0SGkrblRhYzAKVGk4MCtFT2xkaHEwR0g4bnFXSkRFSGFiVUVKNWk4SjAybm85MDV0M1dKdXdNYWo1Y3czdkJQMGVMUVd6SVBWbQpmeDVCelZYWnpLL3lTa0xHM05QNFpuOW1PQT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  tls.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRRFJSdEdQWlJmRmM3VkMKSUNublJXeGlvZENTSmtqdUg3cWlRTmtTZGdEa3NxclhuSGNNczMzVjhCSmdlVVlMU2tYOGlvT2l4UnNiRkYyVwpSNlZXZitnUElyTUlWL0w5SEZKMVYxYnFqOHpQOHBuK3I1MUVoRXhzdVl6a2k2bWFzZndXbTFQRXNpbHp5NlJpCmorTGk5cWxnMXhXS0tFVDFHNDZEVXhxRWY0Qk1randuNFluMkplbTRVSHdyU0w2Tm83RDNlUjFvOGl6Q2ZzYlEKdzV5UWY3TUNsajNVZ2Z4SHZOMzE5YVFUaDJ0WGZmb0JDRkI3WTM4ZVF2aWcxR0Q1cnlpMXgydFB1c1lhYjVJWApuLzNvekROTzRXQjVzZVhHdVozdk5wTTlzcHN2K0hKb1g4RWNvQmRGcnhsZTdjeTVHeTZmbWFiWjdxZ25EVkhHCkUyMTFLNG1WQWdNQkFBRUNnZ0VBZlllOFJnWStwd3JMNi9rOUNXT2tLdG1qTVRkVHdibzRpZ0RaOUcvaUEweUUKbThaWHhyK1h1STlEaHFqWDhnZkFTVWFReFQ3MERsODk3OW5UL0RuRzZlVkhmTGE4bzBTczFZUHBOOU8vNS9BKwpuUDJjR1RBK1kyMDliUTIxVTN4MW1OM2M5bnhqenZpVkJ5WUYwMXhmcHgzODVwMVhGNnRLNWMwZ2Q5Ky9CcTRQCk1VT0MxRkFTMkVHZ2FFdHlFSkpSVU1mN3NvYWx2RTM1Y0ZNVUFLZ1pNNEFqUmtBTDFDQ1BvOVQ5bjZJczdRNG4KdFVlSkV1cUt1YVplUUc0ZG5EbUVYazNNZlM0QVZrMnRsdjhNVW5EUVRMUW5qMzdDK0krN1ZLWmg3UGJKbjQ4ZgpBSFVtZjZ4VnpiV3FNQTdUMFFPOTdoRk9KT1R6RVZXUXlWc1BST0xNRlFLQmdRRHRUTE54UlcvdGNwMmgvUUZHClJtc3Q2OGdFckd6bTFUY2l3eTVReGNVWHUyYjk2NnY1UWNjci91bGZKRkJSQ09lb0NPQmRyb2lwd2dhdnpVd2EKYnlJS0RjZ3hWNDZ6a1JtOVpYaXR6OFczY1k3LzdLajFDdjRCYVMzSEI3cEpBc1pUYVJ2V2tCVVhBSnhobW1vbgoweWJuMkZTaDI5NlBZTFNGM2I0RmFVeSszd0tCZ1FEaHhNZTJJdDZ4SnZieUcyVVVEa2FjaWtsVnh6ZkZKSGp1CjRlcHJ5YlplT1BrSml4cXM5cnRUL2VoYnNtbVRNT0ROSzVuQ1poaGgzMGVCYm5FRk04N3I0Qi9TT3d5SmY3UWsKRElGOEg1VUxJcTRTeFpXanFvcnJyVEdsRWdaUmZ6emdJMHZRMVdWT05Rc3FGWWcxd1l3cXI2WC9WUUdiZ1RxagpTcWx5emdscUN3S0JnSG1RRWxqVGpud2dmQm93eHdkZUthZlRvcHFxVGZ1T2ZIbEZiYU9aUE5ka2ZHVlY1cnFBCjlPeFg0T3VKYWMrcGRTc0NxUlcweEhQYVhweU8yZzZzb2M1dXN3Qjc3ekdVQXBDZ3U4cW1wbzNNRWNxUFRScUMKOEE1KytDRitsdkt5QmpGU3BoMHJvSEl4TU90Yk5FaUVoZWk5VE5YQ0VlaDNUT05LN2Y1TnJEQVhBb0dCQUpUWQo3L0tkT3NVQk0zNmJvU0IvNlAzOERpMkhrclZmUG53QVpsVjZQOG9QTmVHYzNKRjhlalQrQ2R1cTNRQTJFWUF6ClpzUk1HM2NyaGpGSFp5eE80L1dQWm10c2t1OTBTb2daMXFUSERiU3h3S0tQc2dDZHg4bHAvbmtlVVJ3YUQwQ1gKQkwxQ2MvQUQrTUJlUWRkdks4SlkyOUJqY3hQYk41WEErOGE5SUdmUkFvR0FIVFQ5dlhuVGVxWmRialFNTDR1MwpJMXZScnpOWThhSmVVTHlGNU5CQVJCU1I2QzFXVHZBK0syYVhmc29UUTNHVTBhdjFwK1VMVHBKVHNKTEQ2bTVMCjRPL1E3allIcXFjaHR1RGRsUmlqMFMzRHUzK3NuOXJ5WUE3OUJkc0t3QnhQTmpER3pkZ1puWENZT3NjSzJQS1AKNHBKMEpTODh5L25ldU9FMVVCZElOVXc9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K

---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hipstershop
  annotations:
    ncp/http-redirect: "true"
spec:
  tls:
  - hosts:
    - hipster.ali-avi.vmwdxb.com
    secretName: hipster-tls
  rules:
  - host: hipster.ali-avi.vmwdxb.com
    http:
      paths:
      - path: 
        backend:
          serviceName: frontend
          servicePort: 80
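
If you would rather generate your own certificate instead of reusing the base64 blobs above, a self-signed one can be created and wrapped into an equivalent Secret like this (a sketch; kubectl create secret tls produces the same kubernetes.io/tls Secret as the manifest above):

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout tls.key -out tls.crt -subj "/CN=hipster.ali-avi.vmwdxb.com"
$ kubectl create secret tls hipster-tls --cert=tls.crt --key=tls.key -n hipster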

Check out HTTPS Ingress
$ kubectl get ingress -n hipster
NAME          HOSTS                        ADDRESS          PORTS
hipstershop   hipster.ali-avi.vmwdxb.com   192.168.28.103   80, 443


Please note the different IP address per Ingress (even if it was HTTP, NSX-ALB will always assign a different IP address).

Now let's access our application using HTTPS. Please note that we did not need to configure any DNS entries because we are delegating ali-avi.vmwdxb.com to the NSX-ALB DNS Virtual Service.
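
If the certificate is self-signed, as lab certificates usually are, curl needs -k to skip verification:

$ curl -k -I https://hipster.ali-avi.vmwdxb.com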

Deploy Service Type Load Balancer

Delete the previously created Ingress to avoid confusion
$ kubectl delete -f https://raw.githubusercontent.com/aidrees/k8s-lab/master/ingress.yml -n hipster

Create a Service Type LoadBalancer
$ kubectl apply -f https://raw.githubusercontent.com/aidrees/k8s-lab/master/hipster-lb-svc.yaml -n hipster

apiVersion: v1
kind: Service
metadata:
  name: frontend-external
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - name: http
    port: 80
    targetPort: 8080

Check if the service is created and get the external IP address
$ kubectl get svc -n hipster | grep LoadBalancer
NAME                TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
frontend-external   LoadBalancer   198.54.78.174   192.168.28.103   80:31982/TCP   19s


Access the application using the IP address.

With NSX-ALB, an FQDN is assigned automatically even for a Service of Type LoadBalancer. It can be seen in the NSX-ALB UI

We can access the app using the external IP Address or the FQDN without configuring DNS.
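
For example (the L4 VIP answers directly on port 80; the auto-created FQDN can be copied from the virtual service in the NSX-ALB UI, and its exact format depends on the DNS profile and the L4Settings in values.yaml):

$ curl -I http://192.168.28.103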

Deploy a Traffic Generator (Optional)

To get some nice diagrams, let's deploy a traffic generator. I am using Locust.

Deploy a test app
$ kubectl apply -f https://raw.githubusercontent.com/aidrees/acme_fitness/main/secrets.yaml
$ kubectl apply -f https://raw.githubusercontent.com/aidrees/acme_fitness/main/acme_fitness.yaml

Deploy Service Type Load Balancer (or Ingress)
$ kubectl apply -f https://raw.githubusercontent.com/aidrees/acme_fitness/main/acme-lb.yaml

Deploy Traffic Generator
$ git clone https://github.com/aidrees/traffic-generator.git
$ cd traffic-generator
$ pip install -r requirements.txt
$ kubectl apply -f loadgen.yaml

$ locust --host=http://xxx.ali-avi.vmwdxb.com --port=8085
Open your browser, go to http://localhost:8085, and start swarming

Explore NSX-ALB UI for some nice traffic analytics.

Conclusion

That concludes this post. I hope you enjoyed reading it and learned something from it. In this post, I showed how to make NSX-ALB work in a vSphere with Tanzu environment with NSX-T.
This is my first experience with AKO. I can honestly say that I enjoyed testing NSX-ALB (Avi), and I am very impressed with the solution. The built-in analytics capabilities are really nice, not to mention Active-Active load balancing, dynamic SE creation, auto-scaling, etc. I am aware that I am only scratching the surface of the solution, and hopefully I will get the chance to test and write about its WAF and GSLB capabilities.

Thank you for reading!
