WHAT TO EXPECT
Before running the desired application in the cloud, it is a good idea to first test with the tools and examples provided by swXtch.io.
In this article, users will learn how to test xNIC with K8s. Please complete the installation process outlined in Install xNIC Daemonset on K8s Cluster before beginning testing.
Prerequisites
For this test to work, the cluster must have at least two worker nodes, since the pod anti-affinity rules below require the producer and consumer to run on different nodes.
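To confirm the node count, run the following from the CloudShell window (this assumes nothing beyond a working kubectl context):

kubectl get nodes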
STEP ONE: Create A Consumer
Create a TestConsumer.yaml file using the example below.
Replace AAA.BBB.CCC.DDD in the XNIC_SWXTCH_ADDR environment variable with the cloudSwXtch control address.

apiVersion: v1
kind: Pod
metadata:
  name: consumer-a
  labels:
    app: consumer-a
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - producer-a
                  - consumer-b
          topologyKey: kubernetes.io/hostname
  containers:
    - name: consumer-a
      image: ubuntu:20.04
      securityContext:
        privileged: true
      env:
        - name: IS_DAEMON
          value: "false"
        - name: PERF_TYPE
          value: "consumer"
        - name: PERF_NIC
          value: "eth0"
        - name: PERF_MCGIP
          value: "239.0.0.10"
        - name: PERF_MCGPORT
          value: "8410"
        - name: XNIC_SWXTCH_ADDR
          value: "AAA.BBB.CCC.DDD"
      command: ["/bin/bash"]
      args: ["-c", "apt update && apt install curl -y; curl http://$(XNIC_SWXTCH_ADDR)/services/install/swxtch-xnic-k8s-install.sh --output swxtch-xnic-k8s-install.sh; chmod +x swxtch-xnic-k8s-install.sh; ./swxtch-xnic-k8s-install.sh -v 2; sleep infinity"]

Azure CloudShell provides a way to upload the file directly, but users can also copy and paste the content into the console using a Linux editor, such as nano or vi.
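As an optional shortcut, the placeholder address can be substituted from the shell before the file is applied. The command below is a minimal sketch; 10.0.0.5 stands in for your actual cloudSwXtch control address:

sed -i 's/AAA.BBB.CCC.DDD/10.0.0.5/' TestConsumer.yaml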
STEP TWO: Create a Producer
Create a TestProducer.yaml file using the example below.
Replace AAA.BBB.CCC.DDD in the XNIC_SWXTCH_ADDR environment variable with the cloudSwXtch control address.

apiVersion: v1
kind: Pod
metadata:
  name: producer-a
  labels:
    app: producer-a
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - consumer-a
                  - producer-b
          topologyKey: kubernetes.io/hostname
  containers:
    - name: producer-a
      image: ubuntu:20.04
      securityContext:
        privileged: true
      env:
        - name: IS_DAEMON
          value: "false"
        - name: PERF_TYPE
          value: "producer"
        - name: PERF_NIC
          value: "eth0"
        - name: PERF_MCGIP
          value: "239.0.0.10"
        - name: PERF_MCGPORT
          value: "8410"
        - name: PERF_PPS
          value: "100"
        - name: XNIC_SWXTCH_ADDR
          value: "AAA.BBB.CCC.DDD"
      command: ["/bin/bash"]
      args: ["-c", "apt update && apt install curl -y; curl http://$(XNIC_SWXTCH_ADDR)/services/install/swxtch-xnic-k8s-install.sh --output swxtch-xnic-k8s-install.sh; chmod +x swxtch-xnic-k8s-install.sh; ./swxtch-xnic-k8s-install.sh -v 2; sleep infinity"]

Upload the file or copy the content into an editor, as in Step One.
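Before moving on, both manifests can optionally be checked client-side. This is a quick sanity check and assumes a kubectl version that supports --dry-run=client (v1.18 or later):

kubectl apply --dry-run=client -f TestProducer.yaml
kubectl apply --dry-run=client -f TestConsumer.yaml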

STEP THREE: Run Test
Create the producer pod by running this command in the CloudShell window.
kubectl create -f TestProducer.yaml

Create the consumer pod by running this command in the CloudShell window:
kubectl create -f TestConsumer.yaml
Validate they are running using this command:
kubectl get pods -o wide -A

Below is an example in Azure showing consumer-a and producer-a running:
$ kubectl get pods -o wide -A
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system consumer-a 1/1 Running 0 15m 10.0.1.91 aks-nodepool1-23351669-vmss000006 <none> <none>
kube-system producer-a 1/1 Running 0 15m 10.0.1.90 aks-nodepool1-23351669-vmss000005 <none> <none>
kube-system cilium-node-init-kbql4 1/1 Running 0 27h 10.2.128.101 aks-nodepool1-23164585-vmss00000j <none> <none>
kube-system cilium-node-init-sg4vc 1/1 Running 0 27h 10.2.128.100 aks-nodepool1-23164585-vmss00000i <none> <none>
kube-system cilium-nx7vl 1/1 Running 0 27h 10.2.128.100 aks-nodepool1-23164585-vmss00000i <none> <none>
kube-system cilium-operator-6485c89c66-748tz 1/1 Running 0 27h 10.2.128.101 aks-nodepool1-23164585-vmss00000j <none> <none>
kube-system cilium-vv4qs 1/1 Running 0 27h 10.2.128.101 aks-nodepool1-23164585-vmss00000j <none> <none>
kube-system cloud-node-manager-mncgk 1/1 Running 0 27h 10.2.128.100 aks-nodepool1-23164585-vmss00000i <none> <none>
kube-system cloud-node-manager-qg5wf 1/1 Running 0 27h 10.2.128.101 aks-nodepool1-23164585-vmss00000j <none> <none>
kube-system coredns-autoscaler-569f6ff56-qtqpr 1/1 Running 0 28h 10.0.0.121 aks-nodepool1-23164585-vmss00000i <none> <none>
kube-system coredns-fb6b9d95f-blk6j 1/1 Running 0 28h 10.0.0.236 aks-nodepool1-23164585-vmss00000i <none> <none>
kube-system consumer-a coredns-fb6b9d95f-pxzh2 1/1 Running 0 28h 10.0.0.131 aks-nodepool1-23164585-vmss00000i <none> <none>
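Because of the pod anti-affinity rules in the manifests, consumer-a and producer-a should land on different nodes. One way to confirm this is to filter the same output for just the test pods:

kubectl get pods -o wide -A | grep -E 'consumer-a|producer-a'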
STEP FOUR: Validate The Test Is Running
Users can validate the test is working by viewing the pod logs with these commands:
Producer
kubectl logs pods/producer-a -f

The console will show something similar to:
swx-perf producer threads started... Ctrl+C to exit.
|-------------------------------------|-------------------------------|
|               TOTALS                |          THIS PERIOD          |
|    TX PKTS    | TX BYTES | TX DROPS |  TX-PPS  |  TX-bps  | TX-DPS  |
|---------------|----------|----------|----------|----------|---------|
|         1,283 |    128KB |        0 |    1.28K |  1.0Mbps |       0 |
|         2,274 |    227KB |        0 |      991 |  792Kbps |       0 |
|         3,267 |    326KB |        0 |      993 |  794Kbps |       0 |
|         4,262 |    426KB |        0 |      995 |  796Kbps |       0 |

This shows the producer sending the multicast stream correctly.
Consumer
Then, for the consumer:
kubectl logs pods/consumer-a -f

It will show something similar to:
swx-perf consumer threads started... Ctrl+C to exit.
|-------------------------------------|-------------------------------|
|               TOTALS                |          THIS PERIOD          |
|    RX PKTS    | RX BYTES | RX DROPS |  RX-PPS  |  RX-bps  | RX-DPS  |
|---------------|----------|----------|----------|----------|---------|
|             0 |       0B |        0 |        0 |     0bps |       0 |
|             0 |       0B |        0 |        0 |     0bps |       0 |
|             0 |       0B |        0 |        0 |     0bps |       0 |
|           330 |  33.00KB |        0 |      330 |  264Kbps |       0 |
|         1,326 |    132KB |        0 |      996 |  796Kbps |       0 |
|         2,328 |    232KB |        0 |    1.00K |  801Kbps |       0 |
|         3,330 |    333KB |        0 |    1.00K |  801Kbps |       0 |

This shows the consumer receiving the multicast stream.
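To spot-check both sides without streaming the logs, the most recent counter rows can be pulled with --tail, for example:

kubectl logs pods/producer-a --tail=5
kubectl logs pods/consumer-a --tail=5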
Using the UI
Alternatively, users can log into their cloudSwXtch and run this command to see data flowing between nodes:
swx-top

swx-top should show the traffic of the producer (TX) and the consumer (RX).
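Reaching swx-top typically means opening an SSH session to the cloudSwXtch VM first; the user name and address below are placeholders for your own environment:

ssh <admin-user>@<cloudSwXtch-ctrl-ip>
swx-top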

STEP FIVE: Clean Up the Pods
Stop the test consumer by running this command in the CloudShell window:
kubectl delete -f TestConsumer.yaml

swx-top should no longer show the consumer. Additionally, running kubectl get pods -o wide should show the consumer in a Terminating state while the producer keeps running, as shown below:
$ kubectl get pods -o wide -A
NAME                READY   STATUS        RESTARTS   AGE   IP            NODE                                NOMINATED NODE   READINESS GATES
consumer-a          1/1     Terminating   0          15m   10.0.1.91     aks-nodepool1-23351669-vmss000006   <none>           <none>
producer-a          1/1     Running       0          15m   10.0.1.90     aks-nodepool1-23351669-vmss000005   <none>           <none>
swxtch-xnic-46qgg   1/1     Running       0          39m   10.2.128.96   aks-nodepool1-23351669-vmss000005   <none>           <none>
swxtch-xnic-szdk7   1/1     Running       0          40m   10.2.128.95   aks-nodepool1-23351669-vmss000004   <none>           <none>
Stop the test producer by running this command in the CloudShell window:
kubectl delete -f TestProducer.yaml

swx-top should no longer show the producer; this may take a minute to update. Additionally, running kubectl get pods -o wide should show the producer in a Terminating state, as shown below:
$ kubectl get pods -o wide -A
NAME                READY   STATUS        RESTARTS   AGE   IP            NODE                                NOMINATED NODE   READINESS GATES
producer-a          1/1     Terminating   0          15m   10.0.1.90     aks-nodepool1-23351669-vmss000005   <none>           <none>
swxtch-xnic-46qgg   1/1     Running       0          42m   10.2.128.96   aks-nodepool1-23351669-vmss000005   <none>           <none>
swxtch-xnic-szdk7   1/1     Running       0          42m   10.2.128.95   aks-nodepool1-23351669-vmss000004   <none>           <none>
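Once both pods have finished terminating, a final check can confirm that no test pods remain; the command below prints nothing when cleanup is complete:

kubectl get pods -A | grep -E 'consumer-a|producer-a'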
Now that the system has been validated using swXtch.io tools, users can test with their own K8s applications.