WHAT TO EXPECT
For Kubernetes, the xNIC is a lightweight DaemonSet that must be installed on every node with pods sending or receiving cloudSwXtch traffic. It creates a virtual network interface within each node in the Kubernetes cluster. Applications that use IP multicast should target this virtual network interface.
In this article, users will learn how to install the xNIC DaemonSet for Kubernetes on one of the supported clouds (AKS, EKS, or GKE).
Overview
The xNIC is used only for multicast traffic; unicast traffic is unaffected and continues to work as before. The default interface the xNIC uses is eth0. It can be installed via your preferred cloud's CloudShell, or you can assign a VM as a manager to control your cluster. Either way, access to both the cloudSwXtch and the cluster is required.
This document walks through the CloudShell method; however, the commands below work either in CloudShell or on the VM managing the K8s cluster.
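As a quick sanity check before installing, you can confirm that the CloudShell (or manager VM) can actually reach the cluster. The kubectl call below (a minimal check, assuming your kubeconfig already points at the target cluster) should list the cluster's nodes:
# Confirm cluster access by listing the cluster's nodes
kubectl get nodes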
Running the xNIC DaemonSet Install Script
BEFORE YOU START
If you haven't already, please create a Kubernetes cluster. This is a prerequisite for installing the xNIC.
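If you need a cluster for testing, the sketch below shows one way to create a small AKS cluster and fetch its credentials. The resource group and cluster names (xnic-test-rg, xnic-test-cluster) are hypothetical placeholders; adjust the node count and other options for your environment. EKS and GKE have equivalent eksctl and gcloud workflows.
# Create a small AKS cluster (illustrative values only)
az aks create --resource-group xnic-test-rg --name xnic-test-cluster --node-count 2 --generate-ssh-keys
# Merge the new cluster's credentials into your kubeconfig so kubectl can reach it
az aks get-credentials --resource-group xnic-test-rg --name xnic-test-cluster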
To make installation easy, the xNIC is installed from the cloudSwXtch instance via a one-line shell command. The xNIC is matched to the attached cloudSwXtch and should be upgraded if the cloudSwXtch version changes.
This process takes less than a minute to install on an existing K8s cluster.
To run the install:
Ensure your cloudSwXtch is version 2.0.89 or greater. If it is not, see Upgrading cloudSwXtch.
Sign into your desired cloud provider.
Open CloudShell as Bash.
Paste in the following commands, replacing <cloudSwXtch_IP> with your cloudSwXtch's Ctrl IP address.
kubectl run installer --image=busybox -- sh -c "wget http://<cloudSwXtch_IP>/services/install/xnic_ds_installer.sh; sleep 3650"
kubectl cp default/installer:/xnic_ds_installer.sh xnic_ds_installer.sh
kubectl delete po/installer --grace-period 1
chmod +x xnic_ds_installer.sh
Run one of the following scripts:
cloudSwXtch with Internet Access:
./xnic_ds_installer.sh
cloudSwXtch without Internet Access (Air-Gapped):
./xnic_ds_installer.sh -ag
An example of a successful install without internet access is shown below:
$ ./xnic_ds_installer.sh -ag
[i] Detected Cloud: AZURE
[i] Cilium Installation detected
[i] Setting CNI to CILIUM...
########################################################################################################
This script modifies the underlying configuration of Cilium CNI to make it compatible with
Multicast Networks. It also installs xNIC DaemonSet on the existing cluster.
########################################################################################################
 - RUNNING INSTALLER: Airgap
 - IMAGE: 10.144.0.115:443/xnicv2:airgap
 - CNI PLUGIN: CILIUM
 - SWXTCH IP ADDRESS: 10.144.0.115
 - AGENT TYPE: XNIC XCD
=======================================================
Adjusting BPF filter priority on Cilium
=======================================================
Setting flag "bpf-filter-priority" to "50000"
configmap/cilium-config patched
Done!
=======================================================
Restarting Cilium Agents
=======================================================
daemonset.apps/cilium restarted
daemonset.apps/cilium-node-init restarted
Waiting for Cilium Agents to be fully UP and Running......OK
Done!
Proceeding with xNIC Installation
=======================================================
Creating xNIC ConfigMap
=======================================================
configmap/xnic-config created
=======================================================
Installing xNIC
=======================================================
daemonset.apps/swxtch-xnic created
Done!
==================== Completed! =======================
Please allow a minute for the xNIC DaemonSet to fully spin up before starting to use it.
Feel free to follow up on the xNIC Agents installation by running
kubectl logs -n kube-system daemonsets/swxtch-xnic -f
Run the following command to view the xNIC DaemonSet logs in the Bash window:
kubectl logs -n kube-system daemonsets/swxtch-xnic -f
Use the command below to follow the xNIC DaemonSet pod status in the Bash window and check that they have started (i.e., STATUS shows "Running"):
kubectl get pods -l app=swxtch-xnic -n kube-system
Example:
user@Azure:~$ kubectl get pods -l app=swxtch-xnic -n kube-system
NAME                READY   STATUS    RESTARTS   AGE
swxtch-xnic-fc58t   1/1     Running   0          11d
swxtch-xnic-kn9hg   1/1     Running   0          11d
Sign into your cloudSwXtch and enter the following command to see the new instances in swXtch-top:
swxtch-top
Restarting xNIC DaemonSet
To restart the xNIC DaemonSet for K8s, run the following command:
kubectl rollout restart ds/swxtch-xnic -n kube-system
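To confirm the restart completed and all xNIC pods came back up, you can watch the rollout status:
# Wait until the restarted DaemonSet pods are all Ready
kubectl rollout status ds/swxtch-xnic -n kube-system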
Managing Multicast Traffic
The following tc commands can be useful for allowing or denying incoming and outgoing multicast traffic on producer and consumer pods. You must run these commands inside the target producer/consumer pods so that the correct interface name (eth0 in the examples) is picked up.
By default, ALL multicast traffic is allowed on every pod.
For Outgoing (Traffic leaving the Pod)
Deny ALL outgoing multicast
To deny all outgoing multicast, use the following commands:
Specific syntax:
# DENY ALL OUTGOING
tc qdisc add dev eth0 root handle 1: prio
tc filter add dev eth0 parent 1: protocol ip u32 match ip dst 224.0.0.0/4 action drop
Alternatively, users can deny outgoing multicast to specific groups:
General Syntax:
# DENY OUTGOING TO SPECIFIC GROUP(S)
tc qdisc add dev eth0 root handle 1: prio
tc filter add dev eth0 parent 1: protocol ip u32 match ip dst <multicast_group_0> action drop
...
tc filter add dev eth0 parent 1: protocol ip u32 match ip dst <multicast_group_n> action drop
Example: denying outgoing traffic to multicast group 239.0.0.1:
tc qdisc add dev eth0 root handle 1: prio
tc filter add dev eth0 parent 1: protocol ip u32 match ip dst 239.0.0.1/32 action drop
Allow outgoing multicast to specific group(s) - Deny all others
# DENY ALL OUTGOING
tc qdisc add dev eth0 root handle 1: prio
tc filter add dev eth0 parent 1: protocol ip u32 match ip dst 224.0.0.0/4 action drop
# ALLOW SPECIFIC GROUP(S)
tc filter add dev eth0 parent 1: protocol ip u32 match ip dst <multicast_group_0> action ok
...
tc filter add dev eth0 parent 1: protocol ip u32 match ip dst <multicast_group_n> action ok
Example: allowing outgoing traffic ONLY to multicast group 239.0.0.1:
tc qdisc add dev eth0 root handle 1: prio
tc filter add dev eth0 parent 1: protocol ip u32 match ip dst 224.0.0.0/4 action drop
tc filter add dev eth0 parent 1: protocol ip u32 match ip dst 239.0.0.1/32 action ok
For Incoming (Traffic entering the Pod)
To deny ALL incoming multicast, use the following command:
Specific syntax:
# DENY ALL INCOMING
tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match ip dst 224.0.0.0/4 action drop
Alternatively, users can deny incoming multicast for specific group(s):
General syntax:
# DENY INCOMING TO SPECIFIC GROUP(S)
tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match ip dst <multicast_group_0> action drop
...
tc filter add dev eth0 parent ffff: protocol ip u32 match ip dst <multicast_group_n> action drop
Example: denying incoming traffic to multicast group 239.0.0.1:
tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match ip dst 239.0.0.1/32 action drop
In addition, users can allow incoming multicast for specific group(s) while denying all others:
General syntax:
# DENY ALL INCOMING
tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match ip dst 224.0.0.0/4 action drop
# ALLOW SPECIFIC GROUP(S)
tc filter add dev eth0 parent ffff: protocol ip u32 match ip dst <multicast_group_0> action ok
...
tc filter add dev eth0 parent ffff: protocol ip u32 match ip dst <multicast_group_n> action ok
Example: allowing incoming traffic ONLY to multicast group 239.0.0.1:
tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match ip dst 224.0.0.0/4 action drop
tc filter add dev eth0 parent ffff: protocol ip u32 match ip dst 239.0.0.1/32 action ok
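To inspect which filters are currently installed on the pod's interface, or to remove them all and return to the default allow-all behavior, the standard tc commands below can help (run inside the pod, like the filters above):
# List egress filters; -s also shows packet/byte counters per filter
tc -s filter show dev eth0
# List ingress filters
tc -s filter show dev eth0 ingress
# Remove the egress prio qdisc together with all of its filters
tc qdisc del dev eth0 root
# Remove the ingress qdisc together with all of its filters
tc qdisc del dev eth0 ingress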
Getting a shell to an xNIC DaemonSet pod
At times, it is useful to get a shell inside a pod and run commands such as swxtch-tcpdump. To accomplish this, follow these steps:
Sign into your desired cloud.
Open CloudShell as Bash. In this example, the user is using Azure.
Enter the following command to get the pod name:
kubectl get pods -l app=swxtch-xnic -n kube-system
Example:
user@Azure:~$ kubectl get pods -l app=swxtch-xnic -n kube-system
NAME                READY   STATUS    RESTARTS   AGE
swxtch-xnic-fc58t   1/1     Running   0          11d
swxtch-xnic-kn9hg   1/1     Running   0          11d
Enter the following command, replacing swxtch-xnic-name with your pod's name:
kubectl exec -it pod/swxtch-xnic-name -n kube-system -- bash
Example:
user@Azure:~$ kubectl exec -it pod/swxtch-xnic-kn9hg -n kube-system -- bash
root@aks-nodepool1-23164585-vmss00000A:/
You can now enter commands as you would on any VM, such as ip a or swxtch-tcpdump -i eth0. Note that the pods created in this example do not include standard tools such as tcpdump; however, swxtch-tcpdump will work. For testing, see swxtch-perf under Testing cloudSwXtch.
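For example, assuming swxtch-tcpdump accepts standard tcpdump-style capture filters (an assumption; check the cloudSwXtch documentation for its exact options), a capture limited to multicast traffic might look like:
# Capture only packets addressed to multicast groups (tcpdump-style filter assumed)
swxtch-tcpdump -i eth0 ip multicast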
Switching Contexts
If you have more than one Kubernetes cluster, you may need to change the context to work on the desired instance. For more information, please review the Changing K8s Context in Your Preferred Cloud section.
Accessing xNIC Logs
You can get xNIC logs once signed in to the pod. See How to Find xNIC Logs and follow directions for xNIC.
Using xNIC config
Once you're signed into the pod, you can view the xNIC config with the command below:
cat /var/opt/swxtch/swxtch-xnic.conf
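If you only need a quick look at the config, you can skip the interactive shell and run the command through kubectl directly against the DaemonSet, which picks one of its pods for you:
# Print the xNIC config from one of the DaemonSet's pods
kubectl exec -n kube-system ds/swxtch-xnic -- cat /var/opt/swxtch/swxtch-xnic.conf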
Exiting the Pod
To exit the pod, enter the following command:
exit
Changing K8s Context in Your Preferred Cloud
If there is more than one K8s cluster in your preferred cloud, you may need to switch between them to run commands in the CloudShell Bash. Below are the steps to switch between K8s clusters.
Get a list of all K8s Contexts by using the following command:
kubectl config get-contexts
Example in Azure:
user@Azure:~$ kubectl config get-contexts
CURRENT   NAME                 CLUSTER              AUTHINFO                                          NAMESPACE
          cilium-sample        cilium-sample        clusterUser_saDevNetwork_cilium-sample
          cilium-sample-200    cilium-sample-200    clusterUser_test-donna-200-rg_cilium-sample-200
          cilium-sample2       cilium-sample2       clusterUser_saDevNetwork_cilium-sample2
*         cilium-sample300     cilium-sample300     clusterUser_test-donna-300-rg_cilium-sample300
          dsd-k8-cluster-100   dsd-k8-cluster-100   clusterUser_saDevNetwork_dsd-k8-cluster-100
Notice in the list above that there are multiple contexts, but only one has an asterisk (*). The asterisk marks the current (default) context.
To change context, run the following command. This example changes to cilium-sample2.
kubectl config use-context cilium-sample2
Re-run the get-contexts command:
kubectl config get-contexts
Example in Azure:
user@Azure:~$ kubectl config get-contexts
CURRENT   NAME                 CLUSTER              AUTHINFO                                          NAMESPACE
          cilium-sample        cilium-sample        clusterUser_saDevNetwork_cilium-sample
          cilium-sample-200    cilium-sample-200    clusterUser_test-donna-200-rg_cilium-sample-200
*         cilium-sample2       cilium-sample2       clusterUser_saDevNetwork_cilium-sample2
          cilium-sample300     cilium-sample300     clusterUser_test-donna-300-rg_cilium-sample300
          dsd-k8-cluster-100   dsd-k8-cluster-100   clusterUser_saDevNetwork_dsd-k8-cluster-100
As you can see above, the asterisk (*) has moved to the desired context, cilium-sample2.
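Alternatively, to print just the active context without the full table:
# Show only the context kubectl is currently using
kubectl config current-context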