Universal Third-Party Tools Useful for Testing


WHAT TO EXPECT

While xNIC installation provides users with a number of useful tools to test the functionality of the cloudSwXtch network (swx-perf, swx-tcpdump, etc.), there is also a wealth of universal third-party tools available for the same purposes.

In this article, we will take a deeper dive into these alternative options and understand their basic functionality.

Please note that if the system has multiple interfaces, the production and consumption of multicast and/or broadcast data must be done on the data interface. Some tools allow the user to select or bind the interface, while others do not. In the latter case, users will need to modify the routing table to ensure the proper interface is selected. For example, if Windows VLC is not consuming the stream, check whether it is joining on the correct interface using the command below in PowerShell:

netsh.exe interface ipv4 show joins
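
On Linux, a rough equivalent for checking which multicast groups an interface has joined (assuming the data interface is named ens6) is:

ip maddr show dev ens6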

VLC

VLC is a free and open-source cross-platform multimedia player and framework that plays most multimedia files and various streaming protocols. As a highly visual tool, it can be used to demonstrate the delivery and fidelity of video streams from the cloudSwXtch to the xNIC, and it also offers a fairly powerful command-line interface. On Linux, the cvlc variant works better on headless VMs because it does not attempt to open the VLC GUI.

In the following example, a Linux producer streams a .ts file (Transport Stream) over RTP to the multicast address 239.1.1.2 on port 10000 in an indefinite loop, using the data interface ens6. The consumer receives the stream on the same data interface.

Producer

cvlc file:///home/ubuntu/your_video.ts --sout '#rtp{dst=239.1.1.2,port=10000,mux=ts}' --miface=ens6 --loop

Sample output in the console for the producer:

$ cvlc file:///home/ubuntu/your_video.ts --sout '#rtp{dst=239.1.1.2,port=10000,mux=ts}' --miface=ens6 --loop
VLC media player 3.0.20 Vetinari (revision 3.0.20-0-g6f0d0ab126b)
[000055dac7b95dd0] vlcpulse audio output error: PulseAudio server connection failure: Connection refused
[000055dac7bbe8f0] dummy interface: using the dummy interface module...

NOTES

  • On the consumer VM, the video will not show as playing if the session has no graphical display (for example, a terminal emulator without a window server). Users can verify delivery by checking the statistics in swx-top or wXcked Eye, or by recording the stream to a file as shown after these notes.

  • The vlcpulse audio error is completely normal when running in a headless cloud VM, where no sound hardware is available.
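
As a sketch of that file-based verification (the address and interface match the example above; /tmp/capture.ts is an arbitrary path), the consumer can record the incoming stream instead of playing it:

cvlc rtp://@239.1.1.2:10000 --miface=ens6 --sout '#standard{access=file,mux=ts,dst=/tmp/capture.ts}'

Stop it with Ctrl+C after a few seconds and confirm that /tmp/capture.ts has grown.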

Consumer

cvlc rtp://@239.1.1.2:10000 --miface=ens6

Sample console output from the consumer (this run was started with vlc rather than cvlc). The numerous audio-related errors can be safely ignored:

$ vlc rtp://@239.1.1.2:10000 --miface=ens6
VLC media player 3.0.20 Vetinari (revision 3.0.20-0-g6f0d0ab126b)
[00005d46352141b0] vlcpulse audio output error: PulseAudio server connection failure: Connection refused
[00005d463517b220] main libvlc: Running vlc with the default interface. Use 'cvlc' to use vlc without interface.
ALSA lib confmisc.c:855:(parse_card) cannot find card '0'
ALSA lib conf.c:5204:(_snd_config_evaluate) function snd_func_card_inum returned error: No such file or directory
ALSA lib confmisc.c:422:(snd_func_concat) error evaluating strings
ALSA lib conf.c:5204:(_snd_config_evaluate) function snd_func_concat returned error: No such file or directory
ALSA lib confmisc.c:1342:(snd_func_refer) error evaluating name
ALSA lib conf.c:5204:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:5727:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2721:(snd_pcm_open_noupdate) Unknown PCM default
[00005d46352141b0] alsa audio output error: cannot open ALSA device "default": No such file or directory
[00005d46352141b0] main audio output error: module not functional
[0000792a60037200] main decoder error: failed to create audio output
Failed to open VDPAU backend libvdpau_nvidia.so: cannot open shared object file: No such file or directory


Note for Windows VMs with multiple NICs

VLC on Windows (version 3.0.x) does not let users choose the interface. To make VLC use the correct data interface, users will need to modify the routing table so that the proper interface is selected.
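
As a hedged sketch of that routing change in PowerShell (the local IP 192.168.0.2 and interface index 17 are placeholders for the actual data NIC; run from an elevated prompt):

# Find the interface index of the data NIC
Get-NetIPInterface -AddressFamily IPv4

# Send administratively scoped multicast (239.0.0.0/8) out the data interface
route add 239.0.0.0 mask 255.0.0.0 192.168.0.2 if 17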

FFmpeg

The FFmpeg project provides two command-line tools (ffmpeg and ffplay) that can be used to demonstrate the delivery of a stream: ffmpeg acts as the streamer and ffplay as the player. Unlike VLC, ffmpeg has no GUI, but while VLC requires a file to stream from the producer to the consumer, ffmpeg can generate video test patterns itself. In the following example, we will use "testsrc" as the video source, which displays a color pattern with a scrolling gradient and a timestamp when viewed.

Note

FFmpeg can be compiled with different sets of features, so users may or may not be able to use some of them with their current binary. For example, SRT libraries are not included by default on some Linux distributions.

In this example, the user is testing a stream with a resolution of 640×480 at 30 fps, muxed as MPEG-TS, using lavfi (the Libavfilter input virtual device) for 1000 seconds, sent to the multicast address 239.1.1.1 on port 11000 with a packet size of 1316 bytes.

Producer

ffmpeg -hide_banner -f lavfi -re -i testsrc=duration=1000:size=640x480:rate=30 -f mpegts "udp://239.1.1.1:11000?pkt_size=1316"

Below is an example of ffmpeg producer output.

$ ffmpeg -hide_banner -f lavfi -re -i testsrc=duration=1000:size=640x480:rate=30 -f mpegts "udp://239.1.1.1:11000?pkt_size=1316"
Input #0, lavfi, from 'testsrc=duration=1000:size=640x480:rate=30':
  Duration: N/A, start: 0.000000, bitrate: N/A
  Stream #0:0: Video: wrapped_avframe, rgb24, 640x480 [SAR 1:1 DAR 4:3], 30 fps, 30 tbr, 30 tbn
Stream mapping:
  Stream #0:0 -> #0:0 (wrapped_avframe (native) -> mpeg2video (native))
Press [q] to stop, [?] for help
Output #0, mpegts, to 'udp://239.1.1.1:11000?pkt_size=1316':
  Metadata:
    encoder         : Lavf60.16.100
  Stream #0:0: Video: mpeg2video (Main), yuv420p(tv, progressive), 640x480 [SAR 1:1 DAR 4:3], q=2-31, 200 kb/s, 30 fps, 90k tbn
    Metadata:
      encoder         : Lavc60.31.102 mpeg2video
    Side data:
      cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: N/A
frame=  155 fps= 33 q=11.9 size=     369kB time=00:00:05.13 bitrate= 588.3kbits/s speed= 1.1x

Consumer

ffplay -hide_banner "udp://239.1.1.1:11000"

Below is an example of what users can expect as output of ffplay. 

$ ffplay -hide_banner "udp://239.1.1.1:11000"
[mpegts @ 0x74ec68000c80] Packet corrupt (stream = 0, dts = 432000).
Input #0, mpegts, from 'udp://239.1.1.1:11000':KB sq=    0B f=0/0
  Duration: N/A, start: 1.433333, bitrate: N/A
  Program 1
    Metadata:
      service_name    : Service01
      service_provider: FFmpeg
  Stream #0:0[0x100]: Video: mpeg2video (Main) ([2][0][0][0] / 0x0002), yuv420p(tv, progressive), 640x480 [SAR 1:1 DAR 4:3], 30 fps, 30 tbr, 90k tbn
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 49152 vbv_delay: N/A
  43.78 M-V:  0.347 fd=1261 aq=    0KB vq=    4KB sq=    0B f=1/1

A successful delivery will display the test-pattern video in a window on the consumer’s VM.


Note about the interface

If users need to change the interface used to produce or consume, they can modify the URL in the command by appending localaddr=<local_IP>, where <local_IP> is the IP of the data interface. Separate it from the rest of the URL with a “?” (or an “&” if a “?” is already present). For example, the same command as above, but producing through a data interface whose local IP is 192.168.0.2:

ffmpeg -hide_banner -f lavfi -re -i testsrc=duration=1000:size=640x480:rate=30 -f mpegts "udp://239.1.1.1:11000?pkt_size=1316&localaddr=192.168.0.2"
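
The same option works on the consumer side. For example, a consumer bound to a data interface whose local IP is (hypothetically) 192.168.0.3:

ffplay -hide_banner "udp://239.1.1.1:11000?localaddr=192.168.0.3"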

iPerf (v2.x)

Similar to swx-perf, iPerf is a multi-platform tool for network performance measurement and tuning, commonly used for connectivity testing and for bandwidth and latency measurements. It differs in that iPerf has additional arguments not found in swx-perf, our streamlined tool. Since iPerf v3 does not support multicast, it is advised to use iPerf v2.x for cloudSwXtch network testing.
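
To confirm which major version is installed before testing, users can run the following (the exact output format varies by build):

iperf -v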

In this example, the user is creating a UDP stream to multicast address 239.1.1.3, port 10000, for 120 seconds, with a buffer length of 1000 bytes (see the note below for binding to a specific interface such as ens6). With enhanced reporting enabled, stats are displayed every second.

Producer

iperf -c 239.1.1.3 -p 10000 -u -t 120 -i 1 -e -l 1000
  • Arguments Explained:

    • -c: client, the machine as the producer, specifying the multicast address (239.1.1.3)

    • -p: the port (10000)

    • -u: the producer will be sending UDP packets

    • -t: the duration of the stream in seconds (120)

    • -i: the time interval stats will be reported. In this case, it is every second (1).

    • -e: enhanced reporting (interval, transfer, bandwidth, write/err, PPS)

    • -l: length of buffers to read or write (1000)

This is an example of iPerf producer output with the interface bound to ens6 (note that this sample run uses a slightly different command, omitting the -t, -e, and -l arguments).

$ iperf -c 239.1.1.1%ens6 -p 10000 -u -i 1
------------------------------------------------------------
Client connecting to 239.1.1.1, UDP port 10000 with pid 379893 via ens6 (1 flows)
Sending 1470 byte datagrams, IPG target: 11215.21 us (kalman adjust)
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  1] local 172.31.103.38 port 47031 connected with 239.1.1.1 port 10000
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-1.0000 sec   131 KBytes  1.07 Mbits/sec
[  1] 1.0000-2.0000 sec   128 KBytes  1.05 Mbits/sec
[  1] 2.0000-3.0000 sec   128 KBytes  1.05 Mbits/sec
[  1] 3.0000-4.0000 sec   128 KBytes  1.05 Mbits/sec
[  1] 4.0000-5.0000 sec   128 KBytes  1.05 Mbits/sec
[  1] 5.0000-6.0000 sec   128 KBytes  1.05 Mbits/sec
[  1] 6.0000-7.0000 sec   129 KBytes  1.06 Mbits/sec
[  1] 7.0000-8.0000 sec   128 KBytes  1.05 Mbits/sec
[  1] 8.0000-9.0000 sec   128 KBytes  1.05 Mbits/sec
[  1] 9.0000-10.0000 sec   128 KBytes  1.05 Mbits/sec
[  1] 0.0000-10.0153 sec  1.25 MBytes  1.05 Mbits/sec

Note

If users need to force the data through a given interface (for example, ens6), they can append %<interface> to the IP address:

iperf -c 239.1.1.3%ens6 -p 10000 -u -t 120 -i 1 -e -l 1000

Consumer

iperf -s -u -B 239.1.1.3 -p 10000 -i 1
  • Arguments Explained:

    • -s: server, signifying a consumer

    • -u: consumer will be taking in UDP packets

    • -B: bind to the multicast address (note the capital B) (239.1.1.3)

    • -p: the port (10000)

    • -i: the time interval stats will be reported (1)

This is an example of iPerf consumer output using the ens6 interface. Note how each line item corresponds to the one-second reporting interval.

$ iperf -s -u -B 239.1.1.3%ens6 -p 10000 -i 1
------------------------------------------------------------
Server listening on UDP port 10000
Joining multicast (*,G)=*,239.1.1.3 w/iface ens6
Server set to single client traffic mode (per multicast receive)
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  1] local 239.1.1.3 port 10000 connected with 172.31.103.38 port 54577
[ ID] Interval       Transfer     Bandwidth        Jitter   Lost/Total Datagrams
[  1] 0.0000-1.0000 sec   129 KBytes  1.06 Mbits/sec   0.003 ms 5179/5311 (98%)
[  1] 1.0000-2.0000 sec   128 KBytes  1.05 Mbits/sec   0.004 ms 0/131 (0%)
[  1] 2.0000-3.0000 sec   128 KBytes  1.05 Mbits/sec   0.003 ms 0/131 (0%)
[  1] 3.0000-4.0000 sec   128 KBytes  1.05 Mbits/sec   0.004 ms 0/131 (0%)
[  1] 4.0000-5.0000 sec   128 KBytes  1.05 Mbits/sec   0.002 ms 0/131 (0%)
[  1] 5.0000-6.0000 sec   128 KBytes  1.05 Mbits/sec   0.004 ms 0/131 (0%)
[  1] 6.0000-7.0000 sec   128 KBytes  1.05 Mbits/sec   0.005 ms 0/131 (0%)
[  1] 7.0000-8.0000 sec   128 KBytes  1.05 Mbits/sec   0.003 ms 0/131 (0%)
[  1] 8.0000-9.0000 sec   128 KBytes  1.05 Mbits/sec   0.003 ms 0/131 (0%)

No Consumer Output

On some older versions of iPerf 2, the consumer will not show any output, even when configured to do so. Users can still check swx-top to confirm that the xNIC interface is receiving traffic; to remedy the issue itself, use the latest version of iPerf 2.
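
As another sanity check, a plain packet capture can confirm the multicast datagrams are arriving. A sketch, assuming the stream above and that the traffic is delivered on ens6:

sudo tcpdump -i ens6 -c 5 udp port 10000 and dst 239.1.1.3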

Note for Windows VMs with multiple NICs

To make iPerf 2 use the correct data interface, users will need to modify the routing table so that the proper interface is selected (see the routing example in the VLC section above).

sockperf

Another alternative to swx-perf is sockperf, a Linux-only network benchmarking utility built on the socket API and designed for testing the latency and throughput of high-performance systems. While it is similar to both swx-perf and iPerf, it has far fewer commands than its counterparts. sockperf is recommended for saturating the network interface and exercising the VM’s maximum bandwidth.

In this example, the user creates a stream to multicast address 239.1.1.4, port 10000, with a message size of 1472 bytes, sending 2000 messages per second for 30 seconds, and specifies the data interface to be used by both the producer and the consumer:

Producer

sockperf throughput --ip 239.1.1.4 --msg-size 1472 --port 10000 --mps 2000 --time 30 --mc-tx-if 172.31.103.38

This is a sample output for the producer:

$ sockperf throughput --ip 239.1.1.4 --msg-size 1472 --port 10000 --mps 2000 --time 30 --mc-tx-if 172.31.103.38
sockperf: == version #3.7-no.git ==
sockperf[CLIENT] send on:
[ 0] IP = 239.1.1.4       PORT = 10000 # UDP
sockperf: Warmup stage (sending a few dummy messages)...
sockperf: Starting test...
sockperf: Test end (interrupted by timer)
sockperf: Test ended
sockperf: Total of 60001 messages sent in 30.000 sec

sockperf: NOTE: test was performed, using msg-size=1472, mps=2000. For getting maximum throughput use --mps=max (and consider --msg-size=1472 or --msg-size=4096)
sockperf: Summary: Message Rate is 2000 [msg/sec]
sockperf: Summary: BandWidth is 2.808 MBps (22.461 Mbps)
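
As the NOTE in the output above suggests, users aiming to saturate the interface can replace the fixed rate with --mps=max, for example:

sockperf throughput --ip 239.1.1.4 --msg-size 1472 --port 10000 --mps=max --time 30 --mc-tx-if 172.31.103.38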

Consumer

sockperf server --ip 239.1.1.4 --Activity 800 --port 10000 --mc-rx-if 172.31.106.231

This will consume the traffic and print an activity report every 800 messages processed. The consumer will keep waiting for traffic from the producer until it is stopped.

$ sockperf server --ip 239.1.1.4 --Activity 800 --port 10000 --mc-rx-if 172.31.106.231
sockperf: == version #3.7-no.git ==
sockperf: [SERVER] listen on:
[ 0] IP = 239.1.1.4       PORT = 10000 # UDP
sockperf: Warmup stage (sending a few dummy messages)...
sockperf: [tid 315630] using recvfrom() to block on socket(s)
    -- Interval --     -- Message Rate --  -- Total Message Count --
    6227693 [usec]           128 [msg/s]              800 [msg]
     399994 [usec]          2000 [msg/s]             1600 [msg]
     400008 [usec]          1999 [msg/s]             2400 [msg]
     399977 [usec]          2000 [msg/s]             3200 [msg]
     400018 [usec]          1999 [msg/s]             4000 [msg]
     400001 [usec]          1999 [msg/s]             4800 [msg]
     400109 [usec]          1999 [msg/s]             5600 [msg]
     399889 [usec]          2000 [msg/s]             6400 [msg]
     400003 [usec]          1999 [msg/s]             7200 [msg]
     399992 [usec]          2000 [msg/s]             8000 [msg]
     399996 [usec]          2000 [msg/s]             8800 [msg]
     400010 [usec]          1999 [msg/s]             9600 [msg]
     399986 [usec]          2000 [msg/s]            10400 [msg]
     400018 [usec]          1999 [msg/s]            11200 [msg]
     399991 [usec]          2000 [msg/s]            12000 [msg]
     399990 [usec]          2000 [msg/s]            12800 [msg]
     400013 [usec]          1999 [msg/s]            13600 [msg]
     399986 [usec]          2000 [msg/s]            14400 [msg]
     400002 [usec]          1999 [msg/s]            15200 [msg]
     400008 [usec]          1999 [msg/s]            16000 [msg]
    -- Interval --     -- Message Rate --  -- Total Message Count --
     400015 [usec]          1999 [msg/s]            16800 [msg]
     399980 [usec]          2000 [msg/s]            17600 [msg]

Python/Go

Users may also choose to build their own tools to test their cloudSwXtch network. Below are two examples, in the Python and Go programming languages, that test multicast messaging. What is great about cloudSwXtch is that when we say it requires no code changes, the same applies to testing.

Python

Since Python comes pre-installed on most Linux machines, it offers a simple way to test the xNIC.

Producer

The key object here is the socket module, and the call used to send is sock.sendto. As users can see in the code, no modification is necessary to make it “compatible” with the xNIC/cloudSwXtch. The following code will work in any multicast-capable environment; note that a default VM in any public cloud will not be able to deliver the data unless the xNIC is installed.

  1. Create a producer.py file on the producer VM.

  2. Copy and paste the following script:

    import socket
    import struct
    import time
    
    def send_multicast_message(ip, port, message, delay, interface_ip=None):
        # Create the socket
        multicast_group = (ip, port)
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    
        # Set the time-to-live for messages to 1 so they do not go past the local network segment
        ttl = struct.pack('b', 1)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    
        if interface_ip:
            print(f"Sending via interface {interface_ip}")
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF, socket.inet_aton(interface_ip))
    
        try:
            while True:
                # Send the message
                print(f'Sending "{message}" to {ip}:{port}')
                sock.sendto(message.encode('utf-8'), multicast_group)
    
                # Delay between messages
                time.sleep(delay)
        finally:
            print('Closing socket')
            sock.close()
    
    # Example usage
    # Replace 'x.x.x.x' with the actual IP address of the data interface to use.
    send_multicast_message('239.1.1.1', 10000, 'Hello, Multicast!', 2, 'x.x.x.x')
  3. Replace x.x.x.x with the IP of the data interface. Save and close.

  4. Run the script:

    python3 producer.py

Below is an example of the output:

$ python3 prod.py
Sending via interface 172.31.103.38
Sending "Hello, Multicast!" to 239.1.1.1:10000
Sending "Hello, Multicast!" to 239.1.1.1:10000
Sending "Hello, Multicast!" to 239.1.1.1:10000
Sending "Hello, Multicast!" to 239.1.1.1:10000

Consumer

The key here is the .setsockopt() call with the IP_ADD_MEMBERSHIP option, which joins the multicast group.

  1. Create a receiver.py file on the consumer VM.

  2. Copy and paste the following script:

    import socket
    import struct
    
    def receive_multicast_message(ip, port, interface_ip=None):
        # Create the socket
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    
        # Bind to the server address
        sock.bind(('', port))
    
        # Tell the operating system to add the socket to the multicast group
        group = socket.inet_aton(ip)
        if interface_ip:
            print(f"Joining multicast group on interface {interface_ip}")
            mreq = struct.pack('4s4s', group, socket.inet_aton(interface_ip))
        else:
            mreq = struct.pack('4sL', group, socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    
        # Receive/respond loop
        while True:
            print('Waiting to receive')
            data, address = sock.recvfrom(1024)
    
            print(f'Received "{data.decode("utf-8")}" from {address}')
    
    # Example usage
    # Replace 'x.x.x.x' with the actual IP address of the data interface to use.
    receive_multicast_message('239.1.1.1', 10000, 'x.x.x.x')
  3. Replace x.x.x.x with the IP of the data interface. Save and exit.

  4. Run the script:

python3 receiver.py

Below is an example of the output on the consumer VM:

$ python3 cons.py
Joining multicast group on interface 172.31.106.231
Waiting to receive
Received "Hello, Multicast!" from ('172.31.103.38', 56547)
Waiting to receive
Received "Hello, Multicast!" from ('172.31.103.38', 56547)
Waiting to receive
Received "Hello, Multicast!" from ('172.31.103.38', 56547)
Waiting to receive

Go

Go is another popular programming language users can use for testing. Unlike Python, it must be installed manually, but it can produce better-performing programs. Use the following example scripts to test traffic.

Producer

The producer uses net.DialUDP to send messages over multicast.

// producer.go
package main

import (
        "fmt"
        "net"
        "os"
        "time"

        "golang.org/x/net/ipv4"
)

func main() {
        multicastAddr := "239.1.1.1:10000"
        interfaceName := "ens6" // ⬅️  Replace with the desired data interface name, e.g., "en0", "eth0"

        // Resolve the desired network interface by name
        iface, err := net.InterfaceByName(interfaceName)
        if err != nil {
                fmt.Printf("Error finding interface %s: %v\n", interfaceName, err)
                os.Exit(1)
        }

        // Resolve the remote multicast address
        remoteAddr, err := net.ResolveUDPAddr("udp", multicastAddr)
        if err != nil {
                fmt.Printf("Error resolving address %s: %v\n", multicastAddr, err)
                os.Exit(1)
        }

        // Create a regular UDP connection
        conn, err := net.DialUDP("udp", nil, remoteAddr)
        if err != nil {
                fmt.Println("Error creating UDP connection:", err)
                os.Exit(1)
        }

        // Create a new specialized IPv4 connection from the base UDP connection
        pc := ipv4.NewPacketConn(conn)

        // Set the multicast interface on the specialized connection
        if err := pc.SetMulticastInterface(iface); err != nil {
                fmt.Printf("Error setting multicast interface: %v\n", err)
                os.Exit(1)
        }
        defer conn.Close()

        for {
                message := "Hello, Multicast!"
                _, err = conn.Write([]byte(message))
                if err != nil {
                        fmt.Println("Error sending message:", err)
                        os.Exit(1)
                }
                fmt.Println("Message sent:", message)
                time.Sleep(2 * time.Second)
        }
}

Note

Remember to change interfaceName to match the data interface.
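
Because the producer imports the external golang.org/x/net/ipv4 package, the module must be fetched before the first run. A minimal setup might look like this (the module name is arbitrary):

go mod init multicast-test
go get golang.org/x/net/ipv4
go run producer.go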

Sample output:

$ go run prod.go
Message sent: Hello, Multicast!
Message sent: Hello, Multicast!
Message sent: Hello, Multicast!
Message sent: Hello, Multicast!

Consumer

The receiver uses net.ListenMulticastUDP to join the group and receive datagrams.

// receiver.go
package main

import (
        "fmt"
        "net"
        "os"
)

func main() {
        multicastAddr := "239.1.1.1:10000"
        interfaceName := "ens6" // ⬅️  Replace with the desired interface name, e.g., "en0", "eth0"

        // Resolve the desired network interface by name
        iface, err := net.InterfaceByName(interfaceName)
        if err != nil {
                fmt.Println("Error finding interface:", err)
                os.Exit(1)
        }

        // Resolve the multicast address
        addr, err := net.ResolveUDPAddr("udp", multicastAddr)
        if err != nil {
                fmt.Println("Error resolving address:", err)
                os.Exit(1)
        }

        // Listen for multicast traffic on the specified interface
        conn, err := net.ListenMulticastUDP("udp", iface, addr)
        if err != nil {
                fmt.Println("Error listening for multicast:", err)
                os.Exit(1)
        }
        defer conn.Close()

        buf := make([]byte, 1024)
        for {
                n, src, err := conn.ReadFromUDP(buf)
                if err != nil {
                        fmt.Println("Error receiving message:", err)
                        os.Exit(1)
                }
                message := string(buf[:n])
                fmt.Printf("Received message from %v: %s\n", src, message)
        }
}

Note

Remember to change interfaceName to match the data interface.

Sample output would be:

$ go run cons.go
Received message from 172.31.19.230:53197: Hello, Multicast!
Received message from 172.31.19.230:53197: Hello, Multicast!
Received message from 172.31.19.230:53197: Hello, Multicast!
Received message from 172.31.19.230:53197: Hello, Multicast!
Received message from 172.31.19.230:53197: Hello, Multicast!

Other Programming Languages

As with Python or Go, users can create programs in virtually any language using the same logic for sending multicast traffic and let the xNIC + cloudSwXtch take care of the rest. For example, in JavaScript (Node.js dgram), the calls would be server.send and server.on; in C++, sendto() and recvfrom().