Introducing fireperf

Michael Tremer michael.tremer at
Thu Feb 4 15:57:42 UTC 2021

Hello all y’all,

I would like to introduce a small side-project I have been working on called fireperf.

It is a networking benchmarking tool which I have written to debug some issues with IPFire on AWS and I thought this would be very useful for others, too.

A bit of a backstory

Everyone probably knows iperf and its newer sibling iperf3 and has used them. They can run bandwidth tests and are usually quite good at that. Unfortunately, they reach their limits quite early. In my environment, I had at least a 5 GBit/s connection between my two machines, and I wanted to create lots and lots of connections to stress-test the connection tracking.

This was unfortunately not possible with either of them: iperf starts a thread per connection, and iperf3 limits itself to 128 connections per process. Neither approach scales, because my goal was to create connections in the six-figure range or beyond. I simply ran out of memory.

Another issue that both sometimes have - and which I did not validate specifically in this case - is that they cannot generate enough traffic to saturate a link. However, I need to be able to trust that this is possible as long as I have the CPU resources available.

Therefore a new tool was needed.

When I started writing fireperf, I did not intend to make it fit for throughput tests, but since that was such a low-hanging fruit during development, I added it, too. The original goal was simply to open a large number of connections (at least hundreds of thousands), keep them open, and let me know when that is no longer possible.

Since I knew I would be running this on IPFire, I took advantage of Linux's modern APIs and tried to delegate as much work as possible to the kernel. Especially since the whole Meltdown/Spectre debacle, moving data between the kernel and userland is slow, and the less work I have to do in userland, the more time I can spend on other things.

Therefore I use epoll() to have the kernel tell me when a socket is ready to accept data, or when something has happened and the connection has broken down. I use getrandom() to obtain random data to send, and timerfd to notify me regularly when to print some statistics. As a result, this application is not very easily portable (portability was not an original design goal), but I am sure there are alternatives available if someone were to port it to another OS.

iperf3 - the most efficient one I knew of - used up all 8 GB of memory on my test system when started multiple times to create about 10k connections. fireperf uses a few hundred kilobytes with tens of thousands of open connections. In fact, it does not keep any per-connection state and therefore uses the same amount of memory no matter how many connections are open. The kernel will use some memory, though, but I could not measure how much.

Without saturating my processor, I can saturate any network link I could test, up to 10 GBit/s. CPU usage is normally less than 10%, and fireperf has a mode (-k) in which it won't send any data, but only keeps the connections open and regularly lets the kernel (again, because I am a lazy developer) send some keepalive packets. That way, it uses next to no CPU resources while still putting a lot of stress on the network.

So here it is, my new tool. I hope someone finds this useful.

It is nice and tiny, and everything comes in a single binary that only depends on the C standard library.

Sources are available on our Git server as usual:

I tagged release number 0.1.0, and I will push a patch into next very soon. There are also Debian packages available if you want to give fireperf a try on Debian:

  deb buster/
  deb-src buster/

Replace buster with bullseye or sid if you are on those and do not forget to import the key:

  curl | apt-key add -

Documentation in the form of a man page is available here:

It would be great if fireperf became a go-to tool for benchmarking IPFire. We could definitely do better in this department, and hopefully gain better insight into any performance regressions, or into whether certain hardware performs better than others. I suppose throughput is not everything, and fireperf should be able to help us measure other factors, too.
