From mboxrd@z Thu Jan 1 00:00:00 1970
From: Michael Tremer
To: development@lists.ipfire.org
Subject: Re: Introducing fireperf
Date: Thu, 04 Feb 2021 16:17:52 +0000
Message-ID:
In-Reply-To: <04D30B6A-D429-4CD2-8881-69B53BEDA49D@rymes.net>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="===============2216077609942648289=="
List-Id:

--===============2216077609942648289==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Hello,

Well, it is a network testing tool. They can all be used for some evil stuff, but so can ping(8).

I was thinking about this during development, but I am not sure what could be done to prevent it.

-Michael

> On 4 Feb 2021, at 16:16, Tom Rymes wrote:
>
> Michael,
>
> Any concerns that this could be used for evil?
>
> Tom
>
>> On Feb 4, 2021, at 10:57 AM, Michael Tremer wrote:
>>
>> Hello all y'all,
>>
>> I would like to introduce a small side project I have been working on called fireperf.
>>
>> It is a network benchmarking tool which I wrote to debug some issues with IPFire on AWS, and I thought it would be very useful for others, too.
>>
>>
>> A bit of a backstory
>>
>> Everyone probably knows iperf and its newer brother iperf3 and has used them. They can do bandwidth tests and usually should be quite good at them. Unfortunately, they reach their limits quite early. In my environment I had at least a 5 GBit/s connection between my two machines, and I wanted to create lots and lots of connections to stress-test the connection tracking.
>>
>> This was unfortunately not possible with either of them, because iperf starts a thread per connection and iperf3 limits its connections to 128 per process. In both cases this simply did not scale, because my goal was to create connections in the six-figure range or more. I simply ran out of memory.
>>
>> Another issue that both sometimes have - and which I did not validate specifically in this case - is that they cannot generate enough traffic to saturate a link. However, I need to be able to simply trust that this is possible as long as I have the CPU resources available.
>>
>> Therefore a new tool was needed.
>>
>> When I started writing fireperf, I did not intend to make it fit for throughput tests, but since it was such a low-hanging fruit in the development process, I added that, too. The original goal was simply to open a number of connections (at least hundreds of thousands) and keep them open, or let me know when this is no longer possible.
>>
>> Since I knew I was working on IPFire, I started to take advantage of Linux's modern APIs and tried to delegate as much work as possible to the kernel. Especially since the whole Meltdown/Spectre debacle, sending data between the kernel and userland is slow, and the less work I have to do in userland, the more time I can spend on other things.
>>
>> Therefore I use epoll() to let the kernel tell me when a socket is ready to accept data, and when something has happened and the connection broke down. I use getrandom() to get random data to send, and I use timerfd to notify me regularly when to print some statistics. This means the application is not very easily portable (that was not an original design goal), but I am sure there are alternatives available if someone were to port it to another OS.
>>
>> iperf3 - the most efficient one I knew - used up all of the 8 GB of memory on my test system when started multiple times to create about 10k connections. Fireperf uses a few hundred kilobytes with tens of thousands of open connections. In fact, it does not keep any state about the connections and therefore uses the same amount of memory no matter how many connections are open.
>> The kernel will use some memory, though, but I could not measure how much.
>>
>> Without saturating my processor, I can saturate any network link that I could test, up to 10 GBit/s. CPU usage is normally less than 10%, and fireperf has a mode (-k) in which it won't send any data, but only keep the connections open and regularly let the kernel (again, because I am a lazy developer) send some keepalive packets. That way, it uses next to no CPU resources while still generating a lot of stress for the network.
>>
>> So here it is, my new tool. I hope someone finds it useful.
>>
>> It is nice and tiny, and everything comes in one binary which only depends on the C standard library.
>>
>> Sources are available on our Git server as usual:
>>
>> https://git.ipfire.org/?p=fireperf.git;a=summary
>>
>> I tagged release number 0.1.0 and I will push a patch into next very soon. There are also Debian packages available if you want to give fireperf a try on Debian:
>>
>> deb https://packages.ipfire.org/fireperf buster/
>> deb-src https://packages.ipfire.org/fireperf buster/
>>
>> Replace buster with bullseye or sid if you are on those, and do not forget to import the key:
>>
>> curl https://packages.ipfire.org/79842AA7CDBA7AE3-pub.asc | apt-key add -
>>
>> Documentation in the form of a man page is available here:
>>
>> https://man-pages.ipfire.org/fireperf/fireperf.html
>>
>> It would be great if fireperf became a standard tool for benchmarking IPFire. We could definitely do better in this department, and hopefully gain better insights into any performance regressions, or into whether certain hardware is better than others. I suppose throughput is not everything, and fireperf should be able to help us measure other factors, too.
>>
>> -Michael
>

--===============2216077609942648289==--