From: Tom Rymes
To: development@lists.ipfire.org
Subject: Re: Introducing fireperf
Date: Thu, 04 Feb 2021 11:36:08 -0500

Just curious. I know that there's an inherent tension in this area, and I wanted to prompt discussion, nothing more.

Tom

> On Feb 4, 2021, at 11:17 AM, Michael Tremer wrote:
>
> Hello,
>
> Well, it is a network testing tool. They can all be used for some evil stuff, but so can ping(8).
>
> I was thinking about this during development, but I am not sure what could be done to prevent it.
>
> -Michael
>
>> On 4 Feb 2021, at 16:16, Tom Rymes wrote:
>>
>> Michael,
>>
>> Any concerns that this could be used for evil?
>>
>> Tom
>>
>>> On Feb 4, 2021, at 10:57 AM, Michael Tremer wrote:
>>>
>>> Hello all y'all,
>>>
>>> I would like to introduce a small side project I have been working on called fireperf.
>>>
>>> It is a network benchmarking tool which I wrote to debug some issues with IPFire on AWS, and I thought it would be very useful for others, too.
>>>
>>>
>>> A bit of backstory
>>>
>>> Everyone probably knows iperf and its newer brother iperf3 and has used them. They can run bandwidth tests and are usually quite good at them. Unfortunately, they reach their limits quite early. In my environment I had at least a 5 GBit/s connection between my two machines, and I wanted to create lots and lots of connections to stress-test the connection tracking.
>>>
>>> This was unfortunately not possible with either of them, because iperf starts a thread per connection and iperf3 limits itself to 128 connections per process. In both cases this simply did not scale, because my goal was to create connections in the six-figure range or more. I simply ran out of memory.
>>>
>>> Another issue that both sometimes have - and which I did not validate specifically in this case - is that they cannot generate enough traffic to saturate a link. However, I need to be able to simply trust that this is possible as long as I have the CPU resources available.
>>>
>>> Therefore a new tool was needed.
>>>
>>> When I started writing fireperf, I did not intend to make it fit for throughput tests, but since it was such a low-hanging fruit during development, I added that, too. The original goal was simply to open a large number of connections (at least hundreds of thousands), keep them open, and let me know when that is no longer possible.
>>>
>>> Since I knew I was working on IPFire, I took advantage of Linux's modern APIs and tried to delegate as much work as possible to the kernel. Especially since the whole Meltdown/Spectre debacle, crossing between the kernel and userland is slow, and the less work I have to do in userland, the more time I can spend on other things.
>>>
>>> Therefore I use epoll() to let the kernel tell me when a socket is ready to accept data, or when something has happened and the connection has broken down. I use getrandom() to get random data to send, and I use timerfd to regularly notify me when to print some statistics.
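>>>
>>> To make that concrete, here is a minimal, single-connection sketch of the pattern. It is not fireperf's actual source; the address, port and buffer size are placeholders, and most error handling is omitted:
>>>
>>>   #include <stdio.h>
>>>   #include <stdint.h>
>>>   #include <unistd.h>
>>>   #include <arpa/inet.h>
>>>   #include <netinet/in.h>
>>>   #include <sys/epoll.h>
>>>   #include <sys/random.h>
>>>   #include <sys/socket.h>
>>>   #include <sys/timerfd.h>
>>>
>>>   int main(void) {
>>>       /* One buffer of random payload, filled once via getrandom(2). */
>>>       static char buf[65536];
>>>       if (getrandom(buf, sizeof(buf), 0) < 0)
>>>           return 1;
>>>
>>>       /* Connect to a placeholder address and port. */
>>>       int sock = socket(AF_INET, SOCK_STREAM, 0);
>>>       struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(5001) };
>>>       inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
>>>       if (connect(sock, (struct sockaddr*)&addr, sizeof(addr)) < 0)
>>>           return 1;
>>>
>>>       /* For the keepalive-only mode mentioned below: the kernel can emit
>>>        * periodic keepalives on an otherwise idle connection by itself. */
>>>       int one = 1;
>>>       setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &one, sizeof(one));
>>>
>>>       /* timerfd: the kernel wakes us once a second to print statistics. */
>>>       int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
>>>       struct itimerspec its = { .it_interval = { 1, 0 }, .it_value = { 1, 0 } };
>>>       timerfd_settime(tfd, 0, &its, NULL);
>>>
>>>       /* epoll: the kernel tells us when the socket will accept more data. */
>>>       int efd = epoll_create1(0);
>>>       struct epoll_event ev = { .events = EPOLLOUT, .data.fd = sock };
>>>       epoll_ctl(efd, EPOLL_CTL_ADD, sock, &ev);
>>>       ev.events = EPOLLIN;
>>>       ev.data.fd = tfd;
>>>       epoll_ctl(efd, EPOLL_CTL_ADD, tfd, &ev);
>>>
>>>       uint64_t bytes = 0;
>>>       for (;;) {
>>>           struct epoll_event events[8];
>>>           int n = epoll_wait(efd, events, 8, -1);
>>>           for (int i = 0; i < n; i++) {
>>>               if (events[i].data.fd == tfd) {
>>>                   /* Timer fired: read the expiration count to re-arm. */
>>>                   uint64_t expirations;
>>>                   read(tfd, &expirations, sizeof(expirations));
>>>                   printf("%.1f MBit/s\n", (double)bytes * 8 / 1e6);
>>>                   bytes = 0;
>>>               } else {
>>>                   /* Socket is writable: push more random data. */
>>>                   ssize_t w = send(sock, buf, sizeof(buf), 0);
>>>                   if (w > 0)
>>>                       bytes += (uint64_t)w;
>>>               }
>>>           }
>>>       }
>>>   }
>>>
>>> fireperf of course does this with many thousands of sockets in a single epoll set rather than just one.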
>>>
>>> Because of these Linux-specific APIs, this application is not very easily portable (portability was not an original design goal), but I am sure there are alternatives available if someone were to port it to another OS.
>>>
>>> iperf3 - the most efficient one I knew - used up all of my 8 GB of memory on my test system when started multiple times to create about 10k connections. Fireperf uses a few hundred kilobytes with tens of thousands of open connections. In fact, it does not keep any per-connection state and therefore uses the same amount of memory no matter how many connections are open. The kernel will use some memory, though, but I could not measure how much.
>>>
>>> Without saturating my processor, I can saturate any network link that I could test, up to 10 GBit/s. CPU usage is normally less than 10%, and fireperf has a mode (-k) in which it won't send any data, but only keeps the connections open and regularly lets the kernel (again, because I am a lazy developer) send some keepalive packets. That way, it uses next to no CPU resources while still generating a lot of stress for the network.
>>>
>>> So here it is, my new tool. I hope someone finds it useful.
>>>
>>> It is nice and tiny, and everything comes in one binary which only depends on the C standard library.
>>>
>>> Sources are available on our Git server as usual:
>>>
>>> https://git.ipfire.org/?p=fireperf.git;a=summary
>>>
>>> I tagged release 0.1.0, and I will push a patch into next very soon. There are also Debian packages available if you want to give fireperf a try on Debian:
>>>
>>> deb https://packages.ipfire.org/fireperf buster/
>>> deb-src https://packages.ipfire.org/fireperf buster/
>>>
>>> Replace buster with bullseye or sid if you are on those, and do not forget to import the key:
>>>
>>> curl https://packages.ipfire.org/79842AA7CDBA7AE3-pub.asc | apt-key add -
>>>
>>> Documentation in the form of a man page is available here:
>>>
>>> https://man-pages.ipfire.org/fireperf/fireperf.html
>>>
>>> It would be great if fireperf became a standard tool to benchmark IPFire. We could definitely do better in this department, and hopefully gain better insight into any performance regressions, or into whether certain hardware is better than other. I suppose throughput is not everything, and fireperf should be able to help us measure other factors, too.
>>>
>>> -Michael
>>
>