From: Michael Tremer <michael.tremer@ipfire.org>
To: development@lists.ipfire.org
Subject: Re: fireperf results
Date: Tue, 16 Feb 2021 19:07:49 +0000 [thread overview]
Message-ID: <E8AA41F5-C0D5-481F-9C92-E640ADD8C3EE@ipfire.org> (raw)
In-Reply-To: <a7a31eca-7032-3bde-7840-c68e10211760@ipfire.org>
Hello,
> On 16 Feb 2021, at 18:50, Adolf Belka (ipfire-dev) <adolf.belka(a)ipfire.org> wrote:
>
> Hi Michael,
>
> Daniel asked if I was running suricata and I was.
That email must have got lost.
But of course this explains it.
> Removing that made everything much better. Now with -P 1 to -P 1000 I was getting ~950Mb/s. With -P 10000 I got only ~550Mb/s.
>
> On 16/02/2021 17:16, Michael Tremer wrote:
>> Hello Adolf,
>> This is very surprising to me. I am almost shocked.
>> Maybe some of my assumptions are wrong, but if this is the actual throughput of this piece of hardware, I find it not good enough.
>>> On 16 Feb 2021, at 12:44, Adolf Belka (ipfire-dev) <adolf.belka(a)ipfire.org> wrote:
>>>
>>> Hi All,
>>>
>>> Following are the fireperf results I obtained:-
>>>
>>> server: IPFire 2.25 - Core Update 153; Intel Celeron CPU J1900 @ 1.99GHz x4; I211 Gigabit Network Connection
>> You have a small processor here with a rather high clock rate. Four cores at 2 GHz is quite something.
>> However, it is a Celeron processor, which means it is a bit more stripped down than others - usually in caches and pipeline throughput. It might be that this is what bites you really badly here.
>> You have a better than average NIC. The Intel network controllers are not bad, although the i2xx series is not their most capable line.
>> Could you please send the output of “cat /proc/interrupts” so that we can see how many queues they have?
>
> -bash-5.0$ cat /proc/interrupts
> CPU0 CPU1 CPU2 CPU3
> 0: 40 0 0 0 IO-APIC 2-edge timer
> 1: 3 0 0 0 IO-APIC 1-edge i8042
> 4: 430 0 0 0 IO-APIC 4-edge ttyS0
> 8: 55 0 0 0 IO-APIC 8-fasteoi rtc0
> 9: 0 0 0 0 IO-APIC 9-fasteoi acpi
> 12: 4 0 0 0 IO-APIC 12-edge i8042
> 18: 0 0 0 0 IO-APIC 18-fasteoi i801_smbus
> 91: 2959648 0 0 0 PCI-MSI 311296-edge ahci[0000:00:13.0]
> 92: 292544 0 0 0 PCI-MSI 327680-edge xhci_hcd
> 93: 1 0 0 0 PCI-MSI 2097152-edge orange0
> 94: 332181 0 0 0 PCI-MSI 2097153-edge orange0-rx-0
> 95: 94258 0 0 0 PCI-MSI 2097154-edge orange0-rx-1
> 96: 328866 0 0 0 PCI-MSI 2097155-edge orange0-tx-0
> 97: 169838 0 0 0 PCI-MSI 2097156-edge orange0-tx-1
> 98: 1 0 0 0 PCI-MSI 3670016-edge red0
> 99: 9795304 0 0 0 PCI-MSI 3670017-edge red0-rx-0
> 100: 94258 0 0 0 PCI-MSI 3670018-edge red0-rx-1
> 101: 9574443 0 0 0 PCI-MSI 3670019-edge red0-tx-0
> 102: 1067926 0 0 0 PCI-MSI 3670020-edge red0-tx-1
> 103: 1 0 0 0 PCI-MSI 4194304-edge green0
> 104: 15302199 0 0 0 PCI-MSI 4194305-edge green0-rx-0
> 105: 94259 0 0 0 PCI-MSI 4194306-edge green0-rx-1
> 106: 13422909 0 0 0 PCI-MSI 4194307-edge green0-tx-0
> 107: 4977558 0 0 0 PCI-MSI 4194308-edge green0-tx-1
> 108: 1 0 0 0 PCI-MSI 4718592-edge blue0
> 109: 97391 0 0 0 PCI-MSI 4718593-edge blue0-rx-0
> 110: 94259 0 0 0 PCI-MSI 4718594-edge blue0-rx-1
> 111: 94259 0 0 0 PCI-MSI 4718595-edge blue0-tx-0
> 112: 137222 0 0 0 PCI-MSI 4718596-edge blue0-tx-1
> NMI: 638 468 287 294 Non-maskable interrupts
> LOC: 18102811 13305397 18242209 25513971 Local timer interrupts
> SPU: 0 0 0 0 Spurious interrupts
> PMI: 638 468 287 294 Performance monitoring interrupts
> IWI: 9208 21 2 26 IRQ work interrupts
> RTR: 0 0 0 0 APIC ICR read retries
> RES: 563980 301713 552880 579914 Rescheduling interrupts
> CAL: 184217 137668 310137 256984 Function call interrupts
> TLB: 170395 122144 132849 103440 TLB shootdowns
> TRM: 0 0 0 0 Thermal event interrupts
> THR: 0 0 0 0 Threshold APIC interrupts
> DFR: 0 0 0 0 Deferred Error APIC interrupts
> MCE: 0 0 0 0 Machine check exceptions
> MCP: 605 605 605 605 Machine check polls
> HYP: 0 0 0 0 Hypervisor callback interrupts
> ERR: 1
> MIS: 0
> PIN: 0 0 0 0 Posted-interrupt notification event
> NPI: 0 0 0 0 Nested posted-interrupt event
> PIW: 0 0 0 0 Posted-interrupt wakeup event
>
>
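From the dump above, each NIC has two RX and two TX queues - but note that every single interrupt count lands on CPU0, so the IRQs are not being spread across the cores at all. As a quick sketch (parsing an excerpt of your dump above, interface names taken from it), the queues can be tallied like this:

```python
# Sketch: tally RX/TX queues per NIC from a /proc/interrupts excerpt.
# The sample below is copied from the output quoted above.
import re
from collections import Counter

sample = """\
 94: 332181 0 0 0 PCI-MSI 2097153-edge orange0-rx-0
 95: 94258 0 0 0 PCI-MSI 2097154-edge orange0-rx-1
 96: 328866 0 0 0 PCI-MSI 2097155-edge orange0-tx-0
 97: 169838 0 0 0 PCI-MSI 2097156-edge orange0-tx-1
 99: 9795304 0 0 0 PCI-MSI 3670017-edge red0-rx-0
100: 94258 0 0 0 PCI-MSI 3670018-edge red0-rx-1
101: 9574443 0 0 0 PCI-MSI 3670019-edge red0-tx-0
102: 1067926 0 0 0 PCI-MSI 3670020-edge red0-tx-1
"""

queues = Counter()
for line in sample.splitlines():
    # Queue IRQs are named <interface>-<rx|tx>-<index>
    m = re.search(r"(\w+)-(rx|tx)-\d+$", line)
    if m:
        queues[(m.group(1), m.group(2))] += 1

for (nic, direction), n in sorted(queues.items()):
    print(f"{nic}: {n} {direction} queue(s)")
```

On the real system you would of course run this against /proc/interrupts itself rather than a pasted sample.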
>>> client: Arch Linux; Intel Core i5-8400 CPU @ 2.80GHz 6 core; 1GBit nic
>>>
>>> Server:
>>> fireperf -s -P 10000 -p 63000:63010
>>>
>>>
>>> Client:
>>> fireperf -c <IP address> -P 1 -x -p 63000:63010 -> 100 - 3000 cps strongly fluctuating. After a couple of minutes the client cps went down to 0 and stayed there. I had to stop fireperf and restart the terminal to get it working again.
>>>
>>> fireperf -c <IP address> -P 10 -x -p 63000:63010 ->250 - 500 cps fluctuating
>>>
>>> fireperf -c <IP address> -P 100 -x -p 63000:63010 -> 220 - 1000 cps fluctuating
>>>
>>> fireperf -c <IP address> -P 1000 -x -p 63000:63010 -> 1200 - 2500 cps fluctuating
>>>
>>> fireperf -c <IP address> -P 10000 -x -p 63000:63010 -> 0 - 7000 cps hugely fluctuating
>> You have quite a large fluctuation here. Some is normal, but this is a lot. It seems that the system is overloaded from the very beginning.
>> I have not done experiments with lots of different hardware (I usually used the same), but Daniel has, and we normally see the systems being very idle with only one connection at a time. There isn’t much for the CPU to do except wait.
>>> In all cases the cpu utilisation was quite low on both IPFire and the Arch Linux desktop.
>> Not surprising on the desktop side, because there wasn’t a lot of stuff to do.
>>> I then repeated the above tests removing the -x option so I could see the data bandwidth.
>>>
>>>
>>> fireperf -c <IP address> -P 1 -p 63000:63010 -> 225Mb/s - 1 core at 100%, rest around 30% to 40%
>> This is the most surprising part.
>> The IPFire Mini Appliance for example only has 1 GHz of clock and it doesn’t have any problems with transmitting a whole gigabit per second of data. This system has double the clock speed and the same NIC (or at least a very similar one).
>>> fireperf -c <IP address> -P 10 -p 63000:63010 -> 185Mb/s - similar as above
>>>
>>> fireperf -c <IP address> -P 100 -p 63000:63010 -> 210Mb/s - similar to above
>> The bandwidth should have increased here. That means we know that the bottleneck is not the network, but something else.
>> The one core that is maxed out is to a good extent the fireperf process generating packets. The rest is overhead of the OS, network stack and NIC driver, which feels way too high to me.
>>> fireperf -c <IP address> -P 1000 -p 63000:63010 -> 370 - 450Mb/s - 2 cores at 100%, rest at 30% to 40%
>>>
>>> fireperf -c <IP address> -P 10000 -p 63000:63010 -> 400Mb/s - 1Gb/s - 2 cores at 100%, rest at 40% to 50%
>> It looks like you have more than one receive queue.
>> Did you actually achieve the 10k connections?
>>> I recently got my Glass Fibre Gigabit connection connected. The supplier hooked his laptop directly to the media converter and got around 950Mb/s
>> Could you test “speedtest-cli” and see what that reports?
>
> After turning off suricata I ran speedtest-cli on IPFire and got ~850Mb/s
> I ran speedtest-cli on my Arch Desktop and got 840Mb/s
> I ran speedtest++ on my Arch Desktop and got ~930Mb/s
> I ran my ISP's speedtest and got ~970Mb/s
Yeah, so it seems that the ISP speed test is doing something “different”. I am not sure if we can trust them. I definitely wouldn’t trust my ISP.
~840 MBit/s is still about 100 MBit/s away from the maximum, which is about 940 MBit/s after taking away any overhead from Ethernet and IP.
970 MBit/s is technically not possible for an IP connection. It might work out if you include Ethernet headers into the maths.
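As a rough sketch of the maths (assuming a standard 1500-byte MTU and TCP timestamps enabled, which is the usual case on Linux), the achievable TCP goodput works out to roughly 941 MBit/s:

```python
# Theoretical TCP goodput on Gigabit Ethernet.
# Per 1500-byte IP packet on the wire: 1500 bytes payload + 38 bytes of
# Ethernet overhead (preamble, header, FCS, inter-frame gap) = 1538 bytes.
# TCP payload per packet: 1500 - 20 (IP) - 20 (TCP) - 12 (timestamps) = 1448.
LINE_RATE = 1_000_000_000  # bits/s
wire_frame = 1538          # bytes actually occupying the wire per packet
tcp_payload = 1448         # bytes of application data per packet

goodput = LINE_RATE * tcp_payload / wire_frame
print(f"{goodput / 1e6:.0f} Mbit/s")  # prints "941 Mbit/s"
```

Anything above that on a gigabit link means the tool is counting headers as payload, or worse, just guessing.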
How did the CPU load change? It suggests that the hardware still didn’t run at full capacity because of two cores being idle.
> So all showing similar-ish values and around where they should be. So on my hardware suricata is making a very big difference. I now have to decide if that is worth it or not for my situation.
Very good question. Probably not best to be discussed here, but in general I would like to have this discussion.
An IPS is absolutely worth it, and everyone should have it. Unfortunately it doesn’t run on small hardware and therefore we need to be very careful what we recommend and what we compare with each other.
Discussions on the forum always ended up with people buying the cheapest stuff that they could get - which is of course a rational thing to do. However, I simply run the IPS wherever I go, and that means that a Raspberry Pi is not useful for more than a megabit per second. It is difficult to predict IPS throughput because it depends on so many factors. It would be nice if we could find a reproducible way to benchmark this - at least somewhat accurately.
-Michael
>
>>> Using the same speed test as he used but going through my IPFire hardware I get around 225Mb/s.
>>>
>>> Although my hardware has four Intel I211 Gigabit NIC's, I have suspected that their performance is limited by the processor.
>> It sounds like it. Let’s see what more information we can gather and hopefully find it.
>> Can you run powertop alongside the benchmark and see what it says?
>> -Michael
>>> Regards,
>>>
>>> Adolf.