From mboxrd@z Thu Jan 1 00:00:00 1970
From: Adolf Belka <ahb.ipfire@gmail.com>
To: development@lists.ipfire.org
Subject: Re: suricata 6.0.0 / 6.0.1 - cpu load (idle) rising compared to 5.0.4
Date: Mon, 14 Dec 2020 19:22:16 +0100
Message-ID: <45f9297c-734a-8241-53d1-ae6e3e4351c2@gmail.com>
In-Reply-To: <10a736f6-bb62-6b85-d424-b9f3d31831d5@ipfire.org>
List-Id: <development.lists.ipfire.org>

Hallo All,

I have been testing Core Update 153 on my VirtualBox VM test bed system.

With Core 152 and no IPS, the CPU runs at around 0.7%; with IPS turned on it runs at around 1.5%.

With Core 153 and no IPS, the CPU again runs at around 0.7%; with IPS turned on it runs at around 10.5%.

The system info is:

IPFire version: IPFire 2.25 (x86_64) - core153 Development Build: master/eaa90321
Pakfire version: 2.25.1-x86_64
Kernel version: Linux ipfire 4.14.211-ipfire #1 SMP Tue Dec 8 23:54:50 GMT 2020 x86_64 Intel(R) Core(TM) i5-8400 CPU @ 2.80GHz GenuineIntel GNU/Linux

Regards,
Adolf Belka

On 14/12/2020 16:58, Peter Müller wrote:
> Hello Michael, hello Matthias, hello *,
>
> just for the record: I cannot reproduce this issue on two machines that have been running Core Update 153 (testing) for a while now.
>
> Both have an Intel N3150 CPU and are running on x86_64 (no virtualisation); one of them is almost permanently under significant network load. To be honest, its CPU load actually _decreased_ a bit after installing Core Update 153, but I cannot pinpoint the reason for this at the moment.
>
> From my point of view, there is no need to downgrade to Suricata 5.x again. In terms of security, I dislike that idea as well; however, this seems to affect certain scenarios quite badly...
>
> Thanks, and best regards,
> Peter Müller
>
>
>> Hi,
>>
>>> On 12 Dec 2020, at 02:18, Kienker, Fred <fkienker(a)at4b.com> wrote:
>>>
>>> Matthias:
>>>
>>> I worked through some of the examples of the settings described in the
>>> Suricata forum discussion. If my observations are correct, the issue
>>> centers around the flow manager. A change to it has made a big
>>> difference in the resource usage of this process. It is likely going to
>>> come down to living with the load created by the v6 version, or
>>> reverting to v5 and waiting for them to get to the bottom of this. No
>>> combination of settings in the flow section of suricata.yaml ever seemed
>>> to reduce it; instead, they increased it.
>>
>> Good research.
>>
>>> I don't use low-power systems for IPFire and don't have access to one,
>>> but others with these systems may want to take a look at their
>>> performance numbers and report back as to whether they can live with the
>>> higher load.
>>
>> It is not just low-power systems that are affected.
>>
>> I launched this on AWS today and the CPU load is immediately at 25%. It was mentioned on the linked thread that virtual systems are affected more.
>>
>> I would now rather lean towards reverting Suricata 6 unless a hotfix becomes available soon.
>>
>> Best,
>> -Michael
>>
>>>
>>> Best regards,
>>> Fred
>>>
>>> Please note: Although we may sometimes respond to email, text and phone
>>> calls instantly at all hours of the day, our regular business hours are
>>> 9:00 AM - 6:00 PM ET, Monday thru Friday.
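[Editor's note: for anyone wanting to verify Fred's observation that the flow manager dominates Suricata's idle CPU usage, a small helper like the one below can rank the threads of a running Suricata by CPU share. This is an illustrative sketch, not something from the thread: the sample input mimics the output of `ps -L -o pcpu,comm -p $(pidof suricata)`, and the thread name `FM#01` follows Suricata's usual flow-manager thread naming.]

```python
def top_threads(ps_output, n=3):
    """Rank threads from `ps -L -o pcpu,comm` output by CPU share.

    Returns up to n (pcpu, thread_name) tuples, heaviest first.
    """
    rows = []
    for line in ps_output.strip().splitlines()[1:]:  # skip the %CPU/COMMAND header
        pcpu, comm = line.split(None, 1)
        rows.append((float(pcpu), comm.strip()))
    return sorted(rows, reverse=True)[:n]

# Hypothetical sample, shaped like `ps -L -o pcpu,comm -p $(pidof suricata)`:
sample = """\
%CPU COMMAND
 9.8 FM#01
 0.4 W#01
 0.2 Suricata-Main
"""

print(top_threads(sample))  # → [(9.8, 'FM#01'), (0.4, 'W#01'), (0.2, 'Suricata-Main')]
```

If the flow manager thread sits at the top of this ranking on an otherwise idle box, that would match the behaviour discussed in the upstream forum thread.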
>>>
>>> -----Original Message-----
>>> From: Matthias Fischer <matthias.fischer(a)ipfire.org>
>>> Sent: Friday, December 11, 2020 6:34 PM
>>> To: Kienker, Fred; michael.tremer <michael.tremer(a)ipfire.org>;
>>> stefan.schantl <stefan.schantl(a)ipfire.org>
>>> Cc: development <development(a)lists.ipfire.org>
>>> Subject: Re: suricata 6.0.0 / 6.0.1 - cpu load (idle) rising compared to
>>> 5.0.4
>>>
>>> Hi,
>>>
>>> it looks as if there is something going on in the Suricata forum
>>> regarding CPU load:
>>>
>>> => https://forum.suricata.io/t/cpu-usage-of-version-6-0-0/706
>>>
>>> I can't really interpret the numerous screenshots and ongoing
>>> discussions, but could it be that this is related to what I'm
>>> experiencing when upgrading from 5.0.x to 6.0.x?
>>>
>>> Best,
>>> Matthias
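[Editor's note: for reference, the flow settings Fred experimented with live in the `flow:` section of suricata.yaml. The excerpt below is an illustrative sketch with values close to upstream Suricata defaults; the exact defaults and effective values may differ between builds and IPFire's shipped configuration, so treat the numbers as assumptions, not recommendations.]

```yaml
# Illustrative excerpt of the flow section of suricata.yaml.
# Values approximate upstream defaults; tune with care and measure.
flow:
  memcap: 128mb          # memory cap for the flow engine
  hash-size: 65536       # size of the flow hash table
  prealloc: 10000        # flows preallocated at startup
  emergency-recovery: 30 # % of prealloc'd flows to reach before leaving emergency mode
```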