Hello Peter, Hello Coop, Hello *,
Hi again Peter,
I am absolutely not recommending shipping a beta release! Rather, I'm suggesting you try it as a sanity check, to see whether the issue has already been resolved there.
I've built and uploaded a custom installable ISO image containing the latest libhtp (0.5.30) and Suricata 5.0.0-beta1.
@Peter, please test and report whether any of these changes affect your packet loss rate.
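A quick way to watch the relevant counters is Suricata's periodic stats output; something along these lines (the path assumes the default stats.log setup, and the exact counter names depend on the capture method in use):

  # show the most recent capture counters from Suricata's periodic stats output
  grep -E 'kernel_packets|kernel_drops' /var/log/suricata/stats.log | tail -4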
Have you tried using libpcap instead of AF_PACKET as the capture mechanism?
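Switching should just be a matter of the start-up options; roughly like this (the interface name is only a placeholder):

  # start Suricata with the libpcap capture method on a single interface
  suricata -c /etc/suricata/suricata.yaml --pcap=eth0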
Currently we are using iptables to delegate packets to Suricata via the Netfilter queue (NFQUEUE), and to drop or re-inject them after scanning.
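For reference, a minimal sketch of such a rule (not our exact ruleset; chain and options simplified), matching the four queues Suricata listens on:

  # hand forwarded packets to Suricata, balanced across NFQUEUE queues 0-3;
  # --queue-bypass lets traffic pass instead of being dropped if Suricata is not running
  iptables -I FORWARD -j NFQUEUE --queue-balance 0:3 --queue-bypass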
Best regards,
-Stefan
-Coop
-----Original Message-----
From: peter.mueller@ipfire.org
Sent: Thursday, September 5, 2019 11:56 AM
To: Nelson, Cooper <cnelson@ucsd.edu>; Peter Manev <petermanev@gmail.com>
Cc: oisf-users@lists.openinfosecfoundation.org; IPFire: Development-List <development@lists.ipfire.org>; Stefan Schantl <stefan.schantl@ipfire.org>
Subject: Re: [Oisf-users] Suricata causes massive packet loss
Hello Nelson, hello Peter, hello *,
thank you for your replies.
Upgrading to Suricata 5.0-beta is a difficult task, as we cannot simply ship beta releases in our firewall distribution. Personally, I rather doubt this is an issue caused by a specific kernel/library/... combination, as we have been using Suricata for quite a while now and upgrade IPFire's distribution kernel on a regular basis.
Anyway, Stefan (see CC) is currently working on Rust support for the distribution, so we hope to take advantage of some more features soon. But since our issue concerns packet loss for at least DNS and TLS traffic, I rather doubt Rust will make a big difference here.
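For what it's worth, on the 4.1.x series the Rust parsers are an opt-in configure flag (in 5.0 Rust becomes a hard requirement), so the packaging change should be roughly:

  # build Suricata 4.1.x with the optional Rust-based parsers enabled
  ./configure --enable-rust
  make && make install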
Changing from "workers" to "autofp" mode unfortunately did not solve the problem. It is good to know the latter is recommended for inline deployments; "workers" was about 0.5 % faster in our benchmarks.
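For completeness, the mode can be forced either on the command line or in suricata.yaml; a minimal sketch of the invocation we tested:

  # select the autofp runmode explicitly (equivalent to "runmode: autofp" in suricata.yaml)
  suricata -c /etc/suricata/suricata.yaml -D -q 0 -q 1 -q 2 -q 3 --runmode autofp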
In IPFire, Suricata is started by a custom init script (please refer to https://git.ipfire.org/?p=ipfire-2.x.git;a=blob;f=src/initscripts/system/sur... for its content) and appears like this in the process list:
[root@maverick ~]# ps aux | grep suricata
suricata  4882 10.9  7.3 1419868 289192 ?  Ssl  20:38  1:37 /usr/bin/suricata -c /etc/suricata/suricata.yaml -D -q 0 -q 1 -q 2 -q 3
I am not sure what the number behind "tcp.pkt_on_wrong_thread" should normally look like. @Peter: Is it too low or too high?
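For context, I have been reading it from the periodic stats output like this (path assumes our default logging setup); as far as I understand, the counter should ideally stay at or near zero:

  # a large and growing value suggests packets of a flow land on the wrong worker thread
  grep 'tcp.pkt_on_wrong_thread' /var/log/suricata/stats.log | tail -5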
We will ship an update for libhtp as soon as possible, thank you for catching this.
Thanks, and best regards,
Peter Müller
Hello Stefan, hello Peter, hello Eric, hello *,
sorry for the late reply.
@Peter: Thank you for the "max-pending-packets" hint. Changing the value from 1024 (default) to 2048, 4096 and 8192 unfortunately did not make things better - OpenVPN throughput stays the same.
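For reference, this is the knob I changed in suricata.yaml (shown with the largest value tested):

  # number of packets Suricata may hold in flight at once; the default is 1024
  max-pending-packets: 8192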
@Stefan: Thank you for building and packaging! I will install it on my testing machine and report back within the next days.
Since I was unable to reproduce the OpenVPN bandwidth issue on another (production) system running Core Update 134, I guessed that Core Update 135 (https://blog.ipfire.org/post/ipfire-2-23-core-update-135-released) had introduced the problem. That turned out to be wrong: I have meanwhile updated the system, performed a reboot, and everything stays the same.
@Eric: It is good to know that the DNS problem can be tracked down to a Netfilter bug. There are some iptables/Netfilter/... packages for which we are not shipping the latest version; I will update them. Do you happen to have a bugtracker ID or link for that problem?
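When reporting against that bug, these are probably the versions that matter (standard commands, nothing IPFire-specific):

  # userspace iptables version and the running kernel (which carries the Netfilter code)
  iptables --version
  uname -r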
@All: Meanwhile, the domain "suricata-ids.org" was listed at URIBL (http://uribl.com/), so some mails got rejected at our mail server. I assume that was a false positive and have removed the hard-reject action for URIBL. Anyway: is anyone aware of a compromise or security issue at "suricata-ids.org"?
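For anyone who wants to double-check the listing, URIBL can be queried directly over DNS (assuming their standard "multi" lookup zone as documented on uribl.com):

  # a TXT answer indicates the domain is (still) listed and usually names the sub-list
  dig +short suricata-ids.org.multi.uribl.com TXT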
Thanks, and best regards,
Peter Müller