Hello Tom,
Thank you for your reply.
Yes, these VMs are all running FreeBSD. I will try to test it with a Debian machine within the next few days.
While I am fairly sure the MTU is not the (only) root cause of this problem, setting it to 1492 bytes on the VMs (the interfaces were previously set to 1500 bytes) brings a slight improvement to ~1.1 MB/sec, which is still far too low.
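For reference, the path MTU can be double-checked with a don't-fragment ping carrying 1464 bytes of payload (1464 + 8 bytes ICMP header + 20 bytes IP header = 1492). A rough sketch, since the flags differ between systems:

    # Linux (iputils): -M do prohibits fragmentation, -s is the ICMP payload size
    ping -M do -s 1464 <remote host>
    # FreeBSD: -D sets the Don't Fragment bit
    ping -D -s 1464 <remote host>

If 1464 bytes go through but 1465 do not, 1492 really is the usable path MTU.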
Disabling the remote packet filters did not change anything. Both the VMs and the IPFire machine have AES-NI available and are therefore able to encrypt a much larger volume per second (~2.1 Gbit/sec on the VMs, ~267 Mbit/sec on the IPFire host).
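In case anybody wants to double-check those numbers, AES-NI presence and raw AES-256-GCM throughput can be measured outside the tunnel roughly like this (a sketch, and not necessarily how the figures above were obtained):

    # FreeBSD: confirm the kernel detected the AES-NI engine
    grep -i aesni /var/run/dmesg.boot
    # Raw single-core AES-256-GCM throughput, independent of OpenVPN
    openssl speed -evp aes-256-gcm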
Changing the OpenVPN tunnel MTU to lower values (tested with 1350 and 1300 bytes) did not change anything either.
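For completeness, a sketch of the directives that control this in OpenVPN 2.4 (values are illustrative, not my exact configuration; note that "fragment" only works over UDP, so it is not an option with the current TCP setup):

    tun-mtu 1400     # MTU of the tun device
    mssfix 1300      # clamp the MSS of TCP connections inside the tunnel
    ;fragment 1300   # UDP-only internal fragmentation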
Thanks, and best regards, Peter Müller
Peter,
Is the issue reproducible on different operating systems and with different OpenVPN clients? There is a chance that the issue lies with FreeBSD or with how you are configuring the VMs at the different locations.
Tom
On Sep 24, 2019, at 9:23 AM, peter.mueller@ipfire.org wrote:
Hello list,
as mentioned several times before, I am experiencing OpenVPN performance problems. Since I am out of ideas by now, asking here for help seemed to make sense, as I am not sure whether this can be traced back to a bug or not.
The test setup is as follows:

(a) IPFire is freshly installed on a testing machine running Core Update 135 (x86_64). The machine is connected to the internet via DSL (100 Mbit/sec downstream capacity, MTU set to 1492) and performs the dial-in itself; there is no cascading router or NAT in place here.

(b) The remote end is a VM hosted at a big German hosting company, running FreeBSD 12 with OpenVPN 2.4.7. Its uplink is 1 Gbit/sec with MTU = 1492.

(c) Both systems are able to send ICMP packets of up to 1492 bytes, so the MTU is set correctly on both interfaces.

(d) The VM establishes an OpenVPN roadwarrior connection to the IPFire machine, which comes up successfully and uses AES-256-GCM (SHA-512) for the data channel. The tunnel MTU is set to 1400 bytes.
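For clarity, the client side of this boils down to a configuration along the following lines (a minimal sketch matching the setup above; host name and certificate paths are placeholders, not my actual values):

    client
    dev tun
    proto tcp-client
    remote vpn.example.org 443   # placeholder host name
    tun-mtu 1400
    cipher AES-256-GCM
    auth SHA512
    remote-cert-tls server
    ca ca.crt                    # placeholder certificate/key paths
    cert client.crt
    key client.key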
Downloading a test file from the VM via SCP through the OpenVPN connection takes ages and results in throughput between 400 and 700 kB/sec. While the normal ICMP latency through the tunnel is around 35 ms, it fluctuates between 40 and 500 ms while the download is running.
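The measurement itself is nothing fancy; roughly the following (addresses and paths are placeholders):

    # Throughput: pull a test file across the tunnel and discard it
    scp user@<VM tunnel address>:/path/to/testfile /dev/null
    # Latency: in a second terminal, ping the same address while the copy runs
    ping <VM tunnel address>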
Needless to say, a bandwidth of 700 kB/sec is unacceptable. Disabling Suricata speeds things up to ~1.2 MB/sec; disabling Quality of Service (QoS) does not have any noticeable effect.
Since some clients use OpenVPN in restricted environments, TCP on port 443 is more or less fixed. Switching to UDP brings a small improvement (~800 kB/sec), but does not seem to address the root cause.
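(For reference, the transport switch is just the "proto" directive on both ends, sketched below. Tunnelling TCP over TCP is known to behave badly under packet loss because both layers retransmit independently, which could contribute to the latency spikes, though it hardly explains throughput this low on its own.)

    proto tcp-client   # current client setting; the server side uses proto tcp-server
    ;proto udp         # the UDP variant, set on both ends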
This effect is reproducible with multiple VMs at multiple locations, so I do not think it is related to network problems at one particular hoster.
What am I doing wrong? Is anyone experiencing the same problem?
As mentioned in the Subject line, any help is appreciated.
Thanks, and best regards, Peter Müller