As stated in https://bugzilla.ipfire.org/show_bug.cgi?id=11808#c5, CUDA hardware acceleration is unused, so the corresponding configuration file section can be removed.
This partially addresses #11808.
Signed-off-by: Peter Müller <peter.mueller@link38.eu>
Cc: Stefan Schantl <stefan.schantl@ipfire.org>
---
 config/suricata/suricata.yaml | 35 -----------------------------------
 1 file changed, 35 deletions(-)
diff --git a/config/suricata/suricata.yaml b/config/suricata/suricata.yaml
index 94e13f501..55b6c05cf 100644
--- a/config/suricata/suricata.yaml
+++ b/config/suricata/suricata.yaml
@@ -933,41 +933,6 @@ profiling:
     filename: pcaplog_stats.log
     append: yes
-##
-## Hardware accelaration
-##
-
-# Cuda configuration.
-cuda:
-  # The "mpm" profile. On not specifying any of these parameters, the engine's
-  # internal default values are used, which are same as the ones specified in
-  # in the default conf file.
-  mpm:
-    # The minimum length required to buffer data to the gpu.
-    # Anything below this is MPM'ed on the CPU.
-    # Can be specified in kb, mb, gb. Just a number indicates it's in bytes.
-    # A value of 0 indicates there's no limit.
-    data-buffer-size-min-limit: 0
-    # The maximum length for data that we would buffer to the gpu.
-    # Anything over this is MPM'ed on the CPU.
-    # Can be specified in kb, mb, gb. Just a number indicates it's in bytes.
-    data-buffer-size-max-limit: 1500
-    # The ring buffer size used by the CudaBuffer API to buffer data.
-    cudabuffer-buffer-size: 500mb
-    # The max chunk size that can be sent to the gpu in a single go.
-    gpu-transfer-size: 50mb
-    # The timeout limit for batching of packets in microseconds.
-    batching-timeout: 2000
-    # The device to use for the mpm. Currently we don't support load balancing
-    # on multiple gpus. In case you have multiple devices on your system, you
-    # can specify the device to use, using this conf. By default we hold 0, to
-    # specify the first device cuda sees. To find out device-id associated with
-    # the card(s) on the system run "suricata --list-cuda-cards".
-    device-id: 0
-    # No of Cuda streams used for asynchronous processing. All values > 0 are valid.
-    # For this option you need a device with Compute Capability > 1.0.
-    cuda-streams: 2
-
 ##
 ## Include other configs
 ##
Hello Peter,
thanks for your patch - merged!
Best regards,
-Stefan