* [PATCH] Suricata: drop unused cuda HW acceleration
@ 2019-01-23 20:22 Peter Müller
2019-01-29 13:11 ` Stefan Schantl
0 siblings, 1 reply; 2+ messages in thread
From: Peter Müller @ 2019-01-23 20:22 UTC (permalink / raw)
To: development
As stated in https://bugzilla.ipfire.org/show_bug.cgi?id=11808#c5,
the Cuda hardware acceleration is unused, so the corresponding
configuration file section can be removed.
This partially addresses #11808.
Signed-off-by: Peter Müller <peter.mueller(a)link38.eu>
Cc: Stefan Schantl <stefan.schantl(a)ipfire.org>
---
config/suricata/suricata.yaml | 35 -----------------------------------
1 file changed, 35 deletions(-)
diff --git a/config/suricata/suricata.yaml b/config/suricata/suricata.yaml
index 94e13f501..55b6c05cf 100644
--- a/config/suricata/suricata.yaml
+++ b/config/suricata/suricata.yaml
@@ -933,41 +933,6 @@ profiling:
filename: pcaplog_stats.log
append: yes
-##
-## Hardware accelaration
-##
-
-# Cuda configuration.
-cuda:
- # The "mpm" profile. On not specifying any of these parameters, the engine's
- # internal default values are used, which are same as the ones specified in
- # in the default conf file.
- mpm:
- # The minimum length required to buffer data to the gpu.
- # Anything below this is MPM'ed on the CPU.
- # Can be specified in kb, mb, gb. Just a number indicates it's in bytes.
- # A value of 0 indicates there's no limit.
- data-buffer-size-min-limit: 0
- # The maximum length for data that we would buffer to the gpu.
- # Anything over this is MPM'ed on the CPU.
- # Can be specified in kb, mb, gb. Just a number indicates it's in bytes.
- data-buffer-size-max-limit: 1500
- # The ring buffer size used by the CudaBuffer API to buffer data.
- cudabuffer-buffer-size: 500mb
- # The max chunk size that can be sent to the gpu in a single go.
- gpu-transfer-size: 50mb
- # The timeout limit for batching of packets in microseconds.
- batching-timeout: 2000
- # The device to use for the mpm. Currently we don't support load balancing
- # on multiple gpus. In case you have multiple devices on your system, you
- # can specify the device to use, using this conf. By default we hold 0, to
- # specify the first device cuda sees. To find out device-id associated with
- # the card(s) on the system run "suricata --list-cuda-cards".
- device-id: 0
- # No of Cuda streams used for asynchronous processing. All values > 0 are valid.
- # For this option you need a device with Compute Capability > 1.0.
- cuda-streams: 2
-
##
## Include other configs
##
--
2.16.4
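Not part of the patch itself; as a minimal sanity-check sketch, one could confirm that no top-level `cuda:` section remains in the resulting suricata.yaml. The `has_top_level_section` helper and the shortened `sample` text below are hypothetical illustrations, not code from the tree:

```python
import re

def has_top_level_section(yaml_text: str, name: str) -> bool:
    """Return True if a non-indented YAML mapping key `name` exists."""
    pattern = re.compile(rf"^{re.escape(name)}:\s*$", re.MULTILINE)
    return bool(pattern.search(yaml_text))

# Miniature stand-in for the tail of suricata.yaml after this patch:
sample = """\
profiling:
  pcaplog:
    filename: pcaplog_stats.log
    append: yes

##
## Include other configs
##
"""

print(has_top_level_section(sample, "cuda"))  # False after the removal
```

This only checks top-of-file keys by indentation, which is enough here because `cuda:` was a top-level section; a full YAML parse would be needed for nested keys.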
* Re: [PATCH] Suricata: drop unused cuda HW acceleration
2019-01-23 20:22 [PATCH] Suricata: drop unused cuda HW acceleration Peter Müller
@ 2019-01-29 13:11 ` Stefan Schantl
0 siblings, 0 replies; 2+ messages in thread
From: Stefan Schantl @ 2019-01-29 13:11 UTC (permalink / raw)
To: development
Hello Peter,
thanks for your patch - merged!
Best regards,
-Stefan
> As stated in https://bugzilla.ipfire.org/show_bug.cgi?id=11808#c5,
> the Cuda hardware acceleration is unused, so the corresponding
> configuration file section can be removed.
>
> This partially addresses #11808.
>
> Signed-off-by: Peter Müller <peter.mueller(a)link38.eu>
> Cc: Stefan Schantl <stefan.schantl(a)ipfire.org>
> ---
> config/suricata/suricata.yaml | 35 -----------------------------------
> 1 file changed, 35 deletions(-)
>
> diff --git a/config/suricata/suricata.yaml b/config/suricata/suricata.yaml
> index 94e13f501..55b6c05cf 100644
> --- a/config/suricata/suricata.yaml
> +++ b/config/suricata/suricata.yaml
> @@ -933,41 +933,6 @@ profiling:
> filename: pcaplog_stats.log
> append: yes
>
> -##
> -## Hardware accelaration
> -##
> -
> -# Cuda configuration.
> -cuda:
> - # The "mpm" profile. On not specifying any of these parameters, the engine's
> - # internal default values are used, which are same as the ones specified in
> - # in the default conf file.
> - mpm:
> - # The minimum length required to buffer data to the gpu.
> - # Anything below this is MPM'ed on the CPU.
> - # Can be specified in kb, mb, gb. Just a number indicates it's in bytes.
> - # A value of 0 indicates there's no limit.
> - data-buffer-size-min-limit: 0
> - # The maximum length for data that we would buffer to the gpu.
> - # Anything over this is MPM'ed on the CPU.
> - # Can be specified in kb, mb, gb. Just a number indicates it's in bytes.
> - data-buffer-size-max-limit: 1500
> - # The ring buffer size used by the CudaBuffer API to buffer data.
> - cudabuffer-buffer-size: 500mb
> - # The max chunk size that can be sent to the gpu in a single go.
> - gpu-transfer-size: 50mb
> - # The timeout limit for batching of packets in microseconds.
> - batching-timeout: 2000
> - # The device to use for the mpm. Currently we don't support load balancing
> - # on multiple gpus. In case you have multiple devices on your system, you
> - # can specify the device to use, using this conf. By default we hold 0, to
> - # specify the first device cuda sees. To find out device-id associated with
> - # the card(s) on the system run "suricata --list-cuda-cards".
> - device-id: 0
> - # No of Cuda streams used for asynchronous processing. All values > 0 are valid.
> - # For this option you need a device with Compute Capability > 1.0.
> - cuda-streams: 2
> -
> ##
> ## Include other configs
> ##