NETOBSERV-1470: Reduce memory usage in agent due to kafka batches
jotak committed Feb 13, 2024
1 parent 93e7811 commit 6ce66b9
Showing 11 changed files with 35 additions and 35 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -170,7 +170,7 @@ On the Loki server side, configuration differs depending on how Loki was installed

More performance fine-tuning is possible when using Kafka, i.e. with `spec.deploymentModel` set to `Kafka`:

- You can set the size of the batches (in bytes) sent by the eBPF agent to Kafka, with `spec.agent.ebpf.kafkaBatchSize`. It has a similar impact to `cacheMaxFlows` mentioned above, with higher values generating less traffic and less CPU usage, but more memory consumption and more latency. It is recommended to keep these two settings somewhat aligned (i.e. do not set a very low `cacheMaxFlows` with a high `kafkaBatchSize`, or the other way around). We expect the default values to be a good fit for most environments.
- You can set the size of the batches (in bytes) sent by the eBPF agent to Kafka, with `spec.agent.ebpf.kafkaBatchSize`. It has a similar impact to `cacheMaxFlows` mentioned above, with higher values generating less traffic and less CPU usage, but more memory consumption and more latency. We expect the default values to be a good fit for most environments.

- If you find that the Kafka consumer might be a bottleneck, you can increase the number of replicas with `spec.processor.kafkaConsumerReplicas`, or set up a horizontal autoscaler with `spec.processor.kafkaConsumerAutoscaler`.

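To make the tuning guidance above concrete, here is a minimal sketch of how these fields could be combined in a `FlowCollector` resource. The field paths (`spec.deploymentModel`, `spec.agent.ebpf.cacheMaxFlows`, `spec.agent.ebpf.kafkaBatchSize`, `spec.processor.kafkaConsumerReplicas`) come from the README text above; the concrete values and the resource name are illustrative assumptions, not recommendations.

```yaml
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster                   # assumed name; there is typically a single cluster-wide FlowCollector
spec:
  deploymentModel: Kafka          # route flows through Kafka
  agent:
    ebpf:
      cacheMaxFlows: 100000       # assumed value; keep roughly aligned with kafkaBatchSize
      kafkaBatchSize: 1048576     # 1 MiB, the new default set by this commit
  processor:
    kafkaConsumerReplicas: 3      # assumed value; raise it if the consumer becomes a bottleneck
```

Raising `kafkaBatchSize` trades agent memory for fewer, larger Kafka requests; lowering the default from 10485760 (10 MiB) to 1048576 (1 MiB) is exactly how this commit rebalances that trade-off.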
4 changes: 2 additions & 2 deletions apis/flowcollector/v1alpha1/flowcollector_types.go
@@ -204,9 +204,9 @@ type FlowCollectorEBPF struct {
// +optional
Privileged bool `json:"privileged,omitempty"`

//+kubebuilder:default:=10485760
//+kubebuilder:default:=1048576
// +optional
// kafkaBatchSize limits the maximum size of a request in bytes before being sent to a partition. Ignored when not using Kafka. Default: 10MB.
// kafkaBatchSize limits the maximum size of a request in bytes before being sent to a partition. Ignored when not using Kafka. Default: 1MB.
KafkaBatchSize int `json:"kafkaBatchSize"`

// Debug allows setting some aspects of the internal configuration of the eBPF agent.
4 changes: 2 additions & 2 deletions apis/flowcollector/v1beta1/flowcollector_types.go
@@ -220,9 +220,9 @@ type FlowCollectorEBPF struct {
// +optional
Privileged bool `json:"privileged,omitempty"`

//+kubebuilder:default:=10485760
//+kubebuilder:default:=1048576
// +optional
// `kafkaBatchSize` limits the maximum size of a request in bytes before being sent to a partition. Ignored when not using Kafka. Default: 10MB.
// `kafkaBatchSize` limits the maximum size of a request in bytes before being sent to a partition. Ignored when not using Kafka. Default: 1MB.
KafkaBatchSize int `json:"kafkaBatchSize"`

// `debug` allows setting some aspects of the internal configuration of the eBPF agent.
4 changes: 2 additions & 2 deletions apis/flowcollector/v1beta2/flowcollector_types.go
@@ -227,9 +227,9 @@ type FlowCollectorEBPF struct {
// +optional
Privileged bool `json:"privileged,omitempty"`

//+kubebuilder:default:=10485760
//+kubebuilder:default:=1048576
// +optional
// `kafkaBatchSize` limits the maximum size of a request in bytes before being sent to a partition. Ignored when not using Kafka. Default: 10MB.
// `kafkaBatchSize` limits the maximum size of a request in bytes before being sent to a partition. Ignored when not using Kafka. Default: 1MB.
KafkaBatchSize int `json:"kafkaBatchSize"`

// `advanced` allows setting some aspects of the internal configuration of the eBPF agent.
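For context on the commit title: `kafkaBatchSize` caps how large a request to a single partition can grow before it is flushed, so the agent's producer can hold buffers up to that size per in-flight batch, and a 10 MiB cap translates directly into higher steady-state memory. Below is a rough sketch of how a setting like the `KafkaBatchSize` field above might be wired into a `segmentio/kafka-go` writer; the constructor, the broker/topic parameters, and the non-batch options are assumptions for illustration, not the agent's actual code.

```go
package exporter

import (
	"time"

	"github.com/segmentio/kafka-go"
)

// newFlowWriter is an illustrative sketch: it caps each per-partition request
// at kafkaBatchSize bytes, mirroring the operator field changed in this commit.
// Bigger caps mean fewer requests but larger in-memory buffers in the agent,
// which is why the default drops from 10485760 (10 MiB) to 1048576 (1 MiB).
func newFlowWriter(brokers []string, topic string, kafkaBatchSize int64) *kafka.Writer {
	return &kafka.Writer{
		Addr:         kafka.TCP(brokers...),
		Topic:        topic,
		BatchBytes:   kafkaBatchSize,         // maximum size of a request sent to a partition
		BatchTimeout: 200 * time.Millisecond, // assumed flush interval
		Balancer:     &kafka.Hash{},          // assumed partitioning strategy
	}
}
```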
12 changes: 6 additions & 6 deletions bundle/manifests/flows.netobserv.io_flowcollectors.yaml
@@ -145,10 +145,10 @@ spec:
type: string
type: array
kafkaBatchSize:
default: 10485760
default: 1048576
description: 'kafkaBatchSize limits the maximum size of a
request in bytes before being sent to a partition. Ignored
when not using Kafka. Default: 10MB.'
when not using Kafka. Default: 1MB.'
type: integer
logLevel:
default: info
@@ -2589,10 +2589,10 @@ spec:
type: string
type: array
kafkaBatchSize:
default: 10485760
default: 1048576
description: '`kafkaBatchSize` limits the maximum size of
a request in bytes before being sent to a partition. Ignored
when not using Kafka. Default: 10MB.'
when not using Kafka. Default: 1MB.'
type: integer
logLevel:
default: info
@@ -5283,10 +5283,10 @@ spec:
type: string
type: array
kafkaBatchSize:
default: 10485760
default: 1048576
description: '`kafkaBatchSize` limits the maximum size of
a request in bytes before being sent to a partition. Ignored
when not using Kafka. Default: 10MB.'
when not using Kafka. Default: 1MB.'
type: integer
logLevel:
default: info
@@ -67,7 +67,7 @@ metadata:
],
"imagePullPolicy": "IfNotPresent",
"interfaces": [],
"kafkaBatchSize": 10485760,
"kafkaBatchSize": 1048576,
"logLevel": "info",
"privileged": false,
"resources": {
@@ -249,7 +249,7 @@ metadata:
],
"imagePullPolicy": "IfNotPresent",
"interfaces": [],
"kafkaBatchSize": 10485760,
"kafkaBatchSize": 1048576,
"logLevel": "info",
"privileged": false,
"resources": {
12 changes: 6 additions & 6 deletions config/crd/bases/flows.netobserv.io_flowcollectors.yaml
@@ -131,10 +131,10 @@ spec:
type: string
type: array
kafkaBatchSize:
default: 10485760
default: 1048576
description: 'kafkaBatchSize limits the maximum size of a
request in bytes before being sent to a partition. Ignored
when not using Kafka. Default: 10MB.'
when not using Kafka. Default: 1MB.'
type: integer
logLevel:
default: info
@@ -2575,10 +2575,10 @@ spec:
type: string
type: array
kafkaBatchSize:
default: 10485760
default: 1048576
description: '`kafkaBatchSize` limits the maximum size of
a request in bytes before being sent to a partition. Ignored
when not using Kafka. Default: 10MB.'
when not using Kafka. Default: 1MB.'
type: integer
logLevel:
default: info
@@ -5269,10 +5269,10 @@ spec:
type: string
type: array
kafkaBatchSize:
default: 10485760
default: 1048576
description: '`kafkaBatchSize` limits the maximum size of
a request in bytes before being sent to a partition. Ignored
when not using Kafka. Default: 10MB.'
when not using Kafka. Default: 1MB.'
type: integer
logLevel:
default: info
2 changes: 1 addition & 1 deletion config/samples/flows_v1beta1_flowcollector.yaml
@@ -26,7 +26,7 @@ spec:
cpu: 100m
limits:
memory: 800Mi
kafkaBatchSize: 10485760
kafkaBatchSize: 1048576
processor:
port: 2055
imagePullPolicy: IfNotPresent
2 changes: 1 addition & 1 deletion config/samples/flows_v1beta2_flowcollector.yaml
@@ -21,7 +21,7 @@ spec:
# - "FlowRTT"
interfaces: []
excludeInterfaces: ["lo"]
kafkaBatchSize: 10485760
kafkaBatchSize: 1048576
# Custom optional resources configuration
resources:
requests:
12 changes: 6 additions & 6 deletions docs/FlowCollector.md
@@ -268,9 +268,9 @@ ebpf describes the settings related to the eBPF-based flow reporter when the "ag
<td><b>kafkaBatchSize</b></td>
<td>integer</td>
<td>
kafkaBatchSize limits the maximum size of a request in bytes before being sent to a partition. Ignored when not using Kafka. Default: 10MB.<br/>
kafkaBatchSize limits the maximum size of a request in bytes before being sent to a partition. Ignored when not using Kafka. Default: 1MB.<br/>
<br/>
<i>Default</i>: 10485760<br/>
<i>Default</i>: 1048576<br/>
</td>
<td>false</td>
</tr><tr>
@@ -4580,9 +4580,9 @@ Agent configuration for flows extraction.
<td><b>kafkaBatchSize</b></td>
<td>integer</td>
<td>
`kafkaBatchSize` limits the maximum size of a request in bytes before being sent to a partition. Ignored when not using Kafka. Default: 10MB.<br/>
`kafkaBatchSize` limits the maximum size of a request in bytes before being sent to a partition. Ignored when not using Kafka. Default: 1MB.<br/>
<br/>
<i>Default</i>: 10485760<br/>
<i>Default</i>: 1048576<br/>
</td>
<td>false</td>
</tr><tr>
@@ -9240,9 +9240,9 @@ Agent configuration for flows extraction.
<td><b>kafkaBatchSize</b></td>
<td>integer</td>
<td>
`kafkaBatchSize` limits the maximum size of a request in bytes before being sent to a partition. Ignored when not using Kafka. Default: 10MB.<br/>
`kafkaBatchSize` limits the maximum size of a request in bytes before being sent to a partition. Ignored when not using Kafka. Default: 1MB.<br/>
<br/>
<i>Default</i>: 10485760<br/>
<i>Default</i>: 1048576<br/>
</td>
<td>false</td>
</tr><tr>
12 changes: 6 additions & 6 deletions hack/cloned.flows.netobserv.io_flowcollectors.yaml
@@ -95,8 +95,8 @@ spec:
type: string
type: array
kafkaBatchSize:
default: 10485760
description: 'kafkaBatchSize limits the maximum size of a request in bytes before being sent to a partition. Ignored when not using Kafka. Default: 10MB.'
default: 1048576
description: 'kafkaBatchSize limits the maximum size of a request in bytes before being sent to a partition. Ignored when not using Kafka. Default: 1MB.'
type: integer
logLevel:
default: info
@@ -1792,8 +1792,8 @@ spec:
type: string
type: array
kafkaBatchSize:
default: 10485760
description: '`kafkaBatchSize` limits the maximum size of a request in bytes before being sent to a partition. Ignored when not using Kafka. Default: 10MB.'
default: 1048576
description: '`kafkaBatchSize` limits the maximum size of a request in bytes before being sent to a partition. Ignored when not using Kafka. Default: 1MB.'
type: integer
logLevel:
default: info
@@ -3654,8 +3654,8 @@ spec:
type: string
type: array
kafkaBatchSize:
default: 10485760
description: '`kafkaBatchSize` limits the maximum size of a request in bytes before being sent to a partition. Ignored when not using Kafka. Default: 10MB.'
default: 1048576
description: '`kafkaBatchSize` limits the maximum size of a request in bytes before being sent to a partition. Ignored when not using Kafka. Default: 1MB.'
type: integer
logLevel:
default: info
