CPU & Memory Consumption
- Resource Consumption
- Kubeshark Operations
- Resource Limitations: Container Memory and CPU Limitations
- Resource Limitations: Worker Storage Limitation
- Predictable Consumption: Pod Targeting
- Predictable Consumption: Traffic Sampling
- The Browser
Kubeshark’s resource consumption largely depends on the cluster workload and the amount of dissection required. Most resource-consuming operations are performed by the Worker at the node level.
Kubeshark captures and stores all traffic in memory, then filters it based on pod targeting rules: a pod regex and a list of namespaces. Traffic filtered out by these rules is discarded; traffic that passes the filters is dissected. Among all Kubeshark operations (traffic capturing, storing, filtering, and dissection), dissection is the most resource-intensive, and it is performed on demand when a client requests it (e.g., the dashboard, a recording, a running script).
While resource consumption can increase based on the amount of traffic targeted for dissection, it can be limited by setting configuration values.
Container Memory and CPU Limitations
Container resources are limited by default. However, the allocations can be adjusted in the configuration.
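As an illustrative sketch, the Worker container limits might be raised in the Helm values / config file. The exact key names and defaults depend on your Kubeshark version, so verify them against your version's configuration reference:

```yaml
# Illustrative only -- key paths and values are assumptions, not a verbatim reference.
tap:
  resources:
    worker:
      limits:
        cpu: 1000m      # hypothetical CPU limit
        memory: 1536Mi  # hypothetical memory limit
      requests:
        cpu: 100m
        memory: 512Mi
```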
Worker Storage Limitation
Traffic is recorded and stored by the Worker at the Kubernetes node level. Kubeshark generates a PCAP file per L4 stream and a JSON file per API message. Files are deleted based on a TTL:
- PCAP - 10s TTL
- JSON - 5m TTL
If storage exceeds its limit, the pod is evicted. The storage limit is controlled by the `tap.storageLimit` configuration value. To increase this limit, provide a larger value (e.g., 5GB).
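For example, raising the limit to 5GB might look like the following (assuming the `tap.storageLimit` key; confirm the key name against your version):

```yaml
# Sketch, assuming the tap.storageLimit key accepts Kubernetes-style quantities.
tap:
  storageLimit: 5Gi
```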
OOMKilled and Evictions
Whenever a container surpasses its memory limit, it is OOMKilled. If Worker storage surpasses its limit, the pod is evicted.
While these limits ensure Kubeshark does not consume resources beyond what is allotted, they cannot guarantee proper operation if the available resources are inadequate for the amount of traffic Kubeshark must process.
To consume fewer resources and stay within these limits, Kubeshark offers two methods to control the amount of processed traffic:
Pod Targeting
Kubeshark allows targeting specific pods using a pod regex and a list of namespaces. Only traffic related to targeted pods is processed; the rest is discarded. Utilizing Pod Targeting can significantly reduce resource consumption.
Read more in the Pod Targeting section.
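As a sketch, targeting could be expressed as follows. The `tap.regex` and `tap.namespaces` keys, and all values, are assumptions for illustration; check your version's configuration reference:

```yaml
# Hypothetical example: target only pods matching the regex in one namespace.
tap:
  regex: "(catalogue|front-end).*"  # hypothetical pod-name regex
  namespaces:
    - sock-shop                     # hypothetical namespace
```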
Traffic Sampling
`TrafficSampleRate` is a percentage between 0 and 100. It causes Kubeshark to randomly select L4 streams so that the processed share of traffic does not exceed the set percentage.
For example, a configuration like the following will cause Kubeshark to process only 20% of traffic, discarding the rest.
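A hedged sketch of such a configuration; the exact key path shown here is an assumption and may differ between versions:

```yaml
# Key path is an assumption -- verify against your version's configuration reference.
tap:
  misc:
    trafficSampleRate: 20  # process roughly 20% of L4 streams
```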
The Tracer
The Tracer is responsible for TLS interception and can consume a significant amount of CPU, especially during the first few minutes of operation. This high consumption is a result of utilizing eBPF to scan all processes on the host for TLS library invocations. CPU usage should diminish after a few minutes. Disabling the Tracer is recommended if there is no TLS traffic in the cluster or if processing TLS traffic is not a requirement.
The Tracer can be disabled via configuration.
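Assuming the `tap.tls` switch controls TLS interception (an assumption; confirm in your version's documentation), disabling it might look like:

```yaml
# Sketch: tap.tls as the Tracer on/off switch is an assumption.
tap:
  tls: false  # skip TLS interception, avoiding the eBPF process scan
```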
AF_PACKET and PF_RING
AF_PACKET relies on the Linux kernel to deliver network packets. When the kernel becomes busy, an increasing number of packets are dropped, leading to significant memory consumption and potentially causing Worker pods to be OOMKilled.
PF_RING, a popular kernel module, provides access to network packets without going through the kernel's network stack. Because it is more efficient, packet drops are less likely, mitigating the risk of elevated memory consumption.
The Browser
Be aware that your browser can consume a significant amount of CPU when rendering a large volume of traffic in the dashboard.