Anatomy of Kubeshark
A distributed packet capture system with a minimal footprint, designed for large-scale production clusters.
Kubeshark offers two primary deployment methods:
- On-demand, lightweight traffic investigation accessible through a CLI for anyone with kubectl access.
- Long-term deployment via a Helm chart, providing stable and secure access to developers without the need for kubectl access (both methods are sketched right after this list).
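As a rough sketch, the two methods look like this. The Helm repository URL and chart name below are assumptions based on common usage and may differ between versions:
kubeshark tap                                                          # on-demand investigation with the CLI, no permanent footprint
helm repo add kubeshark https://helm.kubeshark.co                      # assumed repository URL
helm install kubeshark kubeshark/kubeshark -n kubeshark --create-namespace   # long-term deployment via Helm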
Kubeshark has no prerequisites such as a CNI, a service mesh, or any coding. It works without a proxy or sidecar and requires no changes to the existing architecture.
Cluster Architecture
Workers are deployed as a DaemonSet, one per node, to sniff that node's traffic. Each Worker listens on port 30001 for requests, which usually come from the Hub.
The Hub is a single container deployed at the Control Plane level. It consolidates the information received from all the Workers and listens on port 8898 for requests, which usually come from the dashboard.
The Front (Dashboard) is a single container deployed at the Control Plane level. It communicates with the Hub to receive consolidated information and serves the dashboard. It listens for requests on port 8899.
All ports are configurable.
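A quick way to confirm which ports the deployed components actually expose is to inspect the Kubernetes objects directly. This sketch assumes the default kubeshark namespace and resource names; adjust to your installation:
kubectl get daemonsets,deployments,services -n kubeshark              # Worker DaemonSet, Hub and Front deployments, and their services
kubectl get service kubeshark-hub kubeshark-front -n kubeshark -o wide   # ports exposed by the Hub and Front services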
The Dashboard
Kubeshark’s dashboard is a React application packaged as a Kubernetes (K8s) deployment. It operates within the K8s control plane and communicates with the Hub via WebSocket, displaying captured traffic in real-time as a scrolling feed.
Service Name: kubeshark-front
NOTE: For more information, refer to the dashboard documentation.
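For example, the dashboard can be reached by port-forwarding the kubeshark-front service. This is a sketch assuming the default kubeshark namespace and that the service exposes the 8899 port described above; the internal service port may differ between chart versions:
kubectl port-forward -n kubeshark service/kubeshark-front 8899:8899   # then open http://localhost:8899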
Hub
The Hub is a Kubernetes deployment that acts as a gateway to the Workers. It hosts an HTTP server and performs several key functions:
- Accepting WebSocket connections along with their respective filters.
- Establishing WebSocket connections to the Workers.
- Receiving processed traffic from the Workers.
- Streaming results back to the requesters.
- Managing Worker states via HTTP requests.
Service Name: kubeshark-hub
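To watch the Hub doing this work, its logs and service can be inspected directly. This sketch assumes the default kubeshark namespace and that the Hub deployment shares the kubeshark-hub name:
kubectl logs -n kubeshark deployment/kubeshark-hub -f                 # tail the Hub logs as it handles Worker and dashboard connections
kubectl port-forward -n kubeshark service/kubeshark-hub 8898:8898     # forward local port 8898 to the Hub service described above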
Worker
Worker pods are deployed into your cluster at the node level as a DaemonSet (one pod per node).
Each Worker pod includes two containers:
- Sniffer: A network packet sniffer.
- Tracer: An optional kernel tracer.
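The two containers can be seen on any Worker pod. This sketch assumes the default kubeshark namespace and that Worker pod names contain "worker"; the container names are expected to correspond to the Sniffer/Tracer split described above:
kubectl get pods -n kubeshark -o wide | grep worker                   # list Worker pods, one per node
kubectl get pod <worker-pod-name> -n kubeshark -o jsonpath='{.spec.containers[*].name}'   # print the container names inside one Worker pod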
Sniffer
The Sniffer is the main container in the Worker pod, responsible for capturing packets using one of the available methods:
- AF_PACKET (available with most kernels)
- eBPF via the Tracer (for modern kernels with cgroup v2 enabled)
- PF_RING (where the PF_RING kernel module is present)
- libpcap (used as a fallback if none of the above works)
The Sniffer attempts to find the best packet capture method, starting from AF_PACKET and falling back all the way to libpcap. Each method has a different performance impact, packet drop rate, and functionality.
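Which method the Sniffer ended up with can usually be confirmed from its logs. This is a sketch assuming the default kubeshark namespace, a container named sniffer, and log lines that mention the chosen method; the exact wording varies by version:
kubectl logs <worker-pod-name> -n kubeshark -c sniffer | grep -i -E 'af_packet|ebpf|pf_ring|pcap'   # look for the selected capture method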
Tracer
Kubeshark offers tracing of kernel-space and user-space functions using eBPF (Extended Berkeley Packet Filter). eBPF is an in-kernel virtual machine that runs programs passed from user space; it was first introduced into the Linux kernel in version 4.4 and has matured since then.
This functionality is performed by the Tracer container. Tracer deployment is optional and can be enabled or disabled using the tap.tls configuration value. When set to false, the Tracer is not deployed.
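Since tap.tls controls Tracer deployment, it can be set either at tap time or through Helm. This is a sketch; support for the --set override may vary by CLI and chart version:
kubeshark tap --set tap.tls=false                                      # skip Tracer deployment for an on-demand tap
helm upgrade kubeshark kubeshark/kubeshark -n kubeshark --set tap.tls=false   # disable it in a Helm-based deployment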
CLI (kubeshark)
The CLI, a binary distribution of the Kubeshark client, is written in Go and usually goes under the name kubeshark. It is an optional component that offers a lightweight, on-demand way to use Kubeshark without leaving any permanent footprint.
Once downloaded, you can simply use the tap command to begin monitoring cluster-wide API traffic:
kubeshark tap - tap all pods in all namespaces
kubeshark tap -n sock-shop "(catalo*|front-end*)" - tap only pods that match the regex in a certain namespace
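When the investigation is done, the resources created by the tap command can be removed again. This assumes the clean command available in recent CLI versions:
kubeshark clean - remove the Kubernetes resources that the tap command created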