Identifying specific events in your Kubernetes Cluster through eBPF
Many eBPF-related tools in the cloud native ecosystem require in-depth knowledge of Linux kernel internals to install and manage. However, most Kubernetes users are not kernel developers, and even people with a solid understanding of the Linux kernel may struggle to configure the related processes. It is therefore all the more important that users can access this information through tools and processes they are already familiar with.
In this blog post, I will showcase how you can configure Tracee, an eBPF-based runtime security and forensics tool, through Tracee Policies. Tracee Policies build upon the familiar framework of using Kubernetes Custom Resource Definitions to configure applications.
Prerequisites
This blog post is part of a series on Tracee fundamentals and the Tracee User Experience. To follow the practical steps in this tutorial, please ensure that you have Tracee installed and running inside a Kubernetes cluster as detailed in the previous blog post:
At this point, you should be able to query your tracee-system namespace for logs from Tracee:
kubectl logs -f daemonset/tracee -n tracee-system
And view an output similar to the following:
Tracee Policies
Tracee detects activity in your cluster in the form of events from the Linux kernel. Overall, we can differentiate between built-in events and custom events.
Tracee comes with a set of built-in events that it can detect automatically inside your cluster. These include:
In comparison, custom events are any events defined and written by users. Custom events can be written in Go or Rego. You can learn more about custom events in the documentation.
Tracee Policies are YAML manifests that allow you to define how Tracee should respond to different events. Every policy has rules that specify how Tracee should respond once a particular event is detected. A rule can respond to one or multiple events with an action. In Tracee Policies, the default action is to log the event. At the time of writing, it is possible to load up to 64 different policies into Tracee.
We can query the default Policy in the tracee-system namespace inside our Kubernetes cluster through the following command:
kubectl get configmap/tracee-policies -n tracee-system -o yaml
This will output an example Tracee Policy as part of the Tracee ConfigMap. You can view an example of the default ConfigMap in the following GitHub repository.
Writing a custom policy to detect file open events of a specific container
To demonstrate Tracee functionality in a practical use case, the following section details writing a policy that detects whether a file has been opened inside an nginx container running in the cluster.
First, we need to create a new deployment. In this case, we are going to use a basic Nginx Deployment. You can find the file in the following GitHub repository.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: demo
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
Next, apply the deployment to a demo namespace inside your Kubernetes cluster:
kubectl create ns demo
kubectl apply -f deployment.yaml
Ensure that the deployment is running inside the demo namespace:
kubectl get all -n demo
The idea is that once we port-forward to the Service, serving the nginx application will require opening a file inside the container.
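The Deployment manifest above does not itself define a Service. If your copy of the repository does not already include one, a minimal Service could look like the following sketch; the name nginx and the port numbers here are assumptions chosen to match the port-forward step used later in this tutorial:

```yaml
# Hypothetical Service manifest for the nginx Deployment above.
# The metadata name "nginx" is an assumption matching the later
# kubectl port-forward command.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: demo
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```

Apply it with kubectl apply -f service.yaml before port-forwarding.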
The following policy will detect whether the nginx html file has been opened or modified in a running container:
apiVersion: aquasecurity.github.io/v1beta1
kind: TraceePolicy
metadata:
  name: syscall--openat-events
  namespace: tracee-system
  annotations:
    description: traces open syscall events
spec:
  scope:
    - container
  rules:
    - event: open
      filters:
        - args.pathname=/usr/share/nginx/html*
Let’s explore every component in more detail:
- If you are familiar with Kubernetes Custom Resource Definitions (CRDs), you won't have any problem understanding the first parts of the policy up to the spec section. Note that the apiVersion and kind of the CRD cannot change, as they are specific to the Tracee installation.
- The scope specifies which resources the policy applies to. In this case, the Policy will apply to all events coming from containers inside the cluster.
- The rules section specifies the events that we want to monitor from those running containers.
- The Tracee documentation lists all of the syscalls that Tracee can track.
In the example above, we apply one filter, the path of the file to track, to the Policy. Different events allow you to apply different filters to the policy. The documentation lists all the filters that can be applied for each event. Filters make it possible to narrow down the policy so that it only responds with an action if the event meets certain conditions. We will look at scopes and filters in more detail in the next tutorial.
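Since the rules list can hold more than one entry, a single policy can watch several related events at once. The following is only a sketch: the policy name is made up, and the event and filter names should be checked against the Tracee documentation before use:

```yaml
# Sketch of a policy watching two related syscalls for the same path.
# The metadata name is hypothetical; verify event/filter names against
# the Tracee documentation.
apiVersion: aquasecurity.github.io/v1beta1
kind: TraceePolicy
metadata:
  name: nginx-open-events
  namespace: tracee-system
  annotations:
    description: traces open and openat syscall events on nginx html files
spec:
  scope:
    - container
  rules:
    - event: open
      filters:
        - args.pathname=/usr/share/nginx/html*
    - event: openat
      filters:
        - args.pathname=/usr/share/nginx/html*
```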
To install the Policy to Tracee inside the tracee-system namespace, first open the Tracee ConfigMap:
kubectl get configmap/tracee-policies -n tracee-system -o yaml
Now, you have two options:
- Save the output of the Tracee ConfigMap into a new YAML manifest and edit the file separately:
kubectl get configmap/tracee-policies -n tracee-system -o yaml > configmap.yaml
- Run a kubectl edit command to edit the ConfigMap right in your Kubernetes cluster:
kubectl edit configmap/tracee-policies -n tracee-system
If you want to keep your Tracee ConfigMap under version control, we highly suggest using the former option.
Add the following entry to the data: section of the Tracee ConfigMap:
data:
  syscall.yaml: |-
    apiVersion: aquasecurity.github.io/v1beta1
    kind: TraceePolicy
    metadata:
      name: syscall--openat-events
      namespace: tracee-system
      annotations:
        description: traces open syscall events
    spec:
      scope:
        - container
      rules:
        - event: open
          filters:
            - args.pathname=/usr/share/nginx/html*
If you are editing the ConfigMap directly (option 2), simply save and close the editor. If you are instead editing a copy of the ConfigMap in a new file (option 1), apply the ConfigMap to your Kubernetes cluster:
kubectl apply -f configmap.yaml -n tracee-system
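If you chose the standalone-file route, the complete manifest could look roughly like the sketch below. Keep any keys already present in your default ConfigMap (such as the shipped default policy) alongside the new entry:

```yaml
# Sketch of the full ConfigMap manifest; merge this with any existing
# data entries from your default tracee-policies ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tracee-policies
  namespace: tracee-system
data:
  syscall.yaml: |-
    apiVersion: aquasecurity.github.io/v1beta1
    kind: TraceePolicy
    metadata:
      name: syscall--openat-events
      namespace: tracee-system
      annotations:
        description: traces open syscall events
    spec:
      scope:
        - container
      rules:
        - event: open
          filters:
            - args.pathname=/usr/share/nginx/html*
```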
Next, restart the DaemonSet to ensure that Tracee picks up the new Policy:
kubectl rollout restart ds/tracee -n tracee-system
Wait for the DaemonSet to restart and stabilize. You can monitor the progress using the following command:
kubectl rollout status ds/tracee -n tracee-system
Trigger our new Policy
Next, we need to trigger the event from the nginx container. To do so, port-forward to the Service and open the default application on localhost:
kubectl port-forward service/nginx -n demo 8080:80
With the port-forward running, open http://localhost:8080 in your browser or run curl localhost:8080 in a second terminal.
Then check for open events:
kubectl logs -f ds/tracee -n tracee-system | grep open
Further details on writing Policies
The steps above detail how to provide additional policies to your Tracee deployment in your cluster.
You can find a list of example Policies in the following directory.
Summarising
This blog post provided an introduction to Tracee Policies, demonstrated how to write and implement new Policies, and shared further resources on example Policies that you can use directly from the main repository.
In the next blog posts, we are going to explore different ways of filtering collected events in more detail. So make sure to subscribe to my blog and our Open Source YouTube channel.
If you found this blog post helpful, consider having a look at our other eBPF-related resources:
- Join our open source Slack community
- Give Tracee a star on GitHub
- Subscribe to our Open Source YouTube Channel to be notified about new content