Welcome to libkefir’s documentation!¶
libkefir /lɪbkəˈfɪər/ – KErnel FIltering Rules

All your filters in one bottle.
Introduction¶
About libkefir¶
Libkefir is a project aiming to simplify the management of network filtering rules on recent Linux systems. Its main objective is to provide an interface to easily turn rules expressed in a variety of formats into flexible, editable, ready-to-use BPF programs.
Filtering rules can be constructed ex nihilo, or can be converted from an expression coming from other filtering tools. Currently supported are expressions from:
- ethtool receive-side ntuples filters
- TC (Linux Traffic Control) flower classifier rules
In the future, support could be added for:
- libpcap expressions used for example with tcpdump or Wireshark
- iptables rules
- …
Note
In all pages of this documentation, “BPF” should be interpreted as “eBPF”, the “extended” 64-bit instruction set version of BPF, with support for maps, function calls, etc. Unless otherwise specified, it does not refer to the legacy “classic” BPF.
Concepts¶
High-level overview¶
Libkefir works with filters, which are sets of rules, themselves containing one or more match objects. Please refer to the Terminology section of the API documentation for more details about those terms.
A filter is a set of rules that can be converted into a BPF program, which can later be loaded into the kernel and attached to a network interface. This entails a number of functional blocks, provided by the library to achieve those tasks. Below is a high-level description of these different functional blocks.
See also the available documentation on the Workflow for the library, where the articulation between the main blocks is addressed in more depth.
Creating rules¶
Before a filter can be converted and applied to a traffic flow, it first needs to be created. Libkefir provides several interfaces for building rules and for attaching them to a filter object.
One way to build a rule is to “manually” create the rule object (the C struct associated with it). Helpers in libkefir can be used to ease the creation of match objects for the rule. Once the rule is built, it can be passed to the library in order to be added to a given filter (initialized by the library).
Another way of building rules is to call into functions taking expressions from other filtering tools as arguments, and converting those strings into rule objects that can be similarly attached to a filter.
See Rule crafting and Building rules in libkefir for more details on rule creation, or Filter management for building and handling filters.
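For illustration, the expression-based path can be sketched as follows. This is a minimal sketch, not a verbatim program: the header name (libkefir.h) and the exact flower expression are assumptions.

```c
/* Sketch: build a filter from a TC flower expression.
 * Assumes the library's public header is libkefir.h. */
#include <stdio.h>
#include <stdlib.h>

#include <libkefir.h>

int main(void)
{
	struct kefir_filter *filter = kefir_filter_init();

	if (!filter)
		return EXIT_FAILURE;

	/* Parse a flower-style expression and append the rule (index -1) */
	if (kefir_rule_load_l(filter, KEFIR_RULE_TYPE_TC_FLOWER,
			      "ip_proto udp dst_port 53 action drop", -1)) {
		kefir_filter_destroy(filter);
		return EXIT_FAILURE;
	}

	printf("filter now holds %u rule(s)\n", kefir_filter_size(filter));
	kefir_filter_destroy(filter);
	return EXIT_SUCCESS;
}
```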
Conversion to BPF¶
Filters can eventually be turned into a BPF program, but this is not a direct step. A C file is produced first (although some API functions can hide this intermediary step). This C program depends on the features used by the filter (How many match objects in the rules? What fields need to be collected from packet headers? Does the filter use masks?). See also Converting the filter into a C program.
The second step is obviously to convert this C program into BPF bytecode. This is done by calling the clang and llc executables, that must be present on the machine. The result is an ELF object file, that can later be reused to load the BPF bytecode into the kernel. More details are provided in section Compiling to BPF, loading, attaching filters of the API documentation.
Loading and attaching the program¶
Functions are provided to easily load and attach the BPF program derived from the filter. These functions also take care of creating and initializing a BPF map, in which the filtering rules are stored. Additional details on how rules are stored and applied can be found in section Packet matching. Information about the relevant functions for loading and attaching the BPF programs can also be found in section Compiling to BPF, loading, attaching filters of the API documentation.
Saving, restoring¶
Besides being converted to BPF and loaded onto the system, a filter generated with the library can be saved into an external file as a JSON object, so it can be restored at a later time.
Additional Resources¶
- LWN.net article: A thorough introduction to eBPF
- Cilium’s BPF and XDP Reference Guide
- Netronome’s eBPF Offload Getting Started Guide
- Blog post: Dive into BPF: a list of reading material
Workflow¶
Here is a description of how the main functional blocks of the library articulate with one another, and of the different steps required to apply a filter to a traffic flow.
Filters and BPF program¶
From a high-level perspective, there are two distinct, major steps that constitute libkefir’s workflow. The library creates filters, then converts them and loads/attaches them onto a system; therefore we have:
- Create a filter object.
- Convert, load and attach the filter.
Or with a simple diagram:
+==============================+
|                              |
|        Create filter         |
|                              |
+==============+===============+
               |
               |
               | <filter>
               |
               v
+==============+===============+
|                              |
| Convert, load, attach filter |
|                              |
+==============================+
On this diagram and on the following ones, double lines (=) indicate a “meta-step” that does not require any action per se in the workflow, but which is later broken down into smaller steps.
First phase: filter creation¶
The filter¶
Creating a filter can in turn be broken down into more steps. First, a filter has to be initialized (kefir_filter_init()). Then rules must be added to that filter.
+==============================+
|                              |    +-----------------------+
| Create filter                +-+->+ Initialize filter     +-+
|                              | |  | * kefir_filter_init() | |
+==============================+ |  +-----------------------+ |  +---------------------------+
                                 |                            +->+ Add rules to filter       |
                                 |                            |  | * kefir_filter_add_rule() |
                                 |  +=======================+ |  +---------------------------+
                                 +->+ Create rules          +-+
                                    +=======================+
Creating rules¶
Rules can be created in several ways. One possibility is to create and build a struct kefir_rule object directly, then to pass it to the library to add it to the filter (kefir_filter_add_rule()). Because building all the parts of the rule can be somewhat tricky, a helper function can be used to build the match objects. The flow becomes:
- Initialize a filter.
- Build match objects.
- Build rules.
- Add rules to filter.
Another possibility is to use the function kefir_rule_load() (or kefir_rule_load_l()) to parse a rule expressed in the syntax of other filtering tools. The resulting rule is similarly added to the filter.
+=====================+                  +--------------------------+
| Create rules        +-+-------------->+ Create rule from expr.   |
+=====================+ |               | * kefir_rule_load()      |
                        |               | * kefir_rule_load_l()    |
                        |               +--------------------------+
                        |
                        |
                        |  +--------------------------+   +--------------------------+
                        +->+ Build struct kefir_match |   | Build struct kefir_rule  |
                           | * kefir_match_create()   +-->+ * kefir_rule_create()    |
                           | (or manually)            |   | (or manually)            |
                           +--------------------------+   +--------------------------+
See Rule crafting and Building rules in libkefir for more details on rule creation.
Second phase: filter conversion and use¶
Simplified workflow¶
Converting the filter into a C program, then into a BPF program, and loading
then attaching the program in the kernel can all be done in a single step, with
one of the two functions provided for that purpose (kefir_filter_attach()
or kefir_filter_attach_attr()
). This is the “simple way” of getting a
filter up and running, without having to take care of all the details.
+==============================+     +------------------------------+
|                              |     | Actually convert/load/attach |
| Convert, load, attach filter +---->+ * kefir_filter_attach()      |
|                              |     | * kefir_filter_attach_attr() |
+==============================+     +------------------------------+
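In code, the simplified workflow is a single call; the sketch below assumes the kefir_filter_attach() signature suggested by the API documentation (filter pointer plus interface index), and the interface name is an example.

```c
#include <net/if.h>
#include <stddef.h>

#include <libkefir.h>

/* Sketch: convert, compile, load and attach a filter in one call.
 * The kefir_filter_attach() signature is assumed from the API docs. */
static struct bpf_object *attach_on(struct kefir_filter *filter,
				    const char *ifname)
{
	int ifindex = if_nametoindex(ifname); /* 0 if interface is unknown */

	if (!ifindex)
		return NULL;

	return kefir_filter_attach(filter, ifindex);
}
```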
Unrolling the steps¶
Alternatively, the library offers functions with a finer granularity to perform each task independently. In that case, the steps are the following:
- Convert the filter into a cprog object (kefir_filter_convert_to_cprog()).
- Generate the C source code from that object, save it to a file (kefir_cprog_to_file()).
- Compile the C source file into BPF bytecode, stored in an ELF object file (kefir_cfile_compile_to_bpf()).
- Load the program from the object file into the kernel (kefir_cprog_load_to_kernel()).
- Possibly attach the program to a hook in the kernel, such as XDP (kefir_cprog_load_attach_to_kernel()).
The last function, kefir_cprog_load_attach_to_kernel(), is actually an alternative to kefir_cprog_load_to_kernel(), doing both loading and attachment.
The diagram becomes as follows:
+==============================+
|                              |
| Convert, load, attach filter |
|                              |
+==+===========================+
   |
   |    +---------------------------------------+
   +--->+ Convert filter to cprog               |
        | * kefir_filter_convert_to_cprog()     |
        +--+------------------------------------+
           |
           |    +---------------------------------------+
           +--->+ Generate C source code from cprog     |
                | * kefir_cprog_to_file()               |
                +--+------------------------------------+
                   |
                   |    +---------------------------------------+
                   +--->+ Compile C source file to BPF          |
                        | * kefir_cfile_compile_to_bpf()        |
                        +--+------------------------------------+
                           |
                           |    +---------------------------------------+
                           +--->+ Load BPF from object file             |
                           |    | * kefir_cprog_load_to_kernel()        |
                           |    +---------------------------------------+
                           |
                           |    +---------------------------------------+
                           +--->+ Load and attach BPF                   |
                                | * kefir_cprog_load_attach_to_kernel() |
                                +---------------------------------------+
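The unrolled steps map onto the following sketch (the header name and the file names under /tmp are examples, and error handling is reduced to a minimum):

```c
#include <stddef.h>

#include <libkefir.h>

/* Sketch: run the five unrolled steps one by one. */
static struct bpf_object *unrolled(struct kefir_filter *filter, int ifindex)
{
	struct kefir_cprog_attr cprog_attr = { .target = KEFIR_CPROG_TARGET_XDP };
	struct kefir_compil_attr compil_attr = { 0 }; /* default clang/llc paths */
	struct kefir_load_attr load_attr = { .ifindex = ifindex };
	struct bpf_object *bpf_obj = NULL;
	struct kefir_cprog *cprog;

	/* 1. Convert the filter into a cprog object */
	cprog = kefir_filter_convert_to_cprog(filter, &cprog_attr);
	if (!cprog)
		return NULL;

	/* 2. Generate the C source code and save it to a file */
	if (kefir_cprog_to_file(cprog, "/tmp/filter.c"))
		goto out;

	/* 3. Compile the C file into BPF bytecode ("/tmp/filter.o") */
	if (kefir_cfile_compile_to_bpf("/tmp/filter.c", &compil_attr))
		goto out;

	/* 4 + 5. Load into the kernel and attach to the interface */
	bpf_obj = kefir_cprog_load_attach_to_kernel(cprog, "/tmp/filter.o",
						    &load_attr);
out:
	kefir_cprog_destroy(cprog);
	return bpf_obj;
}
```

Destroying the cprog after attachment is fine: as explained in the Clean up section, the loaded program and the filter are not affected.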
Complete diagram¶
Here is what the complete diagram, with the different workflows, looks like:
+==============================+
|                              |
|        Create filter         |
|                              |
+==+===========================+
   |
   |    +-----------------------+  <filter>
   +--->+ Initialize filter     +--------------------------------------------------------------+
   |    | * kefir_filter_init() |                                                              |
   |    +-----------------------+                                                              |
   |                                 +--------------------------+                              |
   |                                 | Create rule from expr.   |                              |
   |                            +--->+ * kefir_rule_load()      +------------------------------+  +---------------------------+
   |    +=====================+      | * kefir_rule_load_l()    |                       <rule> +->+ Add rules to filter       |
   +--->+ Create rules        +-+    +--------------------------+                              |  | * kefir_filter_add_rule() |
        +=====================+ |                                                              |  +----------+----------------+
                                |  <match>                                                     |             |
                                |  +--------------------------+   +--------------------------+ |             |
                                |  | Build struct kefir_match |   | Build struct kefir_rule  | |             |
                                +->+ * kefir_match_create()   +-->+ * kefir_rule_create()    +-+             |
                                   | (or manually)            |   | (or manually)            |               |
                                   +--------------------------+   +--------------------------+               |
                                                                                                             |
               +---------------------------------------------------------------------------------------------+
               | <filter>
               v
+==============+===============+
|                              |
| Convert, load, attach filter |
|                              |
+==+===========================+
   |
   |                                      +------------------------------+
   |  <filter>                            | Actually convert/load/attach |
   +------------------------------------->+ * kefir_filter_attach()      |
   |                                      | * kefir_filter_attach_attr() |
   |  <filter>                            +------------------------------+
   |
   |    +-----------------------------------+
   +--->+ Convert filter to cprog           |
        | * kefir_filter_convert_to_cprog() |
        +--+--------------------------------+
           |
           | <cprog>
           |
           |    +-----------------------------------+
           +--->+ Generate C source code from cprog |
                | * kefir_cprog_to_file()           |
                +--+--------------------------------+
                   |
                   | <C file name>
                   |
                   |    +--------------------------------+
                   +--->+ Compile C source file to BPF   |
                        | * kefir_cfile_compile_to_bpf() |
                        +--+-----------------------------+
                           |
                           | <cprog, object file name>
                           |
                           |    +---------------------------------------+
                           +--->+ Load BPF from object file             |
                           |    | * kefir_cprog_load_to_kernel()        |
                           |    +---------------------------------------+
                           |
                           |    +---------------------------------------+
                           +--->+ Load and attach BPF                   |
                                | * kefir_cprog_load_attach_to_kernel() |
                                +---------------------------------------+
Clean up¶
Once the objects created with the library are no longer needed, they can be destroyed to free the memory that was allocated for them.
Rule, match and value objects are simple structs containing no pointers, so they do not need to be destroyed; if pointers to such structs were allocated, they can simply be free()-d. Rules attached to a filter are not to be freed by the user: the function for destroying a filter object takes care of them.
Function kefir_filter_destroy() is the one taking care of the filters (struct kefir_filter *). It frees the memory for all the rules attached to the filter, and for the filter itself.
C program objects (struct kefir_cprog *) can be destroyed with kefir_cprog_destroy(). This function may or may not destroy the filter attached to the cprog object, depending on how the filter is attached: by default, a cprog links to a filter at its creation, but when this cprog object is destroyed the filter remains, and can be reused for other cprog objects. However, if the KEFIR_CPROG_FLAG_CLONE_FILTER flag was passed in a struct kefir_cprog_attr when creating the cprog, then a clone of the filter is attached instead. Since the user has no means to retrieve a pointer to this clone, the clone filter is destroyed at the same time as the cprog object.
At last, kefir_bpfobj_destroy() can be used to destroy a struct bpf_object * produced when loading a BPF program into the kernel. The function really just calls bpf_object__close() from libbpf, but it felt more consistent to provide, in this library, a wrapper for all objects produced by functions of the library.
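In code, a typical teardown sequence looks like this sketch (the header name is an assumption; the order releases the BPF object before the objects it was derived from):

```c
#include <libkefir.h>

/* Sketch: release all objects created through the library. */
static void teardown(struct kefir_filter *filter, struct kefir_cprog *cprog,
		     struct bpf_object *bpf_obj)
{
	/* Wrapper around libbpf's bpf_object__close() */
	kefir_bpfobj_destroy(bpf_obj);

	/* Frees the cprog; the filter survives unless it was cloned with
	 * KEFIR_CPROG_FLAG_CLONE_FILTER at cprog creation time */
	kefir_cprog_destroy(cprog);

	/* Frees the filter and every rule attached to it */
	kefir_filter_destroy(filter);
}
```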
API¶
This document presents the different functions and values exposed by the library.
Terminology¶
Libkefir works with filters, which are sets of rules. Each rule contains one or several match objects, which are patterns against which packets are evaluated. If the comparisons on all matches pass successfully, then the action related to the rule is applied to that packet.
Once the filters have been initialized and built by adding one or several rules, they can be turned into BPF-compatible programs. Such programs are represented in the library by cprog objects. They can in turn be compiled into BPF bytecode (as ELF object files), which can be loaded into the kernel and attached.
The internal structure of filters and cprogs is hidden from the user. The structures for the rules and matches are exposed through the API.
Rule crafting¶
See also Building rules.
enum kefir_comp_operator {
        KEFIR_OPER_EQUAL,
        KEFIR_OPER_LT,
        KEFIR_OPER_LEQ,
        KEFIR_OPER_GT,
        KEFIR_OPER_GEQ,
        KEFIR_OPER_DIFF,
        __KEFIR_MAX_OPER
};
enum kefir_action_code {
        KEFIR_ACTION_CODE_DROP,
        KEFIR_ACTION_CODE_PASS,
        __KEFIR_MAX_ACTION_CODE
};
enum kefir_match_type {
        KEFIR_MATCH_TYPE_UNSPEC = 0,
        KEFIR_MATCH_TYPE_ETHER_SRC,
        KEFIR_MATCH_TYPE_ETHER_DST,
        KEFIR_MATCH_TYPE_ETHER_ANY,     /* Either source or destination */
        KEFIR_MATCH_TYPE_ETHER_PROTO,
        KEFIR_MATCH_TYPE_IP_4_SRC,
        KEFIR_MATCH_TYPE_IP_4_DST,
        KEFIR_MATCH_TYPE_IP_4_ANY,
        KEFIR_MATCH_TYPE_IP_4_TOS,
        KEFIR_MATCH_TYPE_IP_4_TTL,
        KEFIR_MATCH_TYPE_IP_4_L4PROTO,
        KEFIR_MATCH_TYPE_IP_4_L4DATA,
        KEFIR_MATCH_TYPE_IP_4_L4PORT_SRC,
        KEFIR_MATCH_TYPE_IP_4_L4PORT_DST,
        KEFIR_MATCH_TYPE_IP_4_L4PORT_ANY,
        KEFIR_MATCH_TYPE_IP_6_SRC,
        KEFIR_MATCH_TYPE_IP_6_DST,
        KEFIR_MATCH_TYPE_IP_6_ANY,
        KEFIR_MATCH_TYPE_IP_6_TOS,      /* Actually TCLASS, traffic class */
        KEFIR_MATCH_TYPE_IP_6_TTL,
        KEFIR_MATCH_TYPE_IP_6_L4PROTO,
        KEFIR_MATCH_TYPE_IP_6_L4DATA,
        KEFIR_MATCH_TYPE_IP_6_L4PORT_SRC,
        KEFIR_MATCH_TYPE_IP_6_L4PORT_DST,
        KEFIR_MATCH_TYPE_IP_6_L4PORT_ANY,
        KEFIR_MATCH_TYPE_IP_ANY_TOS,
        KEFIR_MATCH_TYPE_IP_ANY_TTL,
        KEFIR_MATCH_TYPE_IP_ANY_L4PROTO,
        KEFIR_MATCH_TYPE_IP_ANY_L4DATA,
        KEFIR_MATCH_TYPE_IP_ANY_L4PORT_SRC,
        KEFIR_MATCH_TYPE_IP_ANY_L4PORT_DST,
        KEFIR_MATCH_TYPE_IP_ANY_L4PORT_ANY,
        KEFIR_MATCH_TYPE_VLAN_ID,
        KEFIR_MATCH_TYPE_VLAN_PRIO,
        KEFIR_MATCH_TYPE_VLAN_ETHERTYPE,
        KEFIR_MATCH_TYPE_CVLAN_ID,
        KEFIR_MATCH_TYPE_CVLAN_PRIO,
        KEFIR_MATCH_TYPE_CVLAN_ETHERTYPE,
        KEFIR_MATCH_TYPE_SVLAN_ID,
        KEFIR_MATCH_TYPE_SVLAN_PRIO,
        KEFIR_MATCH_TYPE_SVLAN_ETHERTYPE,
        __KEFIR_MAX_MATCH_TYPE
};
/*
* A value object, to be matched against data collected from one field of a
* packet.
*/
union kefir_value {
        struct ether_addr eth;
        struct in6_addr ipv6;
        struct in_addr ipv4;
        uint32_t u32;
        uint16_t u16;
        uint8_t u8;
        uint8_t raw[sizeof(struct in6_addr)];
};
/**
* A match object, representing a pattern to match against values collected
* from header fields of a network packet.
* @match_type: a type for the match, indicating the size and semantics of the
* data to match
* @comp_operator: comparison operator to indicate what type of comparison
* should be performed (equality, or other arithmetic operator)
* @value: a value to match
* @mask: a mask to apply to packet data before trying to match it against the
* value
* @flags: for internal use only, will be overwritten when adding parent rule
* to filter
*/
struct kefir_match {
        enum kefir_match_type match_type;
        enum kefir_comp_operator comp_operator;
        union kefir_value value;
        uint8_t mask[16];
        uint64_t flags;
};
/**
* A rule object, representing one rule that will be evaluated against packet
* data. If all patterns match, the action code will be returned from the BPF
* program.
* @matches: array of match objects to try against packet data
* @action: action code to return from BPF program if packet matches with rule
*/
struct kefir_rule {
        struct kefir_match matches[KEFIR_MAX_MATCH_PER_RULE];
        enum kefir_action_code action;
};
/**
* Get the number of bytes expected for a value for a match of the given type.
* @type: match type whose length is requested
* @return length (in bytes) of the value for the given type
*/
unsigned int kefir_bytes_for_type(enum kefir_match_type type);
/**
* Fill and possibly create a match object.
* @match: pointer to the match object to fill, if NULL the object will be
* allocated by the function and should be later free()-d by the caller
* @type: type for the match (indicating the header field with which the match
* pattern should be compared)
* @oper: comparison operator for the operation to do to check if a packet
* matches a pattern
* @value: pointer to the data to compare to the content of the packets, which
* MUST be of the correct size for the match type in use (this can be a
* pointer to a 2-byte long integer for matching on L4 ports, or to a
* struct ether_addr for matching on MAC address, for example)
* @mask: bitmask to apply to packet data before comparing it to the value
* @is_net_byte_order: true if value and masks are already in network byte
* order (for example if MAC address was obtained with
* ether_aton()), false otherwise
* @return a pointer to the match object (to be free()-d by the caller if
* allocated by the function) on success, NULL otherwise
*/
struct kefir_match *
kefir_match_create(struct kefir_match *match,
                   enum kefir_match_type type,
                   enum kefir_comp_operator oper,
                   const void *value,
                   const uint8_t *mask,
                   bool is_net_byte_order);
/**
* Create and fill a rule object.
* @matches: array of pointers to match objects to fill the rule with
* @nb_matches: number of match objects in the array
* @action: action code to return from the BPF program when a packet matches all
* patterns for the rule
* @return a pointer to the rule object (to be free()-d by the caller) on
* success, NULL otherwise
*/
struct kefir_rule *
kefir_rule_create(struct kefir_match * const *matches,
                  unsigned int nb_matches,
                  enum kefir_action_code action);
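As an illustration of the declarations above, the sketch below crafts a rule dropping packets with L4 destination port 53; the header name, the port value and the DROP action are illustrative assumptions.

```c
#include <arpa/inet.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

#include <libkefir.h>

/* Sketch: one match (dst port == 53, network byte order), one DROP rule. */
static int add_drop_dns(struct kefir_filter *filter)
{
	struct kefir_match match = { 0 };
	struct kefir_match *matches[] = { &match };
	uint16_t port = htons(53);
	struct kefir_rule *rule;
	int err;

	/* Fill the match object; mask is NULL, value already in network order */
	if (!kefir_match_create(&match, KEFIR_MATCH_TYPE_IP_ANY_L4PORT_DST,
				KEFIR_OPER_EQUAL, &port, NULL, true))
		return -1;

	/* Wrap the match into a rule with a DROP action */
	rule = kefir_rule_create(matches, 1, KEFIR_ACTION_CODE_DROP);
	if (!rule)
		return -1;

	/* Append at the end of the filter's rule list */
	err = kefir_filter_add_rule(filter, rule, -1);
	if (err)
		free(rule);
	return err;
}
```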
Filter management¶
Functions provided for filter management are called by users for building and manipulating filter objects. In particular, they are used to parse strings provided by the user and to validate them as grammatically correct filtering rules. They implement basic operations such as rule addition, deletion, or dump.
In addition to “manual” rule crafting as exposed in the previous section, filtering rules can be provided in one of the following formats:
- Ethtool receive-side ntuples filters (see Ethtool ntuples filters).
- TC (Linux Traffic Control) flower classifier rules (see TC flower).
Other formats (libpcap filters, iptables rules, OvS rules…) may be added in the future.
The following functions are available for managing filters:
struct kefir_filter;
enum kefir_rule_type {
        KEFIR_RULE_TYPE_ETHTOOL_NTUPLE,
        KEFIR_RULE_TYPE_TC_FLOWER,
};
/**
* Create and initialize a new filter object.
* @return a pointer to the filter object on success (to be free()-d by the
* caller), NULL otherwise
*/
struct kefir_filter *kefir_filter_init(void);
/**
* Destroy a filter object and free all associated memory.
* @filter: filter to destroy
*/
void kefir_filter_destroy(struct kefir_filter *filter);
/**
* Copy a filter object.
* @filter: the filter to copy
* @return a new filter object (the caller is responsible for its destruction)
*/
struct kefir_filter *kefir_filter_clone(const struct kefir_filter *filter);
/**
* Count the number of rules present in the list of a filter.
* @filter: the filter for which to count the rules
* @return the number of rules in that filter
*/
unsigned int kefir_filter_size(const struct kefir_filter *filter);
/**
* Add a rule to a filter.
* @filter: object to add the rule to
* @rule: rule to add to the filter (filter links to the rule, does not clone
* it)
* @index: index of the rule in the list (if filter already has a rule at this
* index, insert before and shift rules with a greater or equal index),
* if negative then start from the end of the list
* @return 0 on success, error code otherwise
*/
int kefir_filter_add_rule(struct kefir_filter *filter,
                          struct kefir_rule *rule,
                          int index);
/**
* Create a rule from an expression and add it to a filter.
* @filter: object to add the rule to
* @rule_type: type of the rule to add
* @user_rule: array of words defining the rule in the format for rule_type
* @rule_size: number of words in user_rule
* @index: index of the rule in the list (if filter already has a rule at this
* index, insert before and shift rules with a greater or equal index),
* if negative then start from the end of the list
* @return 0 on success, error code otherwise
*/
int kefir_rule_load(struct kefir_filter *filter,
                    enum kefir_rule_type rule_type,
                    const char * const *user_rule,
                    unsigned int rule_size,
                    int index);
/**
* Create a rule from an expression and add it to a filter.
* @filter: object to add the rule to
* @rule_type: type of the rule to add
* @user_rule: single string defining the rule in the format for rule_type
* @index: index of the rule in the list (if filter already has a rule at this
* index, insert before and shift rules with a greater or equal index),
* if negative then start from the end of the list
* @return 0 on success, error code otherwise
*/
int kefir_rule_load_l(struct kefir_filter *filter,
                      enum kefir_rule_type rule_type,
                      const char *user_rule,
                      int index);
/**
* Delete a rule at given index from a filter.
* @filter: object to remove the rule from
* @index: index of the rule to delete
* @return 0 on success, error code otherwise
*/
int kefir_rule_delete_by_id(struct kefir_filter *filter,
                            int index);
/**
 * Dump all rules of a filter to the console.
* OUTPUT IS NOT STABLE, USE FOR DEBUG ONLY!
* (See also kefir_filter_save_to_file().)
* @filter: object to dump
*/
void kefir_filter_dump(const struct kefir_filter *filter);
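To illustrate the two loading variants, the sketch below adds ethtool ntuple rules; the expressions follow ethtool's -N/--config-ntuple syntax (where action -1 means drop), but the exact values and the header name are example assumptions.

```c
#include <libkefir.h>

/* Sketch: same rule format, word-array and single-string variants. */
static int add_ntuple_rules(struct kefir_filter *filter)
{
	const char * const words[] = {
		"flow-type", "tcp4", "dst-port", "80", "action", "-1",
	};

	/* Word-array variant: 6 words, appended at the end (index -1) */
	if (kefir_rule_load(filter, KEFIR_RULE_TYPE_ETHTOOL_NTUPLE,
			    words, 6, -1))
		return -1;

	/* Single-string variant of the same format */
	return kefir_rule_load_l(filter, KEFIR_RULE_TYPE_ETHTOOL_NTUPLE,
				 "flow-type udp4 dst-port 53 action -1", -1);
}
```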
Saving and restoring a filter¶
Additional functions are provided to save a filter object to an external file, and to reload it at a later time from that file. The filter object is stored as a JSON object.
The detailed specifications of that JSON file are not provided at this time. This feature is intended to be used for saving and restoring filters built with the library, but not to provide a way for users to modify an intermediate version of the filter.
Two functions are needed here: one to save the filter, one to load it again afterwards.
/**
* Save a filter to a file
* @filter: filter to save
* @filename: name of the file where to save the filter (it will be created
* if necessary, overwritten otherwise), if "-" then write to stdout
* @return 0 on success, error code otherwise
*/
int kefir_filter_save_to_file(const struct kefir_filter *filter,
                              const char *filename);
/**
* Load a filter from a backup
* @filename: name of the file to load the filter from, if "-" then read from
* stdin
* @return a pointer to the filter object on success (to be free()-d by the
* caller), NULL otherwise
*/
struct kefir_filter *kefir_filter_load_from_file(const char *filename);
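Usage is symmetric, as in this sketch (the header name and file path are examples):

```c
#include <stddef.h>

#include <libkefir.h>

/* Sketch: persist a filter as JSON, then rebuild it from the file. */
static struct kefir_filter *save_then_reload(const struct kefir_filter *filter)
{
	if (kefir_filter_save_to_file(filter, "/tmp/filter.json"))
		return NULL;

	/* ... possibly much later, or in another process ... */
	return kefir_filter_load_from_file("/tmp/filter.json");
}
```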
Converting the filter into a C program¶
Once a filter object has been created and filled with a set of rules, it can be converted into a BPF-compatible C program. This C program is internally represented as a buffer containing the C source code generated from the filter, stored in a kefir_cprog object.
In addition to the source code for the C program, such an object holds a number of options about the filter, such as the target (TC or XDP hook) for later conversion into BPF. Such options can be passed when converting the filter into the C program object.
The generated C program can be returned to the user as a buffer containing the source code, or stored into an external file.
struct kefir_cprog;
enum kefir_cprog_target {
        KEFIR_CPROG_TARGET_XDP,
        KEFIR_CPROG_TARGET_TC,
};
/**
* Destroy and free allocated memory for a C program object.
* @cprog: C program object to destroy
*/
void kefir_cprog_destroy(struct kefir_cprog *cprog);
/*
* Flags for a struct kefir_cprog_attr.
*
* KEFIR_CPROG_FLAG_INLINE_FUNC
* Force inlining of functions (no BPF-to-BPF calls).
* KEFIR_CPROG_FLAG_NO_LOOPS
* Ask clang to unroll loops, do not rely on BPF bounded loops support.
* KEFIR_CPROG_FLAG_CLONE_FILTER
* The filter object is normally attached to the cprog object created. Use
* this flag to create and attach a clone instead. Use if you intend to
* further edit the filter afterwards, but wish to keep the cprog object
* unchanged.
* KEFIR_CPROG_FLAG_NO_VLAN
* Disable generation of VLAN-related code (use if traffic and filter rules
* never rely on VLAN tags).
* KEFIR_CPROG_FLAG_USE_PRINTK
* Generate some calls to bpf_trace_printk() to help with debug.
*/
#define KEFIR_CPROG_FLAG_INLINE_FUNC _BITUL(0)
#define KEFIR_CPROG_FLAG_NO_LOOPS _BITUL(1)
#define KEFIR_CPROG_FLAG_CLONE_FILTER _BITUL(2)
#define KEFIR_CPROG_FLAG_NO_VLAN _BITUL(3)
#define KEFIR_CPROG_FLAG_USE_PRINTK _BITUL(4)
/**
* Struct containing attributes used when converting a filter into a C program.
* @target: target for conversion (TC/XDP)
* @license: license string to use for program, defaults to "Dual BSD/GPL"
* @flags: option flags for conversion
*/
struct kefir_cprog_attr {
        enum kefir_cprog_target target;
        const char *license;
        unsigned int flags;
};
/**
* Convert a filter into an eBPF-compatible C program.
* @filter: filter to convert
* @attr: attributes to use for the conversion (target, license, flags)
* @return an object containing all parameters required to create an
* eBPF-compatible C program
*/
struct kefir_cprog *
kefir_filter_convert_to_cprog(const struct kefir_filter *filter,
                              const struct kefir_cprog_attr *attr);
/**
* Dump a C program generated by the library.
* @cprog: program to dump
*/
void kefir_cprog_to_stdout(const struct kefir_cprog *cprog);
/**
* Write a generated C program into a buffer.
* @cprog: C program to write
* @buf: pointer to a buffer to write the C program into, if NULL the object
* will be allocated by the function and should be later free()-d by the
* caller
* @buf_len: pointer to buffer size, will be updated if buffer is reallocated
* @return 0 on success, error code otherwise
*/
int kefir_cprog_to_buf(const struct kefir_cprog *cprog,
                       char **buf,
                       unsigned int *buf_len);
/**
* Save a C program to a file on the disk.
* @cprog: C program to save
* @filename: name of file to write into (existing file will be overwritten)
* @return 0 on success, error code otherwise
*/
int kefir_cprog_to_file(const struct kefir_cprog *cprog,
                        const char *filename);
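Putting these together, the sketch below converts a filter for the TC hook with inlined functions and emits the generated source (the header name and output path are example assumptions):

```c
#include <libkefir.h>

/* Sketch: filter -> cprog -> C source on stdout and on disk. */
static int emit_c_source(const struct kefir_filter *filter)
{
	struct kefir_cprog_attr attr = {
		.target = KEFIR_CPROG_TARGET_TC,
		.flags = KEFIR_CPROG_FLAG_INLINE_FUNC,
	};
	struct kefir_cprog *cprog;
	int err;

	cprog = kefir_filter_convert_to_cprog(filter, &attr);
	if (!cprog)
		return -1;

	kefir_cprog_to_stdout(cprog);	/* print for inspection */
	err = kefir_cprog_to_file(cprog, "/tmp/filter.c");
	kefir_cprog_destroy(cprog);
	return err;
}
```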
Compiling to BPF, loading, attaching filters¶
A C program under the form of a kefir_cprog object can later be turned into a BPF program. The library does not proceed to the compilation itself; instead, it calls into the clang and llc executables, and relies on them for generating the BPF bytecode. One consequence of this choice is that clang and llc must be present on the machine where the filter is compiled, and available to the application using the library, for this to work. Another aspect to take into consideration is that the BPF bytecode is not kept in memory and managed by the library; instead, it is stored in an ELF object file generated by clang and llc.
At this time it is not expected to support the generation of BPF bytecode directly from a filter object, without the intermediate C program.
The library is also able to load the program, and even to attach it to a BPF hook (XDP for now, TC as well in the future), on a given interface. Hardware offload is supported as well for compatible devices.
Some functions have overlapping functionalities, and are proposed to better adapt to the different possible workflows. Thus we have:
- kefir_cprog_load_to_kernel(), used to simply load a cprog object into the kernel.
- kefir_cprog_load_attach_to_kernel(), which also loads a cprog (actually calling into the previous function), but additionally attaches it to a given interface (for XDP).
- kefir_filter_attach(), a “shortcut” function that does all the work from converting the filter to loading and attaching it. The workflow becomes really simple and straightforward, but it provides few options.
- kefir_filter_attach_attr(), which does the same as the previous one, but takes more arguments to offer a wider range of options at code generation, compilation and load/attach times.
/**
* Struct containing attributes used when compiling a C program into BPF code.
* @object_file: optional name for the output file, if NULL will be derived
* from c_file if possible (".c" extension will be replaced by
* ".o")
* @ll_file: optional name for intermediary ll file (LLVM IR), if NULL will be
* derived from c_file (".ll")
* @clang_bin: optional path to clang executable, if NULL defaults to
* /usr/bin/clang
* @llc_bin: optional path to llc executable, if NULL defaults to /usr/bin/llc
*/
struct kefir_compil_attr {
        const char *object_file;
        const char *ll_file;
        const char *clang_bin;
        const char *llc_bin;
};
/**
* Compile a C file into BPF bytecode as an ELF object file.
* @c_file: input C source code file
* @attr: object containing optional attributes to use when compiling the
* program
* @return 0 on success, error code otherwise
*/
int kefir_cfile_compile_to_bpf(const char *c_file,
                               const struct kefir_compil_attr *attr);
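For example, compilation with explicit output names and tool paths could look like this sketch (all attribute fields are optional; the header name and paths are example assumptions):

```c
#include <libkefir.h>

/* Sketch: compile the generated C file with explicit attributes. */
static int compile_filter(void)
{
	struct kefir_compil_attr attr = {
		.object_file = "/tmp/filter.o",
		.ll_file     = "/tmp/filter.ll",
		.clang_bin   = "/usr/bin/clang",
		.llc_bin     = "/usr/bin/llc",
	};

	return kefir_cfile_compile_to_bpf("/tmp/filter.c", &attr);
}
```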
/**
* Unload and destroy a BPF object and free all associated memory.
* @obj: pointer to the BPF object to destroy
*/
void kefir_bpfobj_destroy(struct bpf_object *obj);
/**
* Retrieve the file descriptor of the filter program associated with a BPF
* object.
* @obj: the BPF object resulting from a program load or attachment
* @return a file descriptor related to that program
*/
int kefir_bpfobj_get_prog_fd(const struct bpf_object *obj);
/**
* Struct containing attributes used when loading a BPF program from an object
* file.
* @ifindex: interface index, for indicating where the filter should be
* attached (or where the map should be allocated, for hardware
* offload, even if the program is simply loaded)
* @log_level: log level to pass to kernel verifier when loading the program
* @flags: for XDP: passed to netlink to set XDP mode (socket buffer, driver,
* hardware) (see <linux/if_link.h>)
* for TC: TODO (No support yet for TC)
*/
struct kefir_load_attr {
        int ifindex;
        int log_level;
        unsigned int flags;
};
/**
* Load the BPF program associated to a C program object into the kernel.
* @cprog: cprog used to generate the BPF program
* @objfile: name of ELF object file containing the BPF program generated from
* the filter
* @attr: object containing optional attributes to use when loading the program
* @return a BPF object containing information related to the loaded program,
* NULL on error
*/
struct bpf_object *
kefir_cprog_load_to_kernel(const struct kefir_cprog *cprog,
                           const char *objfile,
                           const struct kefir_load_attr *attr);
/**
* Load the BPF program associated to a C program object into the kernel, then
* immediately attach it to a given interface and fill the map with rules
* associated to the filter.
* @cprog: cprog used to generate the BPF program
* @objfile: name of ELF object file containing the BPF program generated from
* the filter
* @attr: object containing optional attributes to use when loading the program
* @return a BPF object containing information related to the loaded program,
* NULL on error
*/
struct bpf_object *
kefir_cprog_load_attach_to_kernel(const struct kefir_cprog *cprog,
const char *objfile,
const struct kefir_load_attr *attr);
/**
* Fill the map associated to a filter loaded in the kernel with the rules
* associated with that filter.
* @cprog: cprog used to generate the BPF program loaded on the system
* @bpf_obj: BPF object resulting from program load
* @return 0 on success, error code otherwise
*/
int kefir_cprog_fill_map(const struct kefir_cprog *cprog,
const struct bpf_object *bpf_obj);
/**
* Dump the commands (in bpftool format) that can be used to fill the map with
* the rules associated with a cprog object (whether loaded or not).
* @cprog: cprog used to generate the BPF program
* @bpf_obj: optional BPF object resulting from program load, used if not NULL
* for retrieving map id
* @buf: pointer to a buffer where to store the commands, if NULL the object
* will be allocated by the function and should be later free()-d by the
* caller
* @buf_len: pointer to buffer size, will be updated if buffer is reallocated
* @return 0 on success, error code otherwise
*/
int kefir_cprog_map_update_cmd(const struct kefir_cprog *cprog,
const struct bpf_object *bpf_obj,
char **buf,
unsigned int *buf_len);
/**
* All-in-one shortcut function to turn a filter into a cprog object, convert
* it into a BPF program, load it, and attach it to an interface.
* @filter: filter to use
* @ifindex: interface to which the filter should be attached
* @return a BPF object containing information related to the loaded program,
* NULL on error
*/
struct bpf_object *
kefir_filter_attach(const struct kefir_filter *filter,
int ifindex);
/**
* All-in-one shortcut function to turn a filter into a cprog object, convert
* it into a BPF program, load it, and attach it to an interface.
* @filter: filter to use
* @cprog_attr: object containing attributes to use when generating C code from
* filter
* @compil_attr: object containing optional attributes to use when compiling
* the filter into BPF
* @load_attr: object containing attributes to use when loading the program
* @return a BPF object containing information related to the loaded program,
* NULL on error
*/
struct bpf_object *
kefir_filter_attach_attr(const struct kefir_filter *filter,
const struct kefir_cprog_attr *cprog_attr,
const struct kefir_compil_attr *compil_attr,
const struct kefir_load_attr *load_attr);
Handling errors¶
The library attempts to give the caller application flexibility in handling error messages. In particular, it does not print error messages unconditionally to the console: instead, any error message is written to a dedicated internal buffer, whose content is made accessible via a specific function.
/**
* Change the printing function used for error messages.
* @fn: function used to print the messages, taking a prefix (used by library
* components to tell what part of the library the error comes from), a
* format string (a la printf), and a list of arguments; it should return an
* integer, as the default printing function does
*/
void kefir_set_print(int (*fn)(const char *prefix,
const char *format,
va_list ap));
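As an illustration, a caller could redirect all library messages to stderr with a custom printing function matching this signature. This is a minimal sketch: print_to_stderr is a hypothetical name, and the registration call is shown as it would appear in application code.

```c
#include <stdarg.h>
#include <stdio.h>

/* Custom printer: prepend the library's prefix and send everything to
 * stderr. The function matches the signature expected by kefir_set_print(). */
static int print_to_stderr(const char *prefix, const char *format, va_list ap)
{
	int res = fprintf(stderr, "%s", prefix);

	return res + vfprintf(stderr, format, ap);
}

/* Then, once at application start-up:
 *	kefir_set_print(print_to_stderr);
 */
```

From then on, all error messages produced by the library go through the custom function instead of the default one.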
Building rules in libkefir¶
The library offers several ways to build rules for filter objects. They can be built “manually”, by constructing a C structure that will be directly added to the filter, or they can be built by the library from an expression in one of the supported syntaxes.
Building rules¶
Foreword¶
Libkefir offers several interfaces for building rule objects to add to filters. Actually, the structure of the rules is part of the API, and exposed to the user, who is free to build the rules exactly as they intend. This document provides some precisions on this structure, and some explanations on the helpers provided by the library to interact with it.
See also section Rule crafting of the API documentation for more details on the structures and functions exposed by the library at that level.
Struct kefir_rule¶
The struct kefir_rule and its members are as follows:
/*
* A value object, to be matched against data collected from one field of a
* packet.
*/
union kefir_value {
struct ether_addr eth;
struct in6_addr ipv6;
struct in_addr ipv4;
uint32_t u32;
uint16_t u16;
uint8_t u8;
uint8_t raw[sizeof(struct in6_addr)];
};
/*
* - A type for the match, indicating the semantics of the data to match
* (semantics needed for optimizations).
* - An operator to indicate what type of comparison should be performed
* (equality, or other arithmetic or logic operator).
* - A value to match.
* - One mask to apply to the field.
* - Option flags, indicating for example that masks are used for this match.
*/
struct kefir_match {
enum match_type match_type;
enum kefir_comp_operator comp_operator;
union kefir_value value;
uint8_t mask[16];
uint64_t flags;
};
/*
* A rule object, representing one rule that will be evaluated against packet
* data. If all patterns match, the action code will be returned from the BPF
* program.
*/
struct kefir_rule {
struct kefir_match matches[KEFIR_MAX_MATCH_PER_RULE];
enum kefir_action_code action;
};
A rule contains a fixed number of match objects, but not all of them are used in the resulting filter (processing stops at the first match object with match type KEFIR_MATCH_TYPE_UNSPEC). It also contains an action code, indicating the action to apply to the packet when all patterns in the different match objects are validated by the packet fields.
Match objects (struct kefir_match) contain the value to evaluate against a specific field of the packet (designated by the match type), along with additional information on how to perform this evaluation (which comparison operator and, if any, which mask should be used). Note that the flags are for internal use only, and are reset by the library when the rule is added to a filter.
The value contained in a match object (union kefir_value) represents just a single value. Because values to compare with the packet can take a variety of formats, the object is a union. Here are some important notes to keep in mind when manipulating values:
- The value MUST be left-aligned in the union, whatever its length. For example, if the value is a two-byte integer representing a layer 4 port, the two bytes of the value must be stored at the left side of the union, so that the value can be accessed as the .u16 member.
- All values longer than one byte MUST be stored in network byte order, so that the BPF program does not lose instructions converting them before comparing them to the packet's values. This often means calling helpers such as htons() for integers. Note that some functions such as ether_aton() or inet_pton(), used to convert character strings into Ethernet or IP addresses respectively, already store their results in network byte order.
Libkefir helpers for building rules¶
Because it can feel cumbersome to handle all these aspects when storing values in match objects, the library provides two helpers.
The first one, kefir_match_create(), takes the items needed to build a match object, and takes care of creating and storing the value correctly. This function deduces the relevant length from the match type provided, so the user does not pass the length of the value. Because it may be useful to know the expected value length for a given type (e.g. to check, before calling kefir_match_create(), that the data pointed to is big enough), the function kefir_bytes_for_type() is provided to that effect.
For example:
struct kefir_match match = {0};
uint8_t src_ip[4];
inet_pton(AF_INET, "10.10.10.1", &src_ip);
/* This check is not necessary if we know the length of the value
* associated with KEFIR_MATCH_TYPE_IP_4_SRC, but can be used if in
* doubt, to avoid passing a pointer to a memory area shorter than what
* kefir_match_create() will read.
*/
if (sizeof(src_ip) != kefir_bytes_for_type(KEFIR_MATCH_TYPE_IP_4_SRC))
return -1;
if (!kefir_match_create(&match, KEFIR_MATCH_TYPE_IP_4_SRC,
KEFIR_OPER_EQUAL, &src_ip, NULL, true))
return -1;
The second helper, called kefir_rule_create(), can be used to build a rule from one or several match objects, whether or not they were created with kefir_match_create().
Again, please refer to section Rule crafting of the API documentation for more details on those functions.
Ethtool ntuples filters¶
About ethtool hardware filters¶
Ethtool is a utility used to query or control network driver and hardware settings. It relies on a specific syntax to parse and set up what it calls “ntuples filters” on the hardware, for those NICs that support it (mostly Intel’s). Such filters allow for sending a packet into a specific hardware queue, for configuring the hash options for packets matching a rule for RSS (Receive-Side Scaling), or for dropping the packet. The latter is of particular interest in our context.
As these filters are designed to be used at the hardware level, the syntax for ethtool rules is rather simple, and mostly consists of a combination of fields to check (possibly with masks).
For these reasons, the syntax for ethtool ntuples is well suited for expressing filtering rules, and was integrated into libkefir as a way to build a filter object.
Example¶
Here is an example rule used to drop incoming IPv4 HTTP traffic with ethtool:
# ethtool -U eth0 flow-type tcp4 src-port 80 action -1
Libkefir expects an expression identical to that command line, starting after the name of the binary (ethtool), the option (-U) and the interface name. So the relevant expression would be:
flow-type tcp4 src-port 80 action -1
This expression can be fed to kefir_rule_load_l(), for example:
if (kefir_rule_load_l(filter,
KEFIR_RULE_TYPE_ETHTOOL_NTUPLE,
"flow-type tcp4 src-port 80 action -1",
0)) {
printf("Error: %s\n", kefir_strerror());
return -1;
}
Example rules can be found in ethtool-based tests. More details on ethtool ntuples syntax and semantics can be found on the ethtool manual page.
Current support¶
Supported keywords:
src xx:yy:zz:aa:bb:cc [m xx:yy:zz:aa:bb:cc]
dst xx:yy:zz:aa:bb:cc [m xx:yy:zz:aa:bb:cc]
proto N [m N]
src-ip ip-address [m ip-address]
dst-ip ip-address [m ip-address]
tos N [m N]
tclass N [m N]
l4proto N [m N]
src-port N [m N]
dst-port N [m N]
l4data N [m N]
vlan-etype N [m N]
vlan N [m N]
dst-mac xx:yy:zz:aa:bb:cc [m xx:yy:zz:aa:bb:cc]
action N
Unsupported keywords:
spi N [m N]
user-def N [m N]
Non-relevant keywords:
context N
vf N
queue N
loc N
delete N
TC flower¶
About TC flower filters¶
When compared with ethtool ntuples filters, TC filters are applied higher in the Linux stack. Socket buffers have been allocated by that point and provide a number of offsets that can help match packets, which results in a number of additional filtering options that can be supported.
Some of them are supported by libkefir, in order to quickly generate rules from TC flower expressions.
Example¶
The following rule can be used to filter out incoming IPv4 HTTP packets:
# tc filter add dev eth0 ingress protocol ip flower ip_proto tcp dst_port 80 action drop
The same line, starting after tc filter add dev eth0 ingress, can be passed to the library to create a new rule. In our example, it would be the following string:
protocol ip flower ip_proto tcp dst_port 80 action drop
So a call to kefir_rule_load_l(), used to build rules from a string containing the whole expression, would look like this:
if (kefir_rule_load_l(filter,
KEFIR_RULE_TYPE_TC_FLOWER,
"protocol ip flower ip_proto tcp dst_port 80 action drop",
0)) {
printf("Error: %s\n", kefir_strerror());
return -1;
}
Other example rules displaying the various supported options can be found in the tests for TC flower-based filters. For details on the syntax and the semantics of the different keywords in TC flower expressions, please refer to the tc-flower manual page.
Current support¶
Supported keywords:
dst_mac MASKED_LLADDR
src_mac MASKED_LLADDR
vlan_id VID
vlan_prio PRIORITY
vlan_ethtype VLAN_ETH_TYPE
cvlan_id VID
cvlan_prio PRIORITY
cvlan_ethtype VLAN_ETH_TYPE
ip_proto IP_PROTO
ip_tos MASKED_IP_TOS
ip_ttl MASKED_IP_TTL
dst_ip PREFIX
src_ip PREFIX
dst_port NUMBER
src_port NUMBER
action ACTION_SPEC
Unsupported keywords:
mpls_label LABEL
mpls_tc TC
mpls_bos BOS
mpls_ttl TTL
dst_port MIN_VALUE-MAX_VALUE
src_port MIN_VALUE-MAX_VALUE
tcp_flags MASKED_TCP_FLAGS
type MASKED_TYPE
code MASKED_CODE
arp_tip IPV4_PREFIX
arp_sip IPV4_PREFIX
arp_op ARP_OP
arp_sha MASKED_LLADDR
arp_tha MASKED_LLADDR
enc_key_id NUMBER
enc_dst_ip PREFIX
enc_src_ip PREFIX
enc_dst_port NUMBER
enc_tos NUMBER
enc_ttl NUMBER
geneve_opts OPTIONS
ip_flags IP_FLAGS
Non-relevant keywords:
classid CLASSID
hw_tc TCID
indev ifname
verbose
skip_sw
skip_hw
Internals¶
This page is not yet a full walk-through of the library internals. It should rather be seen as a collection of notes about particular points of interest, for a better understanding of how functionalities are implemented.
Structure of a rule¶
A filter object contains a list of rules. Each rule contains a set of match objects. Those match objects each contain:
- A match type, indicating against which field of a packet the value should be compared
- A value to match
- A comparison operator
- A mask
- A set of flags, for easier processing
Additionally, a rule associates an action to this set of match objects.
When a rule contains several non-null match objects, it applies to a packet if and only if all of the values in those objects (conjunction) are successfully compared to the related values from the packet. Implementing rules based on a disjunction of patterns is done by creating several distinct rules and trying them one after the other.
Match objects for which the type is KEFIR_MATCH_TYPE_UNSPEC are considered null and ignored. Processing of match objects halts after the first null element (or when all objects in the rule have been processed).
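The evaluation of the match objects of one rule can be pictured with the following stand-alone sketch. This is an illustration of the semantics, not the library's actual code: the types are minimal stand-ins for the real ones, and the "validated" field stands for the result of comparing one match object to the packet.

```c
#include <stdbool.h>

/* Minimal stand-ins for the library types, for illustration only */
#define KEFIR_MAX_MATCH_PER_RULE 5

enum match_type {
	KEFIR_MATCH_TYPE_UNSPEC = 0,
	KEFIR_MATCH_TYPE_IP_4_SRC,
};

struct match {
	enum match_type match_type;
	bool validated;	/* stand-in: did this match compare successfully? */
};

/* A rule matches if all non-null match objects are validated (conjunction).
 * Iteration stops at the first KEFIR_MATCH_TYPE_UNSPEC entry. */
static bool rule_matches(const struct match *matches)
{
	for (int i = 0; i < KEFIR_MAX_MATCH_PER_RULE; i++) {
		if (matches[i].match_type == KEFIR_MATCH_TYPE_UNSPEC)
			break;	/* null match object: stop processing */
		if (!matches[i].validated)
			return false;	/* one failed comparison rejects the rule */
	}
	return true;
}
```

Note that a rule containing only null match objects matches vacuously, since no comparison can fail.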
Packet matching¶
It is expected that the library will be used with a moderate number of rules. Still, we want to support more than just two or three rules, so translating the filter object to C and BPF should not consist of hard-coding all matching steps in the BPF program itself. To provide more flexibility, the program relies on BPF maps instead:
- The C/BPF program is responsible for dissecting the different headers needed by the program to perform the filtering operations (e.g. “is this packet UDP?”).
- Then it collects the different values likely to be used in the actual filtering rules (e.g. UDP source and destination ports).
- Finally, it tries to apply each rule stored in the map, comparing the collected values to those of the match objects in the rule. On the first matching rule, the process ends and the action code associated to this rule is returned. This effectively means that the rules with the lowest indices in the filter list are the ones with the highest priority at evaluation time.
If no rule matches against the packet, the packet passes. Future improvements could offer an option to change this default behavior. In the meantime, appending a rule that matches all packets to the list of rules can be used as a workaround.
Filter optimization¶
No filter optimization has been implemented yet, but this section describes the concept.
After a new rule has been added to a filter object, an optimization pass should be run on the filter. It could include:
- Deletion of rules rendered useless by more generic rules
- Reordering of rules, for better performance, as long as the semantics is preserved
- Grouping of rules (using masks or value ranges, for example)
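As a toy illustration of the grouping idea (hypothetical, since no optimization is implemented in the library yet): two rules matching destination ports 80 and 81 could be folded into a single masked rule, because the two port numbers differ only by their last bit.

```c
#include <stdint.h>

/* Does a port match a rule value under a given mask? */
static int masked_match(uint16_t port, uint16_t value, uint16_t mask)
{
	return (port & mask) == (value & mask);
}

/* 80 is 0b1010000 and 81 is 0b1010001: with mask 0xfffe (which clears the
 * last bit), a single rule with value 80 covers both ports. */
```

Detecting such opportunities automatically, without changing the semantics of the user's rules, is exactly what an optimization pass would have to do.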
Of course, this should not alter the semantics of the rules loaded by the user. Note that this optimization pass might have consequences in terms of future management of the filter. Rules are loaded into a filter at an index passed by the user. This index can be modified if another rule is later inserted at a lower index, but it remains simple to keep track of the changes. However, if rules are reorganized or combined, it becomes impossible for users to track such indices: one would no longer be able to delete a given rule in its original form, if it has been merged with other rules. If we add filter optimization to the library in the future, this issue could be addressed in several ways:
- By keeping a set of rules as provided by the user. This would mean that the optimized set of rules may have to be fully regenerated from this user-provided set each time the user loads a new rule (instead of adding rules incrementally).
- By offering an API function to disable such optimizations.
- By making the library smart enough to split rules that were previously merged, whenever required.
- By setting a convention and restricting deletion of rules to the rules currently present in the optimized filter, and not allowing users to access them by their initial format. This requires a function able to dump the rules currently loaded, and to provide a handle for each rule, so as to indicate which rule to delete, for example. This last workaround corresponds to the current state of the code: the functions to delete a rule based on its current index, and to list rules with their index so that users can find the index they need, are already provided.
Again, to date, no such filter optimization has been implemented (contributions are welcome!).
C program optimization¶
The C program generated from the filter should be as simple, short and efficient as possible. Given the choices made for creating the program, we could generate a very generic BPF parser and dissector that would retrieve the values for all supported fields in the packet, and simply try to apply the rules from the map after that. However, that would involve a great number of unused values. Furthermore, some features, such as applying masks or comparison operators, would be systematically enforced, making the program longer and more complex than necessary for many filters.
Therefore, C program generation attempts to limit the number of elements shipped in the program. In particular, the generation depends on the following items:
- The use, in any rule of the filter, of “special” comparison operators, “special” meaning in this context “different from equality”. Unused comparison operators are optimized out of the program.
- The use of masks in any rule of the filter. If no rule uses a mask, then applying masks when comparing values is optimized out. If even one match object in at least one rule of the filter uses a mask, then masks are checked and applied to all values in the program.
- The use of the different match types (header fields). Unused match types are not used in the program, and are even optimized out of the struct used in the program to store collected information about the packet. If no field in a given header is checked, dissection of this header may be left out during translation (assuming it is not necessary for processing an upper layer header).
- Declarations, such as helper function declarations, are done on a case-by-case basis, and only emitted if necessary.
In the future, further optimizations might be applied to the maximum length of the values present in the rules. Currently, the length is aligned on the longest possible value (16 bytes, for IPv6 addresses). This is a performance bottleneck, especially if several comparison operators and/or masks are used, as comparing values to packet fields becomes expensive. Having only shorter values would allow for easier comparisons (e.g. just one instruction if the values are 64 bits long or shorter).
Other optimizations include adapting the number of passes in the loops. This is used in particular for:
- The number of rules to retrieve from the BPF map, and to apply to the packet (this is directly derived from the number of rules present in the table, and so in the filter).
- The number of comparisons to perform for each rule, set to the maximum number of match objects present in any one rule of the set.
Notes about hardware offload¶
BPF filters generated with libkefir are compatible with BPF hardware offload. However, users trying to run such filters on hardware should be aware of the following points, which apply to Netronome's Agilio SmartNICs, the only NICs to date that support BPF hardware offload.
- No BTF support. Hardware offload does not support BTF, although the library attempts to load BTF objects whenever possible, so users compiling the filters with a recent compiler (LLVM v8+) should expect (harmless) warnings.
- Limited entry size. Hardware offload has a limited size for map entries: 64 bytes per entry (key + value). Therefore, the program may fail to load if the generated map entries are too big. This can happen if:
  - The filter has at least one rule that uses many match objects (e.g. 3 or more).
  - The filter has at least one rule that uses masks.
- Not all BPF helpers are supported. When generating C code, do not use the flag for generating calls to the bpf_trace_printk() helper for debugging: the hardware does not support it.
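For XDP, requesting hardware offload when loading a filter comes down to setting the flags field of the load attributes. The sketch below repeats the struct kefir_load_attr definition from the API documentation; the flag value comes from <linux/if_link.h>, and the interface name used to resolve the ifindex is a hypothetical placeholder.

```c
#include <linux/if_link.h>	/* XDP_FLAGS_HW_MODE */
#include <net/if.h>		/* if_nametoindex() */

/* Same struct as declared in the API documentation */
struct kefir_load_attr {
	int ifindex;
	int log_level;
	unsigned int flags;
};

/* Request hardware offload: the map is allocated on the NIC designated by
 * ifindex even if the program is only loaded, not attached. */
struct kefir_load_attr attr = {
	.ifindex = 0,	/* to be set with if_nametoindex("ens4") or similar */
	.log_level = 0,
	.flags = XDP_FLAGS_HW_MODE,
};
```

The attr object would then be passed to kefir_cprog_load_to_kernel() or kefir_cprog_load_attach_to_kernel().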
Roadmap¶
This document lists ideas for future evolutions of the library. It should not be interpreted as a commitment from the authors, but only as notes on possible features and improvements to come. They are provided for informational purposes, and to share ideas with people willing to help. Any contribution is welcome!
Here is the current list:
Complete support for all keywords for ethtool/TC flower (some are not supported at the moment, e.g. anything related to IPsec).
Add support for additional rule expression syntax: iptables, libpcap filters, Open vSwitch, …
Add other return actions (to do things more complex than binary “pass or drop”).
Offer an alternative C-generation mode for some compatible filters, where rules would not be stored into an array and checked one after the other but instead put into a hash table, in order to avoid sequential lookup and gain in performance.
Implement the “optimizations” for the filters mentioned in section Internals (removal of rule duplicates, merging of rules when possible, reordering when relevant, etc.).
Improve test framework from a generic point of view (so we can do other things than just validating filters).
Complete test suite (comparison operators, more JSON-loading tests, …).
Add API functions taking a list of rules and two filter arguments and sorting the rules into the two filters, based on whether they should be offloaded or not (depending on skip_sw keyword for TC for example)?
Documentation:
- Create better (nicer) diagrams in Workflow.
- Generate and publish a better API documentation (Doxygen?).
If you feel like working on this, or if you see other elements to improve, do not hesitate to send a pull request!
Examples and tests provided with the library in its GitHub repository are documented in README files located in the respective examples/ and tests/ directories.