Classifiers
Source Code
Full code for the example in this chapter is available here
What is a Classifier in eBPF?
Classifier is a type of eBPF program that attaches to queueing disciplines (often referred to as qdiscs) in Linux kernel networking, which allows it to make decisions about packets traversing the network interface associated with that qdisc.
For each network interface, there are separate qdiscs for ingress and egress traffic. When attaching a Classifier program to an interface, you choose whether it should handle ingress or egress traffic.
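For instance, you can manage an interface's qdiscs manually with the `tc` tool (assuming an interface named `eth0`). The `clsact` qdisc used later in this chapter is a pseudo-qdisc that exposes both an ingress and an egress hook for classifiers:

```console
$ tc qdisc show dev eth0              # list the qdiscs on eth0
$ sudo tc qdisc add dev eth0 clsact   # add the clsact qdisc (ingress + egress hooks)
$ sudo tc qdisc del dev eth0 clsact   # remove it again
```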
What's the difference between Classifiers and XDP?
- Classifier is older than XDP: it has been available since kernel 4.1, while XDP only since 4.8.
- Classifier can inspect both ingress and egress traffic. XDP is limited to ingress.
- XDP provides better performance, because it's executed earlier: it receives a raw packet straight from the NIC driver, before the packet enters any layer of the kernel networking stack and gets parsed into an `sk_buff` structure.
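For comparison, here is a minimal sketch of the two entry-point signatures in Aya (omitting the `#![no_std]`/`#![no_main]` boilerplate and panic handler that a real eBPF crate needs; the function names are our own illustration):

```rust
use aya_ebpf::{
    bindings::{xdp_action, TC_ACT_PIPE},
    macros::{classifier, xdp},
    programs::{TcContext, XdpContext},
};

// An XDP program returns an xdp_action and operates on the raw packet.
#[xdp]
pub fn pass_xdp(_ctx: XdpContext) -> u32 {
    xdp_action::XDP_PASS
}

// A Classifier returns a TC_ACT_* verdict and operates on an sk_buff-backed context.
#[classifier]
pub fn pass_tc(_ctx: TcContext) -> i32 {
    TC_ACT_PIPE
}
```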
Example project
In contrast to the XDP example, let's write a program that can drop egress traffic.
Design
We're going to:
- Create a `HashMap` that will act as a blocklist.
- Check the destination IP address from the packet against the `HashMap` to make a policy decision (pass or drop).
- Add entries to the blocklist from userspace.
eBPF code
The program code starts with a definition of the `BLOCKLIST` map. To enforce the policy, the program looks up the destination IP address in that map. If a map entry for that address exists, we drop the packet. Otherwise, we pipe it with the `TC_ACT_PIPE` action, which means accepting it on our side while still letting the packet be inspected by other Classifier programs and qdisc filters.
TC_ACT_OK
It's also possible to accept the packet while bypassing the remaining programs and filters by returning `TC_ACT_OK`. We recommend that option only if you are absolutely sure that your program should take precedence over the other programs or filters.
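To make the trade-off concrete, here is a minimal sketch; the `verdict` helper and its `bypass_other_filters` flag are our own illustration, not part of the example program:

```rust
use aya_ebpf::bindings::{TC_ACT_OK, TC_ACT_PIPE, TC_ACT_SHOT};

// Illustrative helper: pick a tc action based on the policy decision.
fn verdict(blocked: bool, bypass_other_filters: bool) -> i32 {
    if blocked {
        TC_ACT_SHOT // drop the packet
    } else if bypass_other_filters {
        TC_ACT_OK // accept immediately; later classifiers and filters are skipped
    } else {
        TC_ACT_PIPE // accept, but let later classifiers and filters inspect it too
    }
}
```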
Here's what the eBPF code looks like:
```rust linenums="1" title="tc-egress-ebpf/src/main.rs"
#![no_std]
#![no_main]
use aya_ebpf::{
bindings::{TC_ACT_PIPE, TC_ACT_SHOT},
macros::{classifier, map},
maps::HashMap,
programs::TcContext,
};
use aya_log_ebpf::info;
use network_types::{
eth::{EthHdr, EtherType},
ip::Ipv4Hdr,
};
#[map]
static BLOCKLIST: HashMap<u32, u32> = HashMap::with_max_entries(1024, 0); // (1)
#[classifier]
pub fn tc_egress(ctx: TcContext) -> i32 {
match try_tc_egress(ctx) {
Ok(ret) => ret,
Err(_) => TC_ACT_SHOT,
}
}
fn block_ip(address: u32) -> bool {
unsafe { BLOCKLIST.get(&address).is_some() }
}
fn try_tc_egress(ctx: TcContext) -> Result<i32, ()> {
let ethhdr: EthHdr = ctx.load(0).map_err(|_| ())?;
match ethhdr.ether_type {
EtherType::Ipv4 => {}
_ => return Ok(TC_ACT_PIPE),
}
let ipv4hdr: Ipv4Hdr = ctx.load(EthHdr::LEN).map_err(|_| ())?;
let destination = u32::from_be(ipv4hdr.dst_addr);
    let action = if block_ip(destination) { // (2)
TC_ACT_SHOT
} else {
TC_ACT_PIPE
};
info!(&ctx, "DEST {:i}, ACTION {}", destination, action);
    Ok(action) // (3)
}
#[cfg(not(test))]
#[panic_handler]
fn panic(_info: &core::panic::PanicInfo) -> ! {
loop {}
}
```
1. Create our map.
2. Check if we should allow or deny our packet.
3. Return the correct action.
Userspace code
The purpose of the userspace code is to load the eBPF program, attach it to the given network interface, and then populate the map with an address to block.
In this example, we'll block all egress traffic going to `1.1.1.1`.
Here's what the code looks like:
```rust linenums="1" title="tc-egress/src/main.rs"
use std::net::Ipv4Addr;
use aya::{
include_bytes_aligned,
maps::HashMap,
programs::{tc, SchedClassifier, TcAttachType},
Bpf,
};
use aya_log::BpfLogger;
use clap::Parser;
use log::{info, warn};
use tokio::signal;
#[derive(Debug, Parser)]
struct Opt {
#[clap(short, long, default_value = "eth0")]
iface: String,
}
#[tokio::main]
async fn main() -> Result<(), anyhow::Error> {
let opt = Opt::parse();
env_logger::init();
// This will include your eBPF object file as raw bytes at compile-time and load it at
// runtime. This approach is recommended for most real-world use cases. If you would
// like to specify the eBPF program at runtime rather than at compile-time, you can
// reach for `Bpf::load_file` instead.
#[cfg(debug_assertions)]
let mut bpf = Bpf::load(include_bytes_aligned!(
"../../target/bpfel-unknown-none/debug/tc-egress"
))?;
#[cfg(not(debug_assertions))]
let mut bpf = Bpf::load(include_bytes_aligned!(
"../../target/bpfel-unknown-none/release/tc-egress"
))?;
if let Err(e) = BpfLogger::init(&mut bpf) {
// This can happen if you remove all log statements from your eBPF program.
warn!("failed to initialize eBPF logger: {}", e);
}
// error adding clsact to the interface if it is already added is harmless
// the full cleanup can be done with 'sudo tc qdisc del dev eth0 clsact'.
let _ = tc::qdisc_add_clsact(&opt.iface);
let program: &mut SchedClassifier =
bpf.program_mut("tc_egress").unwrap().try_into()?;
program.load()?;
program.attach(&opt.iface, TcAttachType::Egress)?;
// (1)
let mut blocklist: HashMap<_, u32, u32> =
HashMap::try_from(bpf.map_mut("BLOCKLIST").unwrap())?;
// (2)
let block_addr: u32 = Ipv4Addr::new(1, 1, 1, 1).try_into()?;
// (3)
blocklist.insert(block_addr, 0, 0)?;
info!("Waiting for Ctrl-C...");
signal::ctrl_c().await?;
info!("Exiting...");
Ok(())
}
```
1. Get a reference to the map.
2. Create an `Ipv4Addr`.
3. Populate the map with the remote IP addresses to which we want to block egress traffic.
The third step comes down to getting a reference to the `BLOCKLIST` map and calling `blocklist.insert`. Using the `Ipv4Addr` type lets us parse the human-readable representation of an IP address and convert it to a `u32`, which is an appropriate type to use as an eBPF map key.
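As a standalone illustration of that conversion (plain Rust, nothing eBPF-specific):

```rust
use std::net::Ipv4Addr;

fn main() {
    // Parse the human-readable form and convert it to the u32 the map expects.
    let addr: Ipv4Addr = "1.1.1.1".parse().unwrap();
    let key: u32 = addr.into();
    assert_eq!(key, 0x0101_0101);
    println!("{addr} -> {key:#010x}");
}
```

On the eBPF side, `u32::from_be(ipv4hdr.dst_addr)` yields the same host-order value, so both sides agree on the map key.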
Running the program
```console
$ RUST_LOG=info cargo xtask run
LOG: DEST 1.1.1.1, ACTION 2
LOG: DEST 35.186.224.47, ACTION 3
LOG: DEST 35.186.224.47, ACTION 3
LOG: DEST 1.1.1.1, ACTION 2
LOG: DEST 168.100.68.32, ACTION 3
LOG: DEST 168.100.68.239, ACTION 3
LOG: DEST 168.100.68.32, ACTION 3
LOG: DEST 168.100.68.239, ACTION 3
LOG: DEST 1.1.1.1, ACTION 2
LOG: DEST 13.248.212.111, ACTION 3
```
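With the program attached, traffic to the blocked address should be dropped while everything else flows normally. You can verify this from another terminal (assuming your default route goes through the chosen interface; `8.8.8.8` is just an arbitrary unblocked address):

```console
$ ping -c 3 -W 1 1.1.1.1   # expected to time out while the program runs
$ ping -c 3 -W 1 8.8.8.8   # other destinations remain reachable
```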