Rust + Linux Kernel: Building a Touchpad Driver Verifier
After months of theoretical study on Linux kernel development and Rust programming, I finally decided to get my hands dirty with a practical project. This post details my journey creating a Rust-based kernel driver for verifying touchpad functionality on Linux systems - a project that served as my introduction to low-level kernel programming with Rust.
The Project: Input Device Verification in Kernel Space
The goal was straightforward but ambitious: develop a kernel module in Rust that could verify if input device drivers (particularly touchpads) are functioning correctly. This would involve:
- Scanning for input devices in the system
- Identifying touchpad hardware
- Verifying if the appropriate drivers are loaded
- Checking if the touchpad responds correctly to queries
- Testing if events can be received from the device
What made this particularly interesting was doing all of this from kernel space, where you don’t have the luxury of high-level abstractions available in user space.
The Learning Curve: From Theory to Practice
As with most complex technical endeavors, the gap between theory and practice turned out to be substantial. Here’s what my journey looked like:
Direct Rust to Kernel Approach: Initial Roadblock
My first instinct was to use Rust directly in the kernel, following the emerging “Rust for Linux” initiative. I quickly ran into a significant obstacle: my host system’s kernel had been built without Rust support. This is a common situation, since Rust support in the mainline kernel is still relatively new and not enabled by default in most distributions.
After weighing my options, I decided to pivot to a more practical hybrid approach.
The Hybrid Approach: Marrying Rust and C
The solution I settled on was to create a hybrid module where:
- C code would handle the direct kernel interactions and provide the module entry points
- Rust code would implement the core verification logic
- A well-defined FFI (Foreign Function Interface) would bridge the two languages
This approach offered several advantages:
- It worked with any kernel version, not just those with built-in Rust support
- It allowed me to leverage Rust’s safety features for the core logic
- The C wrapper handled kernel-specific interfaces that are still awkward in Rust
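A minimal sketch of the Rust side of this split (symbol names are illustrative, not the actual module’s): the C wrapper owns `module_init`/`module_exit` and calls a C-ABI entry point that Rust exports.

```rust
// Rust side of a hypothetical hybrid module. The C wrapper registered
// with the kernel calls this exported, C-ABI function; kernel-specific
// work flows back through `extern "C"` helper declarations.

#[no_mangle]
pub extern "C" fn verifier_run() -> i32 {
    // Flatten the Rust Result into the errno-style integer the C side expects.
    match run_verification() {
        Ok(()) => 0,
        Err(code) => code, // already a negative errno-style value
    }
}

// Core verification logic stays in safe Rust.
fn run_verification() -> Result<(), i32> {
    Ok(())
}
```

The `#[no_mangle]` + `extern "C"` pair is what gives the C wrapper a stable, predictable symbol to link against in the static library.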
Wrestling with no_std and Kernel Macros
One of the most educational aspects was learning to write Rust code that could run in the kernel environment:
```rust
#![no_std]
#![feature(allocator_api)]

use core::panic::PanicInfo;

// Panic handler, required in no_std builds.
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}
```
The `no_std` attribute was necessary because the standard library isn’t available in kernel space. This meant I couldn’t use many familiar Rust features that depend on the standard library.
I had to learn to create special macros for kernel operations, like this one for printing to the kernel log:
```rust
// Note: alloc::format! allocates, so this also requires `extern crate alloc`
// and a registered #[global_allocator] in the no_std build.
#[macro_export]
macro_rules! kprint {
    ($($arg:tt)*) => ({
        // C helper exported by the wrapper, which forwards to the kernel log.
        extern "C" {
            fn kernel_print(msg: *const u8, len: usize);
        }
        let msg = alloc::format!($($arg)*);
        let bytes = msg.as_bytes();
        unsafe {
            kernel_print(bytes.as_ptr(), bytes.len());
        }
    });
}
```
Understanding Rust Compilation for Kernel Targets
Perhaps the most enlightening part of the project was learning how Rust compilation works when targeting kernel space rather than user space:
- Target Specification: The code needs to be compiled for a specific target that matches the kernel environment.
- Static Linking: All Rust code must be statically linked into the kernel module.
- FFI Boundaries: Every interaction between Rust and C needs careful handling of data types, memory management, and error propagation.
- Cargo Integration: Configuring Cargo to produce output that the kernel build system can use took some experimentation:
```toml
[lib]
name = "driver_verifier"
crate-type = ["staticlib"]

[profile.release]
lto = true
codegen-units = 1
panic = "abort"
debug = true
```
Core Implementation Concepts
Rather than sharing all the implementation details, I’ll focus on the main conceptual challenges I tackled in this project:
Device Discovery
The first challenge was scanning for input devices in sysfs and identifying which ones were touchpads. This required:
- Traversing the /sys/class/input directory to find device entries
- Reading device metadata from sysfs attributes
- Using multiple heuristics to identify touchpad devices:
  - Name-based detection (looking for keywords like “touchpad” or vendor names)
  - Capability-based detection (checking for multi-touch support)
  - Hardware-specific patterns for Acer Nitro 5 devices
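The name-based heuristic can be sketched like this (plain Rust for illustration; the keyword list is an assumption on my part, and the in-kernel version reads the sysfs `name` attribute through C helpers rather than the filesystem API):

```rust
/// Name-based touchpad detection: case-insensitive keyword match on the
/// device name reported under /sys/class/input/*/device/name.
/// The keyword list is illustrative, not exhaustive.
fn looks_like_touchpad(name: &str) -> bool {
    let lower = name.to_lowercase();
    ["touchpad", "trackpad", "synaptics", "elan"]
        .iter()
        .any(|kw| lower.contains(kw))
}
```

On its own this heuristic produces false negatives for oddly named devices, which is exactly why the capability-based check (multi-touch support) runs alongside it.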
Touchpad Verification
Once the touchpad was identified, verifying its functionality involved several stages:
- Checking if the required kernel modules were loaded (e.g., hid_multitouch, i2c_hid)
- Verifying if the device node responded correctly to basic queries
- Testing if the device could generate input events
This multi-layered verification approach provided a comprehensive check of the touchpad’s operational status.
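The first stage, checking for required modules, can be sketched as a small parsing helper (user-space illustration: /proc/modules lines begin with the module name, so the listing would come from reading that file; the kernel-side check goes through C helpers instead):

```rust
/// First verification stage: decide whether a required kernel module
/// (e.g. hid_multitouch or i2c_hid) appears in a /proc/modules-style
/// listing, where each line begins with the module name.
fn module_loaded(proc_modules: &str, module: &str) -> bool {
    proc_modules
        .lines()
        .any(|line| line.split_whitespace().next() == Some(module))
}
```

Matching the whole first token matters: a `contains` check would wrongly report `i2c_hid` as loaded when only `i2c_hid_acpi` is present.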
FFI Interface Design
The bridge between Rust and C was perhaps the most delicate part of the implementation. It required:
- Careful declaration of external C functions in Rust
- Proper type marshaling across the language boundary
- Memory safety considerations when passing data between languages
- Error handling that worked across the FFI boundary
This interface allowed me to use Rust for the core logic while relying on C for direct kernel interactions.
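One pattern for the error-handling point above is flattening Rust results into a C-friendly status integer at the boundary (a sketch; the variant names and values here are hypothetical, not the project’s actual codes):

```rust
/// Status codes shared with the C side, mirroring the kernel's convention
/// of 0 for success and negative values for errors. repr(i32) pins the
/// layout so the values are stable across the FFI boundary.
#[repr(i32)]
#[derive(Debug, Clone, Copy)]
pub enum VerifyStatus {
    Ok = 0,
    NoDevice = -1,
    DriverMissing = -2,
    NoEvents = -3,
}

/// Convert a Rust Result into the integer the C wrapper expects.
pub fn to_status(res: Result<(), VerifyStatus>) -> i32 {
    match res {
        Ok(()) => VerifyStatus::Ok as i32,
        Err(e) => e as i32,
    }
}
```

Keeping the conversion in one place means the rest of the Rust code can use ordinary `Result` types and only the outermost entry point deals in raw integers.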
Lessons Learned and Limitations
This project taught me several valuable lessons:
Kernel Version Dependency: The hybrid approach, while more compatible than pure Rust in the kernel, still has dependencies on specific kernel interfaces that can change between versions.
Rust Safety in Kernel Space: While Rust provides safety guarantees, working at the kernel level requires a significant amount of unsafe code for FFI interactions.
Debug Challenges: Debugging kernel code is fundamentally different from user space applications. Kernel panics and bugs can require system reboots.
Alternative Approaches: For truly kernel-agnostic solutions, technologies like eBPF might be more appropriate, as they provide a more stable interface across kernel versions.
Connection to My Thesis Work
This project isn’t just a standalone learning exercise. It’s closely connected to my Master’s thesis on a Rust-based log collector for identifying potential Advanced Persistent Threats (APTs) in the kernel. The knowledge gained from this driver project will directly inform the development of that more complex system.
By understanding how to interact with kernel structures, monitor device behavior, and bridge the gap between Rust safety and kernel requirements, I’m building the foundation for a more sophisticated system that can detect anomalies in kernel behavior.
Next Steps
This project was a valuable stepping stone, but my next endeavor will be considerably more ambitious. I’m planning to develop a Rust-based kernel syscall hooking system using eBPF, specifically leveraging the Aya framework.
Why this direction?
eBPF Advantages: Unlike traditional kernel modules, eBPF programs can be loaded and unloaded dynamically without rebooting and work across different kernel versions, making them more flexible and portable.
Syscall Hooking: By monitoring and potentially intercepting system calls, I can gain deeper insights into system behavior and potentially detect suspicious activities.
Aya Framework: This Rust framework for eBPF development provides a safer and more ergonomic way to write eBPF programs compared to traditional C-based approaches.
Thesis Alignment: This approach aligns perfectly with my Master’s thesis work on APT detection, as syscall monitoring is a powerful method for identifying unusual or potentially malicious behavior.
This next project will build directly on what I’ve learned about kernel structures and Rust/kernel interactions, while moving to a more modern and flexible approach with eBPF.
Conclusion
Building a Rust-based kernel driver was an enlightening journey that bridged the gap between theoretical knowledge and practical implementation. The combination of Rust’s safety features with low-level kernel programming provides a powerful toolkit for system-level programming.
Despite the challenges, the project validated my belief that Rust has an important role to play in the future of operating system development. As the ecosystem matures and kernel support improves, we’ll likely see more components written in Rust, bringing improved safety and reliability to this critical layer of our computing infrastructure.
The code for this project is available on GitHub.