Out of spite, I created kernel modules using Rust. The exact context is very contrived. However, when I do something that is not recommended, I will let you know. Keep in mind that some of the topics this post covers are irrelevant to most people. I will provide resources for the information you need. You should be able to go from never having touched anything kernel-related to creating complete modules. Keep in mind that this post was written on 3/17/2023 and can easily be out of date by the time you decide to make modules.

Extra Note: 2023-12-24: This post used to be generated as a standalone page and looked slightly different. However, it now exists on my blog. I normally don’t have a TOC, but it is important for this post, so here you go:

Everything you might need

In short, everything you might need to know about kernel programming can be found here. However, the documents are massive, and if you don’t know where to look, they can be overwhelming. For example, the website even has a section on Rust, but it is very general and not Rust-specific. The bottom of this page has extra resources that you might be interested in.


To start with kernel programming in Rust, a few things are required. The very first step is to create a VM. There are not a ton of resources out there for Rust-specific kernel development, but there are some. The first important resource is a tutorial on setting up the environment. After following this tutorial, everything is set up for building modules in Rust! Using QEMU is ideal for speeding up the process. Alternatively, if you don’t need a graphical interface, you can use a premade image that is already set up. Just make sure to update the repo source and any other dependencies. The image and instructions can be found in this drive folder. It is the same information used in the Module examples section.

Module examples

For an example of how to create modules in rust, there are two resources:

One thing to keep in mind is that some objects have been renamed; mainly, Ref is now Arc. The speaker ran out of time before they could finish explaining everything. However, the finished code can be found here. Another person tried to continue the explanation. Their code might not work for you, so be cautious. But at least it talks about some interesting topics if you want to implement them, such as watching a file for changes to automate builds. That tutorial can be found here.

In the environment video, you activate a simple echo server. It isn’t the only sample though. Go to linux/samples/rust for a list of examples. These will be the best examples you get. The same guy who made the video has a repo of some samples. They are not the most helpful, but just in case, you can find them here.

Out-of-tree modules

It is not recommended to create out-of-tree modules. However, sometimes you have to. To implement them, just follow rust-out-of-tree-module and replace the .rs file with your own. Don’t forget that if you change the name of the file, you must also update the Kbuild file. To activate rust-analyzer for out-of-tree modules, you will need to modify the kernel. The modifications needed are discussed and provided in pull request 914. It is also in the rust-out-of-tree-module repo. I know this looks like an email or something along those lines, but it is an actual link; to be more specific, it is the mailing list.

When I was building my modules, rust-analyzer didn't want to work for any new file I created, but it did work for the Rust files in the Linux repo. To get around this, I took over a file in the Rust samples and replaced it with my code. Then I copied the code over to the out-of-tree repo I was working on. Not ideal, but it works in a pinch.

After making all of the needed modifications to the necessary files, compile the .rs file. Doing so will create a .ko file. Move the file into .../busybox/_install and then re-zip the image. If you would like, you can automate these steps in the Makefile with cp or mv or any other command you want. You can now qemu into the kernel and use insmod on the .ko file. Doing so, you will notice a message on the terminal.

[ 1.0] module: loading out-of-tree module taints kernel.

We can ignore this message and continue as if it doesn’t exist.

The compiler you use for the Linux kernel needs to be the same as the one used on this module. For example, use `LLVM=1` on both of them. If they are not the same (for example, one uses clang and one uses gcc), the make will fail.
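As a sketch (the paths here are assumptions about your layout, not verbatim commands), using the same toolchain for both builds might look like:

```shell
# Build the kernel with clang/LLVM...
cd linux && make LLVM=1 -j$(nproc)

# ...and build the out-of-tree module with the same toolchain.
# KDIR is the path to the kernel tree built above (adjust for your setup).
cd ../rust-out-of-tree-module && make KDIR=../linux LLVM=1
```

Mixing toolchains (gcc for one, clang for the other) is what produces the failure described above.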

Proc files

While I was developing my kernel modules, I happened to need proc files. In most cases you wouldn’t use them. However, if you happen to need to, here are the steps to get them set up.

Please do not ever use procfs for any driver implementation. sysfs is the correct place to put new information, never in /proc (which is for processes only.)  

This should not be needed for the rust bindings at all, sorry.

Already made

proc files are not implemented by default. However, you can implement them yourself. That danger warning comes from this pull request. There are two commits: the file, and an example of creating a proc file. You don’t need to create a root directory, as parent can be None (instead of Some(&root)). Full warning: the implementation in the above commits is incorrect as far as I can tell. The way it associates context is not correct. Be on the lookout for your context being replaced with a null pointer. Luckily, you don’t need to assign proc with a context.

Building proc

The easiest way is to go to that pull request, go to the branch it comes from, and just clone that repo. Everything should be the same in terms of how to build the kernel. However, I added the changed files into the most recent version of the main repo. Things to keep in mind if you do that:

  • into_pointer -> into_foreign
  • from_pointer -> from_foreign
  • PointerWrapper -> ForeignOwnable

Some of the names changed, and some might change again in the future. This was written on 3/17/2023, but hopefully it doesn’t change again.

One thing you might notice is that the kernel module sample was made in-tree. Before you set up out-of-tree modules, doing it that way is fine. Just enable it from the samples, like how the environment tutorial does it.

Dev files

Ideally, the above section is irrelevant because you are able to use dev files instead of proc files. If that is the case, you can look at the samples code to see how it is done.

Writing to a writer

When a user tries to read from a file, the system writes to the buffer being read from. In most examples, the logic is not explained. In a simple sense, the system writes bytes onto the buffer and then returns how many bytes were written. The samples just write a 1 onto the buffer. A simple example of writing dynamic information might look like the following:

use core::format_args;
use kernel::file::{File, Operations};
use kernel::io_buffer::IoBufferWriter;
use kernel::prelude::*;
use kernel::str::CString;

impl Operations for Test {
  fn open(_shared: &(), _file: &File) -> Result<()> {
    Ok(())
  }

  fn read(_shared: (), _: &File, data: &mut impl IoBufferWriter, offset: u64) -> Result<usize> {
    if data.is_empty() || offset != 0 {
      return Ok(0);
    }

    let name = "Your name";
    let output_string = CString::try_from_fmt(format_args!("Hello {}", name)).unwrap();
    let file_output: &[u8] = output_string.as_bytes_with_nul();

    // Copy the bytes into the reader's buffer and report how many were written.
    data.write_slice(file_output)?;
    Ok(file_output.len())
  }
}

If you want to experiment, remove the first if statement. Don’t worry, it won’t break anything. But it is easiest to understand what it is doing by just trying it.

// Try to remove this
if data.is_empty() || offset != 0 {
  return Ok(0);
}

Bindings and Abstractions

Abstractions are Rust code wrapping kernel functionality from the C side. In order to use functions and types from the C side, bindings are created. Bindings are the declarations for Rust of those functions and types from the C side. For instance, one may write a Mutex abstraction in Rust which wraps a struct mutex from the C side and calls its functions through the bindings. Abstractions are not available for all the kernel internal APIs and concepts, but it is intended that coverage is expanded as time goes on. “Leaf” modules (e.g. drivers) should not use the C bindings directly. Instead, subsystems should provide as-safe-as-possible abstractions as needed.

The bindings are auto generated based on a specific header file. If you implemented proc, you would have to modify rust/bindings/bindings_helper.h. If there is a specific API you need access to, expose it in the header file and recompile the kernel. It should be added to one of the auto generated files. An example of some generated bindings would be time keeping.
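For instance, a change to the helper header might look like the following (the exact header is an assumption based on which API you are exposing):

```c
/* rust/bindings/bindings_helper.h (excerpt) */
/* Hypothetical addition: expose the procfs API to bindgen. */
#include <linux/proc_fs.h>
```

After recompiling, the declarations from that header should show up in the auto-generated bindings.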


Everything you need can be found here

Only use syscalls if there is no other alternative. It is highly preferred to use another interface before adding a system call. (System calls are still important though.)

The above link is very hard to read in plain English, especially if you are new to kernels. These are the steps you need to create and bind a system call with Rust. The steps are easy to follow but hard to find. The bad news is that you will have to write a little bit of C code for this to work. The steps below are not meant to explain what things do, but to show them in Rust. This is an MVP of a system call. You can look into other resources:

Modify the Kernel (start here)

The root, moving forward, is the root of the Rust-for-Linux/linux repository.


At the bottom, add the line:

asmlinkage long sys_initialize_mod(void);


Modify the syscall table to have the info of our new syscall. We add the entry at the very end of the file:

548 common initialize_mod        sys_initialize_mod

And now it sits in the table

Make a new folder example_syscall

This should go without saying, but you can call it whatever you want. Make a new C file example_syscall/test_call.c. That file will have code that looks like:

#include <linux/linkage.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/syscalls.h>

/* System call stub: the Rust module will fill this in when loaded. */
long (*STUB_initialize_mod)(void) = NULL;
EXPORT_SYMBOL(STUB_initialize_mod);

SYSCALL_DEFINE0(initialize_mod)
{
  if (STUB_initialize_mod != NULL)
    return STUB_initialize_mod();
  return -ENOSYS;
}

SYSCALL_DEFINEn (where n is the number of arguments) is a C macro that does a lot of the complicated logic for you. This could be written in Rust, but it should not be. Next, make a Makefile in that same folder. Only one line is needed for this file:

obj-y := test_call.o

./ (root)

The root Makefile needs to be aware of the new folder and file. To do this, modify the Makefile in the root directory.

< core-y        :=
> core-y        := example_syscall/

(To make it clearer: find core-y and add the new folder to the right of the :=.) And that is everything that needs to be done for preparation. Go ahead and re-compile the kernel:

make LLVM=1 -j`nproc`

At this point, all of the steps are basically the same as if you were doing it in C. The next part is where we start writing Rust code.

Minimal rust code

The following is a minimal example

//! A minimal example defining the system call function

use kernel::prelude::*;
use kernel::ThisModule;

extern "C" {
    static mut STUB_initialize_mod: Option<extern "C" fn() -> i64>;
}

pub extern "C" fn initialize_mod() -> i64 {
    pr_info!("Initialize module\n");
    100
}

module! {
    type: Example,
    name: "Example",
    author: "Joshie",
    description: "System call example",
    license: "GPL",
}

struct Example;

impl kernel::Module for Example {
    fn init(_name: &'static CStr, _module: &'static ThisModule) -> Result<Self> {
        pr_info!("Loaded My example\n");
        unsafe {
            STUB_initialize_mod = Some(initialize_mod);
        }
        Ok(Example)
    }
}

impl Drop for Example {
    fn drop(&mut self) {
        pr_info!("Unloaded example\n");
        unsafe {
            STUB_initialize_mod = None;
        }
    }
}
There are two important sections here. The first section is:

extern "C" {
    static mut STUB_initialize_mod: Option<extern "C" fn() -> i64>;
}

pub extern "C" fn initialize_mod() -> i64 {
    pr_info!("Initialize module\n");
    100
}

Here we are declaring C symbols inside Rust. Don’t worry; just because it is inside an `extern "C"` block doesn’t change much. We can write the function’s body as if it is normal Rust. I will try to break down what is happening. The first declaration, STUB_initialize_mod, defines a static mutable variable whose type is an Option of a function that returns a long (i64). The reason it is optional is so it can start as None, and so we can return it to None when the module is dropped.

This is easy to follow if the function doesn’t take any parameters. But if it does, just add their type into the parameters of this virtual function

extern "C" {
    static mut STUB_initialize_mod: Option<extern "C" fn(i32, i32) -> i64>;
}

This is saying that our function takes 2 integers and returns a long.

The other important section is inside the init of the module

unsafe {
    STUB_initialize_mod = Some(initialize_mod);
}

Here we are setting the symbol STUB_initialize_mod to the function initialize_mod. But don’t forget to reset the symbol to None in the drop.

unsafe {
    STUB_initialize_mod = None;
}

If you don’t do this and something invokes the syscall after the module is unloaded, the stub will still point at code that no longer exists, and the kernel will crash.

Compile order matters

To make sure this works, please re-compile the kernel first and then this new module. In general, if you make a change to any file in the kernel, re-compile it. The good thing is that because we are using busybox with our kernel, compile times are very quick. If you don’t have everything properly defined, trying to compile the module will cause an error.

Proving it works

To prove that we actually did anything, let us create and compile some calling C files. The first file is the wrapper:

// wrappers.h
#ifndef __WRAPPERS_H
#define __WRAPPERS_H

#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

#define __NR_INITIALIZE_MOD 548

long initialize_mod(void) {
    return syscall(__NR_INITIALIZE_MOD);
}

#endif

And then some other file to call it:

// caller.c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <string.h>
#include "wrappers.h"

int main(int argc, char **argv) {
    long ret;
    ret = initialize_mod();
    printf("Initialize_mod returned %ld\n", ret);
    return 0;
}
Compile the C file using any compiler (see the section on compiling C code). You can take the generated executable and place it in .../busybox/_install (don’t forget to regenerate the image). Once the file is in the _install folder, it will appear in our kernel. We can insmod the module and then run ./caller to see that it works!

$ insmod example
[ 1.0] example: Loaded My example
$ ./caller
[ 1.001] example: Initialize module
Initialize_mod returned 100

Let us even check that the dropping of our module works too.

$ rmmod example
[ 1.0] example: Unloaded example
$ ./caller
Initialize_mod returned -1

YAY!!! It works. Now we can start working on the logic inside the function.

If you followed these steps and there are no errors but the module messages are not showing up, make sure that `pr_info` ends with `\n`. Without the newline, the message may fail to display when called.


Now that we have syscalls working, we will want to be able to spawn our own threads. It would be nice to have multiple operations going on at the same time. The Rust kernel has threads under Task; a new task is a new thread. If you want to make kernel modules, you are going to need to search the source files and interpret them yourself. Doing that, you would be able to find the following:


Launches 10 threads and waits for them to complete.

use kernel::sync::{CondVar, Mutex};
use kernel::task::Task;

kernel::init_static_sync! {
    static COUNT: Mutex<u32> = 0;
    static COUNT_IS_ZERO: CondVar;
}

fn threadfn() {
    pr_info!("Running from thread {}\n", Task::current().pid());
    let mut guard = COUNT.lock();
    *guard -= 1;
    if *guard == 0 {
        COUNT_IS_ZERO.notify_all();
    }
}

// Set count to 10 and spawn 10 threads.
*COUNT.lock() = 10;
for i in 0..10 {
    Task::spawn(fmt!("test{i}"), threadfn).unwrap();
}

// Wait for count to drop to zero.
let mut guard = COUNT.lock();
while *guard != 0 {
    COUNT_IS_ZERO.wait(&mut guard);
}

Some functions have examples of how to use them built into the documentation. Sometimes it is in a testing section; sometimes the documentation has testing built into it. It depends, and it is not always the most reliable. As time goes on, this should improve. At least this one is easy to follow and implement.
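For comparison, the same countdown pattern can be written in userspace Rust with std (this is an analogy, not kernel code; std's Condvar takes the guard by value, unlike the kernel's CondVar):

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// Spawn 10 threads that each decrement a shared counter, then wait
// on the condition variable until the counter reaches zero.
fn countdown() -> u32 {
    let pair = Arc::new((Mutex::new(10u32), Condvar::new()));

    for _ in 0..10 {
        let pair = Arc::clone(&pair);
        thread::spawn(move || {
            let (count, zero) = &*pair;
            let mut guard = count.lock().unwrap();
            *guard -= 1;
            if *guard == 0 {
                zero.notify_all();
            }
        });
    }

    // Holding the lock while checking the predicate avoids lost wakeups.
    let (count, zero) = &*pair;
    let mut guard = count.lock().unwrap();
    while *guard != 0 {
        guard = zero.wait(guard).unwrap();
    }
    *guard
}

fn main() {
    println!("final count: {}", countdown());
}
```

The structure (lock, decrement, notify when zero, wait in a loop) maps one-to-one onto the kernel example above.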

Thread Safety

You don’t want to be accessing data at the same time another thread is changing it. If you have a vector and the block of memory it occupies changes, you could end up accessing memory you were not supposed to! Or any number of other bad race conditions. Rust makes thread safety easy.

The non safe way

The naïve approach to dealing with global data is to make a static variable and access it whenever. But doing this is “unsafe.” Rust will even warn you about this and force you to put the operation into an unsafe block:

static mut DATA: i32 = 10;
unsafe { DATA += 3; }

While this works, and you can do this, putting data into either an atomic type or a mutex type is better.

The safe way

An example is a mutex. There are other types as well such as CondVar or SpinLock. But SpinLock doesn’t get a link because people consider it slow and inefficient. Why? Because the Linux scheduler is not great.

When do locks release?

A lock gets released as soon as it goes out of scope. In a language like C, you would have to manually acquire and release a lock. But in Rust, as soon as a guard is no longer being used, it gets released.

*DATA.lock() += 3; // DATA: Mutex<i32>

Here, we acquire the guard, dereference the guard and change the value. As soon as this line is done, DATA is available to be accessed.

let mut data_guard = DATA.lock();
*data_guard += 3;

If you access the guard like this, the guard won’t be released until after data_guard is out of scope. But as soon as it is, rust automatically releases the guard. You can make this happen sooner by putting data in a nested context:

{
  let mut data_guard = DATA.lock();
  *data_guard += 3;
}

Where as soon as the block is exited, it releases. But do be wary of loops holding ownership for too long.

while (*DATA.lock()).get_something() { // DATA: Mutex<RandomStruct>
  // loop body: the temporary guard's lifetime here can be surprising
}

Depending on the situation, you might expect that the returned value is the only thing being kept in scope, but the whole guard can be. This might not be your intended behavior.

How to lock up your whole program

Race conditions can still exist even if you use a Mutex. Better yet, you can cause the whole system to halt because of the order locks are taken. Read this guide if you want to know more about race conditions and locking in general.

Easy static Mutex

kernel::init_static_sync! {
  static DATA: Mutex<i32> = 10;
  static SIGNAL: CondVar;
}
The kernel has a nice macro for making static Mutexes. You can assign the value inside the macro. You don’t have to assign anything for a CondVar.

Linked List

This section is specifically for if you want to try making your own implementation. Making your own linked list is generally never needed. However, it is always a fantastic exercise to show you know the language. In this case, a good exercise would be to take the description below, implement it, then make it thread safe. Try to put the linked list into a Mutex and figure out how to handle the errors.

Linked lists are not fully implemented in Rust at the time of writing this. However, they are not impossible to implement yourself. When the native Rust version is finished, it will be nicer. I am sure there is a way to use the current version of linked lists, but I could not figure it out.

Here’s a high-level overview of the steps you might follow to create a linked list queue:

  1. Define a Node struct: A linked list is made up of a series of nodes that hold the data elements of the list, along with pointers to the next node in the list. The Node struct needs to hold two fields: value to store the actual value of the node, and next to store the Option<Box<Node<T>>> type, which will hold a pointer to the next node in the list, or None if there is no next node.
  2. Define the LinkedList struct: This struct will keep track of the head and tail nodes of the list, as well as its length. The head field will store an Option<Box<Node<T>>> type, which will hold a pointer to the first node in the list, or None if the list is empty. The tail field will hold a raw pointer to the last node in the list, or core::ptr::null_mut() if the list is empty. Finally, the len field will hold the length of the list.
  3. Implement the new method: This method will initialize a new empty linked list with head and tail set to None, and len set to 0.
  4. Implement the push method: This method will add a new value to the end of the linked list. First, it will create a new Node containing the value to be added. Then, it will check if the list is empty (i.e., if tail is null_mut()). If it is, it will set head to the new node, and tail to a raw pointer to the new node. If the list is not empty, it will set the next field of the current tail node to the new node, and update tail to a raw pointer to the new node. Finally, it will increment the len field to reflect the new length of the list.
  5. Implement the pop method: This method will remove and return the first value in the linked list, if there is one. First, it will check if the head field is None. If it is, it will return None to indicate that the list is empty. Otherwise, it will remove the first node from the list by setting head to the next field of the current head node. If head is now None, it will also set tail to core::ptr::null_mut() to indicate that the list is now empty. Finally, it will decrement the len field to reflect the new length of the list, and return the value of the removed node.
  6. Implement the len method: This method will simply return the value of the len field, which stores the length of the list.
  7. Implement the Drop trait: This trait allows you to define a custom destructor for the LinkedList struct. In this case, the destructor will simply remove all nodes from the list by repeatedly calling the pop method until the list is empty.

Depending on how you implement the linked list, some steps could be done differently or ignored altogether. There are many ways to make a linked list, especially if you are making it act like a queue.

The above implementation is not for multithreading. To make it thread safe, look into the extra resources section. As a hint, tail would become an AtomicPtr or be protected by a Mutex.
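The steps above might be sketched like this in plain (userspace) Rust; this is one possible implementation of the described queue, not the only one:

```rust
// Step 1: a node holds a value and an owned pointer to the next node.
struct Node<T> {
    value: T,
    next: Option<Box<Node<T>>>,
}

// Step 2: the list owns its head; tail is a raw pointer (null when empty).
pub struct LinkedList<T> {
    head: Option<Box<Node<T>>>,
    tail: *mut Node<T>,
    len: usize,
}

impl<T> LinkedList<T> {
    // Step 3: an empty list.
    pub fn new() -> Self {
        LinkedList { head: None, tail: core::ptr::null_mut(), len: 0 }
    }

    // Step 4: append at the tail.
    pub fn push(&mut self, value: T) {
        let mut node = Box::new(Node { value, next: None });
        let raw: *mut Node<T> = &mut *node; // heap location is stable
        if self.tail.is_null() {
            self.head = Some(node);
        } else {
            // SAFETY: tail is non-null, so it points at the current last node.
            unsafe { (*self.tail).next = Some(node); }
        }
        self.tail = raw;
        self.len += 1;
    }

    // Step 5: remove from the head.
    pub fn pop(&mut self) -> Option<T> {
        self.head.take().map(|node| {
            self.head = node.next;
            if self.head.is_none() {
                self.tail = core::ptr::null_mut();
            }
            self.len -= 1;
            node.value
        })
    }

    // Step 6: report the length.
    pub fn len(&self) -> usize {
        self.len
    }
}

// Step 7: drain the list on drop.
impl<T> Drop for LinkedList<T> {
    fn drop(&mut self) {
        while self.pop().is_some() {}
    }
}

fn main() {
    let mut list = LinkedList::new();
    list.push(1);
    list.push(2);
    println!("len = {}, first = {:?}", list.len(), list.pop());
}
```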

Random pitfalls

Compile C code into your kernel

You can compile code and supply it to busybox by placing it in the _install folder and then generating the image. However, there is one extra step needed for compiling the code: make sure to add the --static flag when compiling.

You can also make a sh script in the folder. For example

watch -n1 cat proc/timer

Just make sure it is executable by doing

chmod +x ./file_name

Format Strings

Format strings are error-prone and easy to exploit, so Rust does not use printf. That is great. However, when working with the kernel, the typical method of formatting strings doesn’t work, because the kernel doesn’t have access to the standard library (std). Look at the proc_fs sample to see an example of how to do a formatted string in the kernel.
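To get a feel for formatting with only core (no std), here is a userspace sketch in the same spirit: a fixed-size buffer implementing core::fmt::Write. This is an illustration of the idea, not the kernel's actual machinery:

```rust
use core::fmt::Write;

// A tiny fixed-size buffer that can be a target for write!,
// using only core (no std::string::String required in principle).
struct BufWriter {
    buf: [u8; 64],
    len: usize,
}

impl Write for BufWriter {
    fn write_str(&mut self, s: &str) -> core::fmt::Result {
        let bytes = s.as_bytes();
        if self.len + bytes.len() > self.buf.len() {
            return Err(core::fmt::Error); // buffer full
        }
        self.buf[self.len..self.len + bytes.len()].copy_from_slice(bytes);
        self.len += bytes.len();
        Ok(())
    }
}

// Format "Hello {name}" into the buffer and return it as an owned String
// (the String is only for easy inspection in this demo).
fn format_hello(name: &str) -> Option<String> {
    let mut w = BufWriter { buf: [0; 64], len: 0 };
    write!(w, "Hello {}", name).ok()?;
    core::str::from_utf8(&w.buf[..w.len]).ok().map(str::to_owned)
}

fn main() {
    println!("{}", format_hello("kernel").unwrap());
}
```

The kernel's CString::try_from_fmt (seen earlier) plays a similar role: it formats into a buffer without std.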

Mutable pointers

Let’s say you have some struct:

struct Example{ a: i32, b: i32 }

And you have some other function that takes a pointer to that struct for the purposes of modifying it and assigning values.

fn get_example(e: *mut Example);

How would you pass in a variable to call the function? First, we will try to do it the:

Direct way (not as good)

You would have to create a pointer to some default or empty data: create a mutable struct, get its pointer, and store that in a mutable pointer variable. What this will do is allocate the storage you will modify on the stack. In practice, this looks like:

let a: *mut Example = &mut Example { a: 0, b: 0 };
unsafe { get_example(a); }

If you don’t assign the variable with a value, the compiler won’t know how much memory needs to be added onto the stack. Doing it this way is the most direct. However, it isn’t the best or easiest to understand. a is now a pointer instead of just the struct Example, so to access its data, you would need to dereference it. This is “not safe” because you are messing with pointers, and that pointer can be anything, even null. Rust is a “safe” language; that is why it doesn’t have syntax like a->a;. Essentially, avoid using an unsafe block if at all possible (there are times it is unavoidable). So, to do it the:

Safe way

Initialize the struct and pass in a mutable reference to the function.

let mut a = Example{ a: 0, b: 0 };
unsafe { get_example(&mut a); }

Here we get to use one less unsafe block since we don’t have to dereference a pointer. (The real-life example happens to be an unsafe function here, but that isn’t always the case.)
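Putting both pieces together as a runnable sketch (get_example here is a made-up stand-in for the real C-side function):

```rust
#[derive(Debug, PartialEq)]
struct Example {
    a: i32,
    b: i32,
}

// A stand-in for a C-side function that fills in the struct through
// a raw pointer. The name and behavior are invented for illustration.
unsafe fn get_example(e: *mut Example) {
    (*e).a = 1;
    (*e).b = 2;
}

// The safe way: a mutable reference coerces to *mut Example at the
// call site, so the only unsafe part is the call itself.
fn fill_example() -> Example {
    let mut a = Example { a: 0, b: 0 };
    unsafe { get_example(&mut a); }
    a
}

fn main() {
    println!("{:?}", fill_example());
}
```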

Rust has no Arrow Operator (->)

As discussed above, Rust doesn’t have an operator that dereferences and accesses data in one step. So, if you ever have to do that, you need to use the long-hand version, i.e. a->b becomes (*a).b.
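A minimal illustration of the translation (the struct and names are made up):

```rust
struct Point {
    x: i32,
}

// In C this would read ptr->x += 1; in Rust it is (*ptr).x += 1.
fn bump_through_pointer() -> i32 {
    let mut p = Point { x: 5 };
    let ptr: *mut Point = &mut p;
    unsafe {
        (*ptr).x += 1;
    }
    p.x
}

fn main() {
    println!("{}", bump_through_pointer());
}
```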

Document your functions

Some functions, when not documented, cause Rust to issue a warning. You can ignore the warning, but it is ideal to address them all. One such warning is that a function is missing documentation. Proper documentation uses a triple slash: ///. From there, it is formatted as markdown. Please take a look at the Rust kernel coding guidelines.
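A generic doc-comment example (plain Rust, not kernel-specific; the function is invented for illustration):

```rust
/// Adds one to the given number.
///
/// # Examples
///
/// ```
/// assert_eq!(add_one(41), 42);
/// ```
pub fn add_one(n: i32) -> i32 {
    n + 1
}

fn main() {
    println!("{}", add_one(41));
}
```

The markdown sections (like # Examples) render in the generated documentation, and in userspace Rust the fenced example even runs as a doc-test.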


Rust specific

If I linked to it and it uses std, there is an equivalent version in the kernel (most likely in core).

Kernel help

These resources are on the kernel specifically. Most of the code you will find in these resources is in C, but they are great for learning about topics and ideas. Ideally, you can translate the concepts into your specific use case.