Sunday, September 8, 2024

AI-generated Linux kernel schedulers in Rust

Overview

Many kernel hackers and OS enthusiasts have long dreamed of designing and running a custom Linux scheduler. However, this has traditionally been highly inaccessible, achievable only by a handful of core kernel developers with years of deep expertise.

What if we could leverage Rust, generative Artificial Intelligence (AI) and Large Language Models (LLMs) to create an AI that can translate high-level scheduling concepts directly into functional kernel code?

State of the art

Using an AI to write functional Linux kernel code can be a bit tricky. There have been experiments using LLMs to review kernel patches (see, for example, Testing AI-enhanced reviews for Linux patches). However, when it comes to generating fully functional code, examples have been limited to producing and fixing simple “hello world” kernel modules and the like.

We are still far from being able to automatically generate a fully functional Linux scheduler, primarily because of the vast amount of knowledge and concepts required, which are scattered throughout the kernel’s source code, itself an already highly complex system to comprehend.

Rust + sched_ext

Recently I’ve been working on improving the usability of scx_rustland_core: a Rust framework, based on sched_ext, that enables the implementation of custom Linux kernel schedulers in Rust. These schedulers run as regular user-space processes and use BPF to channel scheduling events and actions between the kernel and user space.

This framework offers high-level Rust abstractions for the underlying BPF and sched_ext subsystems, enabling developers to concentrate on scheduling concepts, without worrying about the low-level kernel implementation details.

This can make scheduling development much more accessible, as you can simply create a regular Rust project (like any other user-space Rust application) and implement the scheduling policy using the high-level Rust APIs.
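To give an idea of the level of abstraction involved, here is a minimal, self-contained Rust sketch of a FIFO policy. The types below (Task, FifoScheduler) are hypothetical stand-ins, not the actual scx_rustland_core API: they only illustrate the kind of logic the framework lets you focus on, while in the real framework the enqueue/dispatch events are delivered from the kernel via BPF.

use std::collections::VecDeque;

/// A runnable task as seen by the user-space policy (hypothetical type,
/// not the actual scx_rustland_core API).
#[derive(Debug, Clone)]
struct Task {
    pid: i32,
    slice_ns: u64, // time slice assigned to the task, in nanoseconds
}

/// A FIFO policy: tasks are dispatched in the order they were queued.
struct FifoScheduler {
    queue: VecDeque<Task>,
}

impl FifoScheduler {
    fn new() -> Self {
        Self { queue: VecDeque::new() }
    }

    /// Called for every task that the kernel reports as ready to run.
    fn enqueue(&mut self, task: Task) {
        self.queue.push_back(task);
    }

    /// Pick the next task to dispatch: plain first-in, first-out.
    fn dispatch(&mut self) -> Option<Task> {
        self.queue.pop_front()
    }
}

fn main() {
    let mut sched = FifoScheduler::new();

    // Simulate a few scheduling events.
    for pid in 1..=3 {
        sched.enqueue(Task { pid, slice_ns: 5_000_000 });
    }
    while let Some(task) = sched.dispatch() {
        println!("dispatch pid={} slice={}ns", task.pid, task.slice_ns);
    }
}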

Add ChatGPT to the equation

To validate the usability of this framework, I decided to use ChatGPT to see whether the AI was able to produce working schedulers and/or improve them using the high-level Rust API.

For this experiment I used the ChatGPT-4o LLM, providing as input a simple FIFO scheduler implemented on top of scx_rustland_core, with well-documented code and, in particular, a very detailed description of how to use the scheduling framework API. The prompt then includes a request to modify the code based on the requirements specified on the command line (which are simply appended to the prompt).

The new source code is then generated, written to a file (replacing the original implementation), recompiled and executed.
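For illustration, here is a rough Rust sketch of this generate / overwrite / rebuild / run loop. The actual experiment drives it from a Python script (see the demo below); ask_llm() is a hypothetical placeholder for the request to the model, and the source path is an assumption.

use std::fs;
use std::process::Command;

/// Hypothetical helper: sends the current source plus the user's requirements
/// to the LLM and returns the regenerated scheduler source code.
fn ask_llm(current_source: &str, _requirements: &str) -> String {
    // Placeholder: in the actual experiment this is a request to the ChatGPT-4o
    // model; here the source is returned unchanged so the sketch stays runnable.
    current_source.to_string()
}

fn main() -> std::io::Result<()> {
    let src_path = "src/main.rs"; // assumed path of the scheduler implementation
    let requirements: String = std::env::args().skip(1).collect::<Vec<_>>().join(" ");

    // 1. Read the current (well-documented) scheduler source.
    let current = fs::read_to_string(src_path)?;

    // 2. Ask the model for a new implementation matching the requirements.
    let regenerated = ask_llm(&current, &requirements);

    // 3. Overwrite the original implementation with the generated code.
    fs::write(src_path, regenerated)?;

    // 4. Recompile and run the newly generated scheduler.
    let status = Command::new("cargo").args(["build", "--release"]).status()?;
    if status.success() {
        Command::new("cargo").args(["run", "--release"]).status()?;
    }
    Ok(())
}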

All the source code of this experiment is available here: scx_rust_scheduler.

Demo

This demo video shows a simple implementation of this idea.

A Python script sends the initial FIFO scheduler code along with additional requirements to the AI, requesting it to generate a new scheduler that meets the specified criteria.

The AI then produces the updated code, which overwrites the original FIFO scheduler. This new code is compiled and executed, enabling the process to be repeated for multiple iterations by specifying further high-level requirements.

Result

As shown in the video above, the AI was able to enhance the initial FIFO scheduler based on high-level guidance from the user. This reduced the total execution time of a specific multi-threaded message-passing workload from ~5.3 seconds with the initial FIFO scheduler to ~4.9 seconds with the final optimized scheduling policy.

Benchmark:

$ sudo perf bench -f simple sched messaging -t -g 24 -l 2000
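Here -g 24 creates 24 sender/receiver groups, -l 2000 sets the number of message loops, and -t uses threads instead of processes; the reported total execution time is the metric being compared.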

However, keep in mind that this was just a basic example and represents a single, specific workload. Moreover, if the requirements of the scheduling policy become too complex or intricate, the AI is likely to introduce syntax errors or logical mistakes in the generated code.

This could be improved by documenting the initial code in a more comprehensible way. Still, the improvements seen over the multiple iterations in the demo were driven more by the human guidance than by the AI, which acted more like a translator.

Nevertheless, it is quite impressive that instructions could be given at such a high level of abstraction, similar to explaining concepts to a class of students, and that real, functional code capable of replacing the current Linux kernel scheduler was produced and executed in real time, all within just a few seconds.

Conclusion

The goal of this experiment was to demonstrate the ease of use of scx_rustland_core and showcase the potential that the sched_ext technology can offer.

While LLMs aren’t poised to replace human kernel developers anytime soon, they could still serve as valuable tools to lower the entry barrier for kernel development, especially for those passionate about it.

Though this was just a fun experiment, it could provide a great academic playground for students to explore simple scheduling concepts with ease.
