Formalizing Rust
My current research project – and the main topic of my PhD thesis – is about developing a semantic model of the Rust programming language and, most importantly, its type system. Rust is Mozilla’s attempt to find a sweet spot in the design space of programming languages: a language that provides low-level resource management (making it a systems language), is convenient for programmers, and guards against memory errors and thread-safety violations. Others have said and written a lot on why we need such a language, so I won’t lose any more words on this. Let me just use this opportunity for a shameless plug: If you are curious and want to learn Rust, check out Rust-101, a hands-on Rust tutorial I wrote. I am going to assume some basic familiarity with Rust in the following.
Why do we want to do research on Rust? First of all, I’m (becoming) a programming languages researcher, and Rust is an interesting new language to study. It’s going to be fun! Honestly, that’s enough of a reason for me. But there are other reasons: It shouldn’t be a surprise that bugs have been found in Rust. There are lots of things that can be done about such bugs – my take on this is that we should try to prove, in a mathematically rigorous way, that no such bugs exist in Rust. This goes hand-in-hand with other approaches like testing, fuzzing and static analysis. However, we (at my research group) are into formalizing things, so that’s what we are going to do as part of the RustBelt research project.
Update: Added link to RustBelt website.
Scope
Let me just get this out of the way at the beginning: The formal model is not going to cover all features of Rust. Some of the missing pieces will hopefully be added later, some maybe not, and some just don’t matter for what I am attempting to do. Our focus is on the type system, on ownership and borrowing and the ideas these concepts are founded on. Many features of Rust are pretty much orthogonal to this, so I won’t attempt to model them accurately. This starts with the language itself: The syntax, the concrete set of primitive operations (except for those that concern pointers), the structure of code with modules and crates – none of this really matters. I have some idea what to do with traits, unwinding and a few other pieces, but trying to get all of that right in the beginning would just be a distraction. In terms of what we are interested in, these features do not significantly change the game.
With Rust introducing the MIR, the compiler actually moved much closer to the kind of minimalist language we are using as a model of Rust. So, there’s some hope we might eventually establish a formal connection between rustc and our work through MIR. But for now, that is out of scope.
The first version is also not going to cover automatic destructors, i.e., the Drop trait and the dropck mechanism. This is, of course, a significant gap, even more so since dropck may well be the most subtle part of the type system, and the current version (as of Rust 1.3) is known to be unsound. But we have to start somewhere, and even without Drop, there is a lot to be done.
So What’s Left?
You may wonder now, what are we even doing then? The goal is to prove that programs following Rust’s rules of ownership, borrowing and lifetimes are memory and thread safe, which, roughly speaking, means that all executed pointer accesses are valid, and there are no data races. I will often just say “memory safety”, but concurrency is definitely part of the story.
The Syntactic Approach and Its Limitations
We could now be looking at the checks the Rust compiler is doing, and prove directly that these checks imply memory safety. Let us call these checks Rust’s type system; then we are looking for the classic result of type soundness: Well-typed programs don’t go wrong. This is of course an important property, and actually, it has been proven already. However, such a proof only gets us so far: The moment your program uses unsafe, this result does not apply any more. It is the whole purpose of unsafe to permit writing dangerous code that the compiler cannot check, and hence such code cannot be covered by any proof relying solely on compiler checks. In the following, we will consider programs containing unsafe as not being well-typed. (We could also consider them well-typed in a more liberal type system that the above result does not, and cannot, apply to. This is closer to what the Rust compiler does, but it’s not helpful in our context.)
Note that this issue is viral: Vec, RefCell, Rc – all these and many more standard library data structures use unsafe, and if your program uses any of them, you’re out of luck.
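To make the pattern concrete, here is a minimal sketch (a hypothetical example, not taken from the standard library) of what such code looks like: a function that contains an unsafe block, so the syntactic soundness result says nothing about it – and yet it is perfectly safe to call, because of the check it performs first.

```rust
// Hypothetical example: `first` uses an `unsafe` block internally, so it does
// not count as (syntactically) well-typed in the sense above. Still, the
// emptiness check makes the unchecked access safe for every possible caller.
fn first(v: &[i32]) -> Option<&i32> {
    if v.is_empty() {
        None
    } else {
        // Sound because we just checked that the slice is non-empty.
        Some(unsafe { v.get_unchecked(0) })
    }
}
```

No safe caller of `first` can observe anything that distinguishes it from the fully checked `v.first()` – which is exactly the situation we are in with `Vec` and friends, just on a much larger scale.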
Semantics to the Rescue
Intuitively, why do we even think that a program that uses Vec, but contains no unsafe block itself, should be safe? It’s not just the compiler checks making sure that, e.g., we don’t call push on a shared borrow. We also rely on Vec behaving well – or, I should rather say, behaving well-typed. Yes, Vec uses unsafe – but it should not be possible to observe this in a well-typed safe program, calling only the interface provided by Vec.
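Here is a small (hypothetical) client to illustrate the division of labor: the borrow checker rules out calling push while a shared borrow is live, and Vec in turn keeps its length and capacity bookkeeping consistent. Together, the unsafe code inside Vec is unobservable from safe code like this.

```rust
// Entirely safe client code: the compiler enforces the aliasing rules,
// and `Vec` upholds its internal invariants. The `unsafe` inside `Vec`
// is invisible from here.
fn client() -> i32 {
    let mut v = Vec::new();
    v.push(1);
    v.push(2);
    let first = v[0]; // copy the element out; no borrow of `v` remains live
    v.push(3);        // so this mutation is accepted
    // let r = &v[0]; v.push(4); println!("{}", r);
    // ^ this variant is rejected: it mutates `v` while `r` borrows into it
    first + v[2]
}
```

The commented-out variant is exactly the kind of program the compiler checks exist to rule out; everything else is Vec’s responsibility.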
There are some rules that Rust enforces, some invariants that safe, well-typed code automatically upholds, that people writing unsafe code have to maintain themselves.
Vec performs checks and bookkeeping to get its tracking of the size and capacity of the container right. RefCell counts the number of outstanding borrows, to make sure at run-time that no two mutable borrows of the same cell exist. Rc counts the number of references that are held, and only when that count reaches 0 does it deallocate the memory. Crucially though, it only hands out immutable, shared borrows to its content.
If any of these checks were removed, the data structure would clearly be unsafe, and even safe code could witness that. But if the Rust authors got all these checks right, then safe code cannot even tell that unsafe operations are being performed.
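To see what “getting the checks right” means, here is a deliberately simplified sketch of the idea behind RefCell – not its real implementation, and with a closure-based interface so we don’t need Drop (which is out of scope here anyway): a run-time flag replaces the compile-time aliasing check.

```rust
use std::cell::{Cell, UnsafeCell};

// Simplified sketch of RefCell's idea: `borrows` > 0 counts shared borrows,
// -1 marks an active mutable borrow. (A real implementation would also
// restore the flag on panic, e.g. via a guard value.)
struct MiniRefCell<T> {
    borrows: Cell<isize>,
    value: UnsafeCell<T>,
}

impl<T> MiniRefCell<T> {
    fn new(value: T) -> Self {
        MiniRefCell { borrows: Cell::new(0), value: UnsafeCell::new(value) }
    }

    // Run `f` on a shared borrow of the contents.
    fn with_ref<R>(&self, f: impl FnOnce(&T) -> R) -> R {
        assert!(self.borrows.get() >= 0, "already mutably borrowed");
        self.borrows.set(self.borrows.get() + 1);
        // Sound only because the flag rules out a live mutable borrow.
        let result = f(unsafe { &*self.value.get() });
        self.borrows.set(self.borrows.get() - 1);
        result
    }

    // Run `f` on a mutable borrow of the contents.
    fn with_mut<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
        assert!(self.borrows.get() == 0, "already borrowed");
        self.borrows.set(-1);
        // Sound only because the flag rules out any other live borrow.
        let result = f(unsafe { &mut *self.value.get() });
        self.borrows.set(0);
        result
    }
}
```

Delete either assert! and safe code can obtain two aliasing mutable borrows – exactly the kind of observation that must be impossible.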
This is something that safe code and properly written unsafe code have in common: If you consider only their externally observable behavior, for all intents and purposes, both are well-typed.
We call this semantic well-typedness.
The core idea is that we can define the inhabitants of a type solely in terms of what one can do to them.
For example, to check that a function is semantically well-typed, you check what it does on every well-typed input, and make sure the output makes sense.
It doesn’t matter how the function performs these operations.
This is in contrast to the syntactic notion of well-typedness of a function, which involves looking at the code of the function and checking it against a bunch of rules.
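A tiny (hypothetical) example of the distinction: checking the body of the function below against the safe typing rules fails, because it goes through raw pointers. But on every pair of well-typed inputs, its observable behavior is exactly that of the safe std::mem::swap – so semantically, it is well-typed.

```rust
use std::ptr;

// Syntactically ill-typed (it needs `unsafe`), semantically well-typed:
// for all well-typed inputs, the output and side effects are those of a swap.
fn my_swap<T>(a: &mut T, b: &mut T) {
    unsafe {
        // Sound because `&mut` references are always valid, properly
        // aligned pointers to initialized values.
        ptr::swap(a as *mut T, b as *mut T);
    }
}
```

Semantic typing only asks what my_swap does to its inhabitants-of-a-type inputs; it never looks at how the body is written.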
Only recently, research has been able to scale up such semantic methods to languages that combine state (i.e., mutable variables, a heap) with higher-order functions – languages like Rust, where I can take a closure (a Fn{,Mut,Once}) and store it in memory, for later use.
Luckily, my advisor Derek Dreyer is one of the world experts in this field, which permits me to intensively study Rust (which I would have done anyway) and call it research!
What We Are Doing
So, that’s what we are trying to do: Come up not only with a formal version of Rust’s (syntactic) type system, but also with appropriate semantics for these types.
We then have to show that the syntactic types make semantic sense. This recovers, in a very roundabout way, the result I mentioned above: Every well-typed Rust program (that doesn’t use unsafe, not even indirectly) is memory safe.
However, now that we have these semantic types, we can go even further and prove that Vec, Rc, RefCell and others are semantically well-typed.
At this point, it may well turn out that our semantic model is too restricted and cannot account for all of the Rust data structures that people believe to be safe.
In this case, we have to go back to step one and change the semantics of our types, until they actually match the intuition of the “Rust invariants” people developed over the years.
By combining this manual proof with the first result covering all syntactically well-typed Rust programs, we finally obtain the desired proof that safe programs using only the unsafe bits that were proven correct are memory safe. No matter what the safe client does with the exposed API of the unsafe data structures, we then know in a mathematically rigorous way that everything will be all right. Isn’t this a beautiful theorem? I certainly think so, even with the caveats described above.

We’ve only recently started this though; there is still a long way to go – lots of fun to be had. I will track my progress, in very high-level terms, in this Rust issue.
Once we have a proper semantic model, we also hope to be able to translate some of these insights back into the vocabulary of Rust programmers, and tell them what it is they have to check when writing unsafe Rust code. Right now, these exact rules are not clear even to the Rust team at Mozilla. Hopefully, they are eventually going to be spelled out in The Rustonomicon and guide future Rust programmers in their unsafe endeavors.
Posted on Ralf's Ramblings on Oct 12, 2015.
Comments? Drop me a mail or leave a note on reddit!