Performant Software Systems with Rust — Lecture 12
Rust’s Fearless Concurrency
Rust’s ownership and type-checking features help address both memory safety and concurrency problems
Many runtime concurrency errors become compile-time errors!
Other programming languages, such as Erlang and Pony, may impose limits or performance tradeoffs
Learn more: Fearless Concurrency
Concurrent vs. Parallel Programming
Concurrent programming: different parts of a program execute independently, but may not be at the same time
Parallel programming: different parts of a program execute at the same time
When we say concurrency here, we mean concurrent and/or parallel programming
Learn more: Fearless Concurrency
Using Threads to Run Code Simultaneously
Multi-threading lets multiple parts of a program run in separate threads, potentially at the same time
But it also leads to subtle problems (race conditions, deadlocks), a classic topic in an OS course
The Rust standard library uses a 1:1 threading model: one OS (kernel) thread per language thread
Learn more: Using Threads to Run Code Simultaneously
Creating a New Thread with spawn
use std::thread;
use std::time::Duration;
fn main() {
    // passing a closure to `spawn`
    thread::spawn(|| {
        for i in 1..10 {
            println!("Printing number {i} from the spawned thread!");
            thread::sleep(Duration::from_secs(1));
        }
    });

    // note: when the main thread ends, the spawned thread is stopped,
    // whether or not it has finished its loop
    for i in 1..5 {
        println!("Printing number {i} from the main thread!");
        thread::sleep(Duration::from_secs(1));
    }
}
Learn more: Creating a New Thread with spawn
Waiting for All Threads to Finish
use std::thread;
use std::time::Duration;

fn main() {
    // create a join handle to the spawned thread
    let handle = thread::spawn(|| {
        for i in 1..10 {
            println!("hi number {i} from the spawned thread!");
            thread::sleep(Duration::from_secs(1));
        }
    });

    for i in 1..5 {
        println!("hi number {i} from the main thread!");
        thread::sleep(Duration::from_secs(1));
    }

    // wait for the spawned thread to finish
    handle.join().unwrap();
}
Live Demo
Capturing the Environment of the Parent Thread
Live Demo
Learn more: Using move Closures with Threads
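The demo itself is not in the slides; the following is a minimal sketch of the idea covered in the linked book section: without move, the spawned thread would only borrow v, which the compiler rejects because the thread may outlive the borrowed data.

use std::thread;

fn main() {
    let v = vec![1, 2, 3];

    // `move` forces the closure to take ownership of `v` instead of
    // borrowing it; without `move`, this does not compile because the
    // spawned thread might outlive the data it borrows from main
    let handle = thread::spawn(move || {
        println!("Here's a vector: {v:?}");
    });

    handle.join().unwrap();
}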
Do not communicate by sharing memory; instead, share memory by communicating.
— The Go Language Documentation
The Actor Model
An actor is the basic building block of concurrent computation
In responding to messages it receives, an actor makes local decisions, creates more actors, sends more messages, and modifies its own private state
The Actor Model removes the need for lock-based synchronization
Channels
A general programming concept by which data is sent from one thread to another
Two halves: one or more transmitter(s) and one or more receiver(s)
The transmitter half is the upstream end of a “river”; the receiver half is the downstream end
Closed if either half is dropped
Multiple-Producer Single-Consumer Channels
send() / recv() vs. try_send() / try_recv()
Both variants return Result<T, E>
send() / recv() block the calling thread: recv() waits until the channel becomes non-empty, and send() on a bounded (sync) channel waits until it has available capacity
try_send() / try_recv() are non-blocking and return immediately (with an Err if the channel is full or empty, respectively)
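As a minimal sketch of the non-blocking variants (the capacity and values below are arbitrary), std's bounded mpsc::sync_channel exposes try_send(), and every Receiver exposes try_recv():

use std::sync::mpsc;

fn main() {
    // a bounded (sync) channel with capacity 1
    let (tx, rx) = mpsc::sync_channel(1);

    tx.try_send(1).unwrap();          // succeeds: there is capacity
    assert!(tx.try_send(2).is_err()); // fails immediately: channel is full

    assert_eq!(rx.try_recv().unwrap(), 1); // a message was available
    assert!(rx.try_recv().is_err());       // fails immediately: channel is empty
}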
Implementing the Actor Model: Use channels for message passing
use std::sync::mpsc;
use std::thread;
fn main() {
    // multiple producer, single consumer (MPSC)
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || {
        let val = String::from("hi");
        tx.send(val).unwrap();
    });

    // recv() blocks until a message arrives;
    // use try_recv() instead to check for a message without blocking
    let received = rx.recv().unwrap();
    println!("Got: {received}");
}
Transferring Ownership between Threads With Channels
Will this code compile successfully?
Learn more: Channels and Ownership Transference
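The code the question refers to is not reproduced on the slide; the linked book section uses a snippet along these lines, which fails to compile because send() takes ownership of val:

use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || {
        let val = String::from("hi");
        tx.send(val).unwrap();
        // error[E0382]: borrow of moved value: `val`
        // `send` took ownership, so the receiving thread now owns the String
        println!("val is {val}");
    });

    let received = rx.recv().unwrap();
    println!("Got: {received}");
}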
Cloning Multiple Producers with an MPSC Channel
let (tx, rx) = mpsc::channel();
let tx1 = tx.clone();

thread::spawn(move || {
    let vals = vec![
        String::from("hi"),
        String::from("from"),
        String::from("the"),
        String::from("thread"),
    ];
    for val in vals {
        tx1.send(val).unwrap();
        thread::sleep(Duration::from_secs(1));
    }
});

thread::spawn(move || {
    let vals = vec![
        String::from("more"),
        String::from("messages"),
        String::from("for"),
        String::from("you"),
    ];
    for val in vals {
        tx.send(val).unwrap();
        thread::sleep(Duration::from_secs(1));
    }
});

// treating `rx` as an iterator blocks on each message and ends
// when the channel is closed (all senders dropped)
for received in rx {
    println!("Got: {received}");
}
Again, do not communicate by sharing memory, and keep the state in each thread private and local!
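Putting the pieces together, here is a minimal sketch of an actor built from a thread and a channel (the Msg enum and counter logic are illustrative, not part of the lecture): the spawned thread owns its state privately and mutates it only in response to messages.

use std::sync::mpsc;
use std::thread;

// hypothetical message type for a tiny counter actor
enum Msg {
    Increment,
    Get(mpsc::Sender<u64>), // reply channel
}

fn main() {
    let (tx, rx) = mpsc::channel();

    // the actor: owns `count` privately; no other thread can touch it
    let actor = thread::spawn(move || {
        let mut count: u64 = 0;
        for msg in rx {
            match msg {
                Msg::Increment => count += 1,
                Msg::Get(reply) => reply.send(count).unwrap(),
            }
        }
    });

    for _ in 0..5 {
        tx.send(Msg::Increment).unwrap();
    }

    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send(Msg::Get(reply_tx)).unwrap();
    println!("count = {}", reply_rx.recv().unwrap());

    drop(tx);             // close the channel so the actor's loop ends
    actor.join().unwrap();
}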
But what if I really want to share memory?
Shared-State Concurrency with Mutex<T>
use std::sync::Mutex;
fn main() {
    let m = Mutex::new(5);

    {
        // `lock` blocks the current thread until the lock is acquired;
        // the call returns Err if another thread holding the lock panicked
        // since `m` is a Mutex<T>, we cannot access its value directly
        let mut num = m.lock().unwrap();
        // the returned value is a smart pointer, `MutexGuard`, which
        // implements the `Deref` and `Drop` traits; the lock is released
        // when the guard goes out of scope
        *num = 6;
    }

    println!("m = {m:?}");
}
Learn more: The API of Mutex<T>
Sharing a Mutex<T> Between Threads
use std::sync::Mutex;
use std::thread;
fn main() {
    let counter = Mutex::new(0);
    let mut handles = vec![];

    for _ in 0..10 {
        let handle = thread::spawn(move || {
            let mut num = counter.lock().unwrap();
            *num += 1;
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Result: {}", *counter.lock().unwrap());
}
Will this compile successfully?
Learn more: Sharing a Mutex<T> Between Multiple Threads
Multiple Ownership with Multiple Threads
Will this compile successfully?
Learn more: Multiple Ownership with Multiple Threads
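The slide's code is not reproduced here; the linked book section tries to give every thread ownership via Rc<T>, roughly as sketched below, and the compiler rejects it because Rc<Mutex<i32>> is not Send (its reference count is not updated atomically).

use std::rc::Rc;
use std::sync::Mutex;
use std::thread;

fn main() {
    let counter = Rc::new(Mutex::new(0));
    let mut handles = vec![];

    for _ in 0..10 {
        let counter = Rc::clone(&counter);
        // error[E0277]: `Rc<Mutex<i32>>` cannot be sent between threads safely
        let handle = thread::spawn(move || {
            let mut num = counter.lock().unwrap();
            *num += 1;
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Result: {}", *counter.lock().unwrap());
}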
Atomic Reference Counting with Arc<T>
Learn more: Atomic Reference Counting with Arc<T>
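The working version from the linked book section swaps Rc<T> for Arc<T>, whose reference count is updated with atomic operations, making it safe to share across threads:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Arc<T>: atomically reference-counted, so it can cross thread boundaries
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];

    for _ in 0..10 {
        let counter = Arc::clone(&counter);
        let handle = thread::spawn(move || {
            let mut num = counter.lock().unwrap();
            *num += 1;
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Result: {}", *counter.lock().unwrap()); // prints 10
}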
Other Atomic Types in the Rust Standard Library
Several atomic types that provide atomic access to primitive types
safe to share between threads
such as AtomicUsize, AtomicBool, and so on
Accessed via load() and store(), plus read-modify-write operations such as fetch_add()
Guaranteed to be lock-free
Can be used as building blocks of other concurrent types
Learn more: Atomic Types in the Rust Standard Library
use std::sync::atomic::{AtomicUsize, Ordering};
static GLOBAL_THREAD_COUNT: AtomicUsize = AtomicUsize::new(0);
// relaxed ordering doesn't synchronize anything except the global
// thread counter itself
let old_thread_count = GLOBAL_THREAD_COUNT.fetch_add(1, Ordering::Relaxed);

// the printed count may already be stale, because another thread
// may have changed the static value in the meantime
println!("live threads: {}", old_thread_count + 1);
Learn more: Atomic Types in the Rust Standard Library
Extensible Concurrency with the Send Marker Trait
Send, a std::marker trait, indicates that ownership of values of the type implementing Send can be transferred between threads
It is safe to send values of such a type to another thread
Automatically implemented by the compiler when appropriate (Send is an auto trait)
Learn more: Send and Sync
Types That Are Send
Almost every Rust type is Send, with a few exceptions
Rc<T> cannot be Send, as two threads might update the reference count at the same time
Any type composed entirely of Send types is automatically marked as Send
Extensible Concurrency with the Sync Marker Trait
Sync indicates that it is safe for the type implementing Sync to be referenced from multiple threads
A type T is Sync if and only if &T is Send
Automatically implemented by the compiler when appropriate (Sync is also an auto trait)
Learn more: Send and Sync
Types That Are Sync
As with Send, most types are Sync, with a few exceptions
Rc<T> is not Sync
RefCell<T> is not Sync, since its runtime borrow checking is not thread-safe
Mutex<T> is Sync
Any type composed entirely of Sync types is automatically marked as Sync
Learn more: Allowing Access from Multiple Threads with Sync
So long, multi-threading!
The Rust Programming Language, Chapters 16 and 21