Mutex
In Rust we can modify data that is shared among many threads, but it is essential that we wrap accesses to this data in a Mutex.

With a Mutex, writes will not overlap and the data will not become corrupted. We can place a Mutex in an Arc and pass it to many threads.
To begin, we place a Mutex in an Arc by calling Arc::new on the result of Mutex::new. Our program shares a String between many threads. Each thread can store a String in the Mutex.
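As a minimal sketch of this setup (a single worker thread and a placeholder string, not the full program below), Arc::clone gives each thread a cheap handle to the same Mutex:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Arc::new wraps the result of Mutex::new so the data can be shared.
    let shared = Arc::new(Mutex::new(String::new()));

    // Arc::clone copies the pointer, not the String inside.
    let handle_data = Arc::clone(&shared);
    let child = thread::spawn(move || {
        *handle_data.lock().unwrap() = "hello".to_string();
    });
    child.join().unwrap();

    println!("{}", shared.lock().unwrap());
}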
To access the Mutex data, we must call lock(). If we use try_lock, the access will not block and we might not get the data. We unwrap() the result of the lock() method; lock() returns a Result, which is an Err only if another thread panicked while holding the lock. Unwrapping gives us a guard that acts as a mutable reference to the data. After joining the threads, we lock the Mutex once more and print the shared string.

use std::sync::*;
use std::thread;

fn main() {
    let data = Arc::new(Mutex::new(String::new()));
    let mut children = vec![];
    // Create threads.
    for _ in 0..8 {
        let data = Arc::clone(&data);
        children.push(thread::spawn(move || {
            // Lock blocks until the mutex is available.
            let mut data = data.lock().unwrap();
            // Generate a string.
            let number = 100;
            let result = "Data ".to_string() + &number.to_string();
            // Store string in mutex.
            *data = result;
        }));
    }
    // Join all threads.
    for child in children {
        let _result = child.join();
    }
    // Print shared string.
    let result = data.lock().unwrap();
    println!("{}", result);
}

Output:
Data 100
Try_lock
Suppose we have some behavior that only needs to happen once, and can happen on any thread. With try_lock, we can run some code once, and other threads can skip over the lock.
Each thread calls try_lock on the Mutex. A thread that finds the Mutex free acquires the lock; threads that find it already locked skip past without blocking. Inside the lock, the usize is set to 5000 only if it is still 0, so the assignment occurs once. Finally, we print the result.

use std::sync::*;
use std::thread;

fn main() {
    let data = Arc::new(Mutex::new(0usize));
    let mut children = vec![];
    for _ in 0..8 {
        let data = Arc::clone(&data);
        children.push(thread::spawn(move || {
            // Step 1: use try_lock.
            if let Ok(mut data) = data.try_lock() {
                // Step 2: check that data is not yet assigned.
                if *data == 0 {
                    println!("Data assigned in try_lock");
                    *data = 5000;
                }
            }
        }));
    }
    for child in children {
        let _ = child.join();
    }
    // Step 3: print result.
    let result = data.lock().unwrap();
    println!("{}", result);
}

Output:
Data assigned in try_lock
5000
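For reference, try_lock returns a Result, so a sketch that handles both outcomes explicitly (rather than with if let) might look like this; the variable names are only illustrative:

use std::sync::{Mutex, TryLockError};

fn main() {
    let m = Mutex::new(0usize);
    match m.try_lock() {
        // The lock was free: we get a guard and can modify the data.
        Ok(mut guard) => *guard += 1,
        // Another thread holds the lock: skip instead of blocking.
        Err(TryLockError::WouldBlock) => println!("lock busy, skipping"),
        // A thread panicked while holding the lock.
        Err(TryLockError::Poisoned(_)) => println!("lock poisoned"),
    }
    println!("{}", m.lock().unwrap());
}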
Mutex benchmark
Sometimes we have two fields we want to access with a Mutex. Instead of putting a Mutex around each field, we can combine the 2 fields and use 1 Mutex.
Version 1: We create a struct with 2 Mutexes and share it among 8 threads with an Arc. We lock 2 Mutexes on each iteration.
Version 2: We place both Vecs inside a single Mutex, so we only lock once on each iteration.
Result: Using a single Mutex improves performance.

use std::sync::*;
use std::thread;
use std::time::*;

const MAX: usize = 1000000;
const THREADS: usize = 8;

struct Test1 {
    vals1: Mutex<Vec<usize>>,
    vals2: Mutex<Vec<usize>>,
}

struct Test2 {
    vals: Mutex<(Vec<usize>, Vec<usize>)>,
}

fn main() {
    // Version 1: use 2 separate Mutexes.
    let t0 = Instant::now();
    let arc = Arc::new(Test1 {
        vals1: Mutex::new(vec![]),
        vals2: Mutex::new(vec![]),
    });
    let mut thread_vec = vec![];
    for _ in 0..THREADS {
        thread_vec.push(arc.clone());
    }
    let mut children = vec![];
    for t in thread_vec {
        children.push(thread::spawn(move || {
            for _ in 0..MAX {
                let mut vals1 = t.vals1.lock().unwrap();
                vals1.push(0);
                let mut vals2 = t.vals2.lock().unwrap();
                vals2.push(0);
            }
        }));
    }
    for child in children {
        let _ = child.join();
    }
    println!("{}", t0.elapsed().as_nanos());

    // Version 2: use 1 Mutex with 2 separate values in it.
    let t1 = Instant::now();
    let arc = Arc::new(Test2 {
        vals: Mutex::new((vec![], vec![])),
    });
    let mut thread_vec = vec![];
    for _ in 0..THREADS {
        thread_vec.push(arc.clone());
    }
    let mut children = vec![];
    for t in thread_vec {
        children.push(thread::spawn(move || {
            for _ in 0..MAX {
                let mut vals = t.vals.lock().unwrap();
                vals.0.push(0);
                vals.1.push(0);
            }
        }));
    }
    for child in children {
        let _ = child.join();
    }
    println!("{}", t1.elapsed().as_nanos());
}

Output:
1663781334 ns    lock(), push(), lock(), push()
1334830000 ns    lock(), push(), push()
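The single-Mutex version wins because each lock() and unlock carries a fixed cost, and it performs half as many of them per iteration. Combining the fields works well here because both Vecs are always updated together; if different threads usually needed different fields, separate Mutexes could reduce contention instead.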
It is important not to keep locked mutex data in scope too long. If 2 threads are blocking on a single mutex, the program may stall.
We should release the mutex as soon as possible; this can help avoid deadlocks. A mutex releases its lock automatically when its guard goes out of scope.

Threads make thinking about performance difficult, but a Mutex can be used to reduce threading latency if we can reduce the complexity of the program.
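As a small sketch of early release (a hypothetical counter, not one of the programs above), an inner scope drops the MutexGuard so the lock is released before any slow work runs:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0usize));
    let shared = Arc::clone(&counter);
    let child = thread::spawn(move || {
        // Keep the guard in a small scope so the lock is released early.
        {
            let mut value = shared.lock().unwrap();
            *value += 1;
        } // MutexGuard dropped here; other threads can now lock.

        // Slow work happens without holding the lock.
        let _sum: u64 = (0..1_000_000u64).sum();
    });
    child.join().unwrap();
    println!("{}", counter.lock().unwrap());
}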
In some cases we can use an AtomicUsize to synchronize between threads. Mutexes are reliable and fairly easy to use in Rust. It is important to allow the locks to release; copying data out of the Mutex can help with this.
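As a sketch of that alternative (a plain shared counter, used here only for illustration), AtomicUsize needs no lock at all:

use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let count = Arc::new(AtomicUsize::new(0));
    let mut children = vec![];
    for _ in 0..8 {
        let count = Arc::clone(&count);
        children.push(thread::spawn(move || {
            // fetch_add increments atomically; no Mutex is needed.
            count.fetch_add(1, Ordering::SeqCst);
        }));
    }
    for child in children {
        let _ = child.join();
    }
    println!("{}", count.load(Ordering::SeqCst));
}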