Mutex, lock Examples
Use Mutex to modify shared data on multiple threads safely. Store a string in a mutex.
Rust
This page was last reviewed on Jun 1, 2023.
Mutex. In Rust we can modify data that is shared among many threads. But it is essential that we wrap accesses to this data in a mutex.
With a mutex, writes will not overlap and the data will not become corrupted. We can place a mutex in an Arc and pass it to many threads.
Example program. To begin, we place a Mutex in an Arc. Our program is sharing a String between many threads. Each thread can store a String in the Mutex.
Tip To access the Mutex data, we must call lock(). If we use try_lock instead, the access will not block, and we might not get the data.
Tip 2 We can unwrap() the result of the lock() method. This gives us a MutexGuard, which acts like a mutable reference to the data.
Here In the thread, we write through the guard to the shared value. We store a string.
use std::sync::*;
use std::thread;

fn main() {
    let data = Arc::new(Mutex::new(String::new()));
    let mut children = vec![];
    // Create threads.
    for _ in 0..8 {
        let data = Arc::clone(&data);
        children.push(thread::spawn(move || {
            // Lock blocks until the mutex is available.
            let mut data = data.lock().unwrap();
            // Generate a string.
            let number = 100;
            let result = "Data ".to_string() + &number.to_string();
            // Store string in mutex.
            *data = result;
        }));
    }
    // Join all threads.
    for child in children {
        let _result = child.join();
    }
    // Print shared string.
    let result = data.lock().unwrap();
    println!("{}", result);
}
Data 100
Try_lock. Suppose we have some behavior that only needs to happen once, and can happen on any thread. With try_lock, we can run some code once, and other threads can skip over the lock.
Step 1 We call try_lock on the Mutex. The first thread that reaches this acquires the lock; threads that find the lock already held skip past it.
Step 2 We ensure that the data usize is initialized to the value 5000. This only occurs once.
Step 3 We print the result, which is always 5000—but any of the 8 threads could have acquired the lock.
use std::sync::*;
use std::thread;

fn main() {
    let data = Arc::new(Mutex::new(0usize));
    let mut children = vec![];
    for _ in 0..8 {
        let data = Arc::clone(&data);
        children.push(thread::spawn(move || {
            // Step 1: use try_lock.
            if let Ok(mut data) = data.try_lock() {
                // Step 2: check that data is not yet assigned.
                if *data == 0 {
                    println!("Data assigned in try_lock");
                    *data = 5000;
                }
            }
        }));
    }
    for child in children {
        let _ = child.join();
    }
    // Step 3: print result.
    let result = data.lock().unwrap();
    println!("{}", result);
}
Data assigned in try_lock
5000
Mutex benchmark. Sometimes we have two fields we want to access with a Mutex. Instead of putting a Mutex around each field, we can combine the 2 fields and use 1 Mutex.
Version 1 In this version of the code we create a Test1 struct and share it among 8 threads with an Arc. We lock 2 Mutexes on each iteration.
Version 2 Here we have a tuple of the 2 vectors contained in a Mutex. We only lock once on each iteration.
Result Reducing the number of Mutexes by storing multiple fields in a single Mutex improves performance.
However This optimization only helps when the fields are accessed at the same time.
use std::sync::*;
use std::thread;
use std::time::*;

const MAX: usize = 1000000;
const THREADS: usize = 8;

struct Test1 {
    vals1: Mutex<Vec<usize>>,
    vals2: Mutex<Vec<usize>>,
}

struct Test2 {
    vals: Mutex<(Vec<usize>, Vec<usize>)>,
}

fn main() {
    // Version 1: use 2 separate Mutexes.
    let t0 = Instant::now();
    let arc = Arc::new(Test1 {
        vals1: Mutex::new(vec![]),
        vals2: Mutex::new(vec![]),
    });
    let mut thread_vec = vec![];
    for _ in 0..THREADS {
        thread_vec.push(arc.clone());
    }
    let mut children = vec![];
    for t in thread_vec {
        children.push(thread::spawn(move || {
            for _ in 0..MAX {
                let mut vals1 = t.vals1.lock().unwrap();
                vals1.push(0);
                let mut vals2 = t.vals2.lock().unwrap();
                vals2.push(0);
            }
        }));
    }
    for child in children {
        let _ = child.join();
    }
    println!("{}", t0.elapsed().as_nanos());

    // Version 2: use 1 Mutex with 2 separate values in it.
    let t1 = Instant::now();
    let arc = Arc::new(Test2 {
        vals: Mutex::new((vec![], vec![])),
    });
    let mut thread_vec = vec![];
    for _ in 0..THREADS {
        thread_vec.push(arc.clone());
    }
    let mut children = vec![];
    for t in thread_vec {
        children.push(thread::spawn(move || {
            for _ in 0..MAX {
                let mut vals = t.vals.lock().unwrap();
                vals.0.push(0);
                vals.1.push(0);
            }
        }));
    }
    for child in children {
        let _ = child.join();
    }
    println!("{}", t1.elapsed().as_nanos());
}
1663781334 ns    lock(), push(), lock(), push()
1334830000 ns    lock(), push(), push()
It is important not to keep the locked mutex guard in scope too long. If 2 threads are blocked on a single mutex, the program may stall.
So It is essential to copy data out of the mutex and release the lock as soon as possible. This can help avoid stalls and deadlocks.
Tip The lock is released automatically when the guard returned by lock() goes out of scope.
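As a minimal sketch of this tip (the usize value and the names shared and value are assumptions for illustration, not part of the examples above), we can copy the value out inside a small block so the guard is dropped right away:
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let shared = Arc::new(Mutex::new(10usize));
    let shared2 = Arc::clone(&shared);
    let child = thread::spawn(move || {
        // Copy the value out inside a small block so the guard is dropped immediately.
        let value = {
            let data = shared2.lock().unwrap();
            *data
        };
        // The mutex is already unlocked here, so further work does not block other threads.
        println!("Copied value: {}", value);
    });
    let _ = child.join();
    // The main thread can lock without waiting on the worker's extra work.
    println!("{}", shared.lock().unwrap());
}
Because the guard is dropped at the end of the inner block, other threads can lock the mutex while this thread continues working with the copied value.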
Threads make reasoning about performance difficult. But Mutex can be used with low threading latency if we keep the program simple and the locked regions short.
Info When possible, prefer atomic types like AtomicUsize to synchronize between threads.
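As a minimal sketch of that suggestion (the 8-thread counter here is an assumption for illustration, not from the examples above), an AtomicUsize can replace a Mutex around a simple counter:
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let count = Arc::new(AtomicUsize::new(0));
    let mut children = vec![];
    for _ in 0..8 {
        let count = Arc::clone(&count);
        children.push(thread::spawn(move || {
            // No lock is needed: fetch_add is an atomic read-modify-write.
            count.fetch_add(1, Ordering::SeqCst);
        }));
    }
    for child in children {
        let _ = child.join();
    }
    // Prints 8.
    println!("{}", count.load(Ordering::SeqCst));
}
Here fetch_add performs the increment atomically, so no lock or guard is needed at all.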
Mutexes are reliable and fairly easy to use in Rust. It is important to let locks release quickly, and copying data out of the Mutex can help with this.
© 2007-2024 Sam Allen.