Experimenting with our new runtime
If you remember from Chapter 7, we implemented a join_all function to get our futures running concurrently. In libraries such as Tokio, you’ll find a join_all function too, as well as the slightly more versatile FuturesUnordered API, which allows you to join a set of predefined futures and run them concurrently.
These are convenient tools to have, but they do force you to know in advance which futures you want to run concurrently. If the futures you run using join_all want to spawn new futures that run concurrently with their “parent” future, there is no way to do that using these methods alone.
However, our newly created spawn functionality does exactly this. Let’s put it to the test!
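To make the difference concrete, here is a minimal sketch, written in the same corofy syntax as the examples below, of a coroutine that spawns a new future while it’s already running. The parent/child names and paths are just for illustration; the point is that join_all can’t express this, since its set of futures is fixed before the call:

coroutine fn child() {
    let txt = Http::get("/1000/child").wait;
    println!("{txt}");
}

coroutine fn parent() {
    // Spawn a new top-level future mid-flight, concurrent with this one.
    // With join_all, this child would have to be known up front.
    runtime::spawn(child());
    let txt = Http::get("/2000/parent").wait;
    println!("{txt}");
}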
An example using concurrency
Note
The exact same version of this program can be found in the ch08/c-runtime-executor folder.
Let’s try a new program that looks like this:
fn main() {
    let mut executor = runtime::init();
    executor.block_on(async_main());
}

coroutine fn request(i: usize) {
    let path = format!("/{}/HelloWorld{i}", i * 1000);
    let txt = Http::get(&path).wait;
    println!("{txt}");
}

coroutine fn async_main() {
    println!("Program starting");
    for i in 0..5 {
        let future = request(i);
        runtime::spawn(future);
    }
}
This is pretty much the same example we used to show how join_all works in Chapter 7, only this time, we spawn them as top-level futures instead.
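For comparison, the Chapter 7 version collected the futures up front and joined them as one unit. It looked roughly like this (a rough reconstruction, so the details may differ slightly):

coroutine fn async_main() {
    println!("Program starting");
    let mut futures = vec![];
    for i in 0..5 {
        futures.push(request(i));
    }
    future::join_all(futures).wait;
}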
To run this example, follow these steps:
- Replace everything below the imports in main.rs with the preceding code.
- Run corofy ./src/main.rs.
- Copy everything from main_corofied.rs to main.rs and delete main_corofied.rs.
- Fix the fact that corofy doesn’t know we changed our futures to take waker: &Waker as an argument. The easiest way is to simply run cargo check and let the compiler guide you to the places that need to change (see the sketch after this list).
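To give you an idea of what the compiler will point you to, recall that our updated Future trait now looks like this:

pub trait Future {
    type Output;
    fn poll(&mut self, waker: &Waker) -> PollState<Self::Output>;
}

So, in the code corofy generated, every poll function needs to gain the waker: &Waker parameter, and every call to poll on a stored child future needs to forward it, for example f1.poll(waker) instead of f1.poll(). The f1 name here is just a stand-in for whatever corofy named the stored future in your generated code.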
Now, you can run the example and see that the tasks run concurrently, just as they did using join_all in Chapter 7. If you measure the time it takes to run the tasks, you’ll find that the whole program finishes in around 4 seconds, which makes sense: we spawned 5 futures and ran them concurrently, and the longest wait time for a single future (the i = 4 request, delayed by 4,000 ms) was 4 seconds.
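If you want to verify the timing yourself, one simple way (my addition, not part of the example) is to wrap the block_on call in main with std::time::Instant:

use std::time::Instant;

fn main() {
    let mut executor = runtime::init();
    let start = Instant::now();
    executor.block_on(async_main());
    // Five concurrent requests delayed by 0-4 seconds should
    // finish in just over 4 seconds, not 0+1+2+3+4 = 10 seconds.
    println!("Elapsed: {:?}", start.elapsed());
}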
Now, let’s finish off this chapter with another interesting example.
Running multiple futures concurrently and in parallel
This time, we’ll spawn multiple OS threads and give each thread its own Executor, so that we can run the previous example in parallel while all Executor instances share the same Reactor.
We’ll also make a small adjustment to the printout so that we don’t get overwhelmed with data.
Our new program will look like this:
mod future;
mod http;
mod runtime;
use crate::http::Http;
use future::{Future, PollState};
use runtime::{Executor, Waker};
use std::thread::Builder;

fn main() {
    let mut executor = runtime::init();
    let mut handles = vec![];
    for i in 1..12 {
        let name = format!("exec-{i}");
        let h = Builder::new().name(name).spawn(move || {
            let mut executor = Executor::new();
            executor.block_on(async_main());
        }).unwrap();
        handles.push(h);
    }
    executor.block_on(async_main());
    handles.into_iter().for_each(|h| h.join().unwrap());
}

coroutine fn request(i: usize) {
    let path = format!("/{}/HelloWorld{i}", i * 1000);
    let txt = Http::get(&path).wait;
    let txt = txt.lines().last().unwrap_or_default();
    println!("{txt}");
}

coroutine fn async_main() {
    println!("Program starting");
    for i in 0..5 {
        let future = request(i);
        runtime::spawn(future);
    }
}
The machine I’m currently running this on has 12 cores, so when I create 11 new threads to run the same asynchronous tasks, I use all the cores on my machine. As you’ll notice, we also give each thread a unique name that we’ll use when logging so that it’s easier to track what happens behind the scenes.
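As a side note, if you don’t want to hardcode the thread count for your own machine, one possible tweak (mine, not part of the example above) is to ask the standard library how many cores are available and size the loop accordingly:

use std::thread::{available_parallelism, Builder};

fn main() {
    let mut executor = runtime::init();
    // The main thread takes one core; spawn an executor
    // thread for each of the remaining ones.
    let cores = available_parallelism().map(|n| n.get()).unwrap_or(1);
    let mut handles = vec![];
    for i in 1..cores {
        let name = format!("exec-{i}");
        let h = Builder::new()
            .name(name)
            .spawn(move || {
                let mut executor = Executor::new();
                executor.block_on(async_main());
            })
            .unwrap();
        handles.push(h);
    }
    executor.block_on(async_main());
    handles.into_iter().for_each(|h| h.join().unwrap());
}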