There is one subtle point to make a note of here. The first time JoinAll::poll is called, it will call poll on each future in the collection. Polling each future will kick off whatever operation they represent and allow them to progress concurrently. This is one way to achieve concurrency with lazy coroutines, such as the ones we’re dealing with here.
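To make that first-poll behavior concrete, here is a minimal, self-contained sketch of a `JoinAll`-style future. The `PollState` enum, `Future` trait, and `Countdown` future below are simplified, illustrative stand-ins, not the chapter's actual implementation; the point is only that the first call to `poll` on the collection polls every pending future, which is what kicks all the operations off:

```rust
enum PollState<T> {
    Ready(T),
    NotReady,
}

trait Future {
    type Output;
    fn poll(&mut self) -> PollState<Self::Output>;
}

// A toy future that becomes ready after being polled `n + 1` times.
struct Countdown(u32);

impl Future for Countdown {
    type Output = ();
    fn poll(&mut self) -> PollState<()> {
        if self.0 == 0 {
            PollState::Ready(())
        } else {
            self.0 -= 1;
            PollState::NotReady
        }
    }
}

struct JoinAll<F: Future> {
    // Each entry is Some(future) until it resolves, then None.
    futures: Vec<Option<F>>,
}

impl<F: Future> Future for JoinAll<F> {
    type Output = ();
    fn poll(&mut self) -> PollState<()> {
        let mut all_done = true;
        for slot in self.futures.iter_mut() {
            if let Some(fut) = slot {
                // Polling every pending future here is what starts all the
                // operations and lets them progress concurrently.
                match fut.poll() {
                    PollState::Ready(_) => *slot = None,
                    PollState::NotReady => all_done = false,
                }
            }
        }
        if all_done {
            PollState::Ready(())
        } else {
            PollState::NotReady
        }
    }
}

// Counts how many joint polls it takes before every child future resolves.
fn polls_until_done(n: u32) -> u32 {
    let mut joined = JoinAll {
        futures: (0..n).map(|k| Some(Countdown(k))).collect(),
    };
    let mut polls = 0;
    loop {
        polls += 1;
        if let PollState::Ready(()) = joined.poll() {
            return polls;
        }
    }
}

fn main() {
    // Resolves after as many polls as the *slowest* future needs,
    // not the sum of them all.
    println!("resolved after {} polls", polls_until_done(5));
}
```

Note how the combined future finishes after as many polls as its slowest child needs, not the sum over all children; that is the essence of why joining lazy coroutines gives us concurrency.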

Next up are the changes we’ll make in main.rs.

The main function will stay the same, as will the imports and declarations at the start of the file, so I'll only present the coroutine/wait functions that we've changed:
coroutine fn request(i: usize) {
    let path = format!("/{}/HelloWorld{i}", i * 1000);
    let txt = Http::get(&path).wait;
    println!("{txt}");
}
coroutine fn async_main() {
    println!("Program starting");
    let mut futures = vec![];
    for i in 0..5 {
        futures.push(request(i));
    }
    future::join_all(futures).wait;
}

Note

In the repository, you’ll find the correct code to put in main.rs in ch07/c-async-await/original_main.rs if you ever lose track of it with all the copy/pasting we’re doing.

Now we have two coroutine/wait functions. async_main stores the set of coroutines created by request in a Vec<T: Future>.

Then it creates a JoinAll future and calls wait on it.

The next coroutine/wait function is request, which takes an integer as input and uses it to create a GET request. This coroutine will in turn wait for the response and print the result once it arrives.

Since we create the requests with delays of 0, 1, 2, 3, 4 seconds, we should expect the entire program to finish in just over four seconds because all the tasks will be in progress concurrently. The ones with short delays will be finished by the time the task with a four-second delay finishes.
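The arithmetic behind that expectation is worth spelling out. With delays of 0 through 4 seconds, running the requests one after another would take the sum of the delays, while running them concurrently takes roughly the longest single delay. A tiny sketch (the function names here are just illustrative):

```rust
// Expected wall-clock time if the requests run one after another.
fn sequential_secs(delays: &[u64]) -> u64 {
    delays.iter().sum()
}

// Expected wall-clock time if they all run concurrently:
// bounded by the slowest request.
fn concurrent_secs(delays: &[u64]) -> u64 {
    delays.iter().copied().max().unwrap_or(0)
}

fn main() {
    let delays: Vec<u64> = (0..5).collect(); // 0, 1, 2, 3, 4 seconds
    println!(
        "sequential: {}s, concurrent: {}s",
        sequential_secs(&delays),
        concurrent_secs(&delays)
    );
}
```

So sequentially we'd wait around ten seconds, but concurrently we should see just over four.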

We can now transform our coroutine/wait functions into state machines by making sure we're in the folder ch07/c-async-await and writing corofy ./src/main.rs.

You should now see a file called main_corofied.rs in the src folder. Copy its contents and replace what’s in main.rs with it.
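If you're curious what the rewritten file roughly amounts to, here is a hand-written sketch of the kind of state machine such a rewrite produces for request. Every name below (PollState, FakeGet, RequestState, drive) is an illustrative stand-in, not corofy's actual output: each wait point becomes a state, and poll resumes from wherever the coroutine left off. FakeGet stands in for the Http::get future and simply resolves with a canned string on its second poll:

```rust
enum PollState<T> {
    Ready(T),
    NotReady,
}

trait Future {
    type Output;
    fn poll(&mut self) -> PollState<Self::Output>;
}

// Stand-in for the Http::get future: starts the "operation" on the first
// poll and resolves with a canned response on the second.
struct FakeGet {
    path: String,
    polled: bool,
}

impl Future for FakeGet {
    type Output = String;
    fn poll(&mut self) -> PollState<String> {
        if self.polled {
            PollState::Ready(format!("response for {}", self.path))
        } else {
            self.polled = true; // first poll: kick off the operation
            PollState::NotReady
        }
    }
}

// One state per suspension point in the original coroutine.
enum RequestState {
    Start { i: usize },
    Wait1 { fut: FakeGet },
    Resolved,
}

struct Request(RequestState);

impl Future for Request {
    type Output = String;
    fn poll(&mut self) -> PollState<String> {
        loop {
            match &mut self.0 {
                // Code before the first `wait` runs here, then we move to
                // the Wait1 state holding the future we're waiting on.
                RequestState::Start { i } => {
                    let path = format!("/{}/HelloWorld{}", *i * 1000, *i);
                    self.0 = RequestState::Wait1 {
                        fut: FakeGet { path, polled: false },
                    };
                }
                RequestState::Wait1 { fut } => match fut.poll() {
                    PollState::Ready(txt) => {
                        self.0 = RequestState::Resolved;
                        return PollState::Ready(txt);
                    }
                    PollState::NotReady => return PollState::NotReady,
                },
                RequestState::Resolved => panic!("polled after completion"),
            }
        }
    }
}

// Polls a future to completion (busy-polling, for illustration only).
fn drive<F: Future>(mut f: F) -> F::Output {
    loop {
        if let PollState::Ready(v) = f.poll() {
            return v;
        }
    }
}

fn main() {
    let txt = drive(Request(RequestState::Start { i: 1 }));
    println!("{txt}");
}
```

Notice that the state machine only stores the data that must survive across a wait point; everything else lives in plain local variables inside poll.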

If you run the program by writing cargo run, you should get the following output:
Program starting
FIRST POLL – START OPERATION
FIRST POLL – START OPERATION
FIRST POLL – START OPERATION
FIRST POLL – START OPERATION
FIRST POLL – START OPERATION
HTTP/1.1 200 OK
content-length: 11
connection: close
content-type: text/plain; charset=utf-8
date: Tue, xx xxx xxxx 21:11:36 GMT
HelloWorld0
HTTP/1.1 200 OK
content-length: 11
connection: close
content-type: text/plain; charset=utf-8
date: Tue, xx xxx xxxx 21:11:37 GMT
HelloWorld1
HTTP/1.1 200 OK
content-length: 11
connection: close
content-type: text/plain; charset=utf-8
date: Tue, xx xxx xxxx 21:11:38 GMT
HelloWorld2
HTTP/1.1 200 OK
content-length: 11
connection: close
content-type: text/plain; charset=utf-8
date: Tue, xx xxx xxxx 21:11:39 GMT
HelloWorld3
HTTP/1.1 200 OK
content-length: 11
connection: close
content-type: text/plain; charset=utf-8
date: Tue, xx xxx xxxx 21:11:40 GMT
HelloWorld4
ELAPSED TIME: 4.0084987

The thing to make a note of here is the elapsed time. It's now just over four seconds, exactly what we expected now that our futures run concurrently.

If we take a look at how coroutine/wait changed the experience of writing coroutines from a programmer's perspective, we'll see that we're much closer to our goal now:

  • Efficient: State machines require no context switches and only save/restore the data associated with that specific task. We have no growing vs segmented stack issues, as they all use the same OS-provided stack.
  • Expressive: We can write code the same way as we do in "normal" Rust, and with compiler support, we can get the same error messages and use the same tooling.
  • Easy to use and hard to misuse: This is a point where we probably fall slightly short of a typical fiber/green threads implementation due to the fact that our programs are heavily transformed “behind our backs” by the compiler, which can result in some rough edges. Specifically, you can’t call an async function from a normal function and expect anything meaningful to happen; you have to actively poll it to completion somehow, which gets more complex as we start adding runtimes into the mix. However, for the most part, we can write programs just the way we’re used to.
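The "actively poll it to completion" point above can be sketched in a few lines. This minimal block_on-style driver uses the same kind of simplified Future trait as elsewhere in this chapter (the types and names here are illustrative stand-ins); it busy-polls for simplicity, whereas a real runtime would park the thread and rely on a reactor to wake it:

```rust
enum PollState<T> {
    Ready(T),
    NotReady,
}

trait Future {
    type Output;
    fn poll(&mut self) -> PollState<Self::Output>;
}

// A normal function can't just "call" a coroutine and get its result;
// something has to drive `poll` in a loop until the future resolves.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    loop {
        match fut.poll() {
            PollState::Ready(val) => return val,
            // A real runtime would yield or park the thread here instead
            // of spinning.
            PollState::NotReady => continue,
        }
    }
}

// A toy future that resolves with 42 after a few polls.
struct ReadyAfter {
    polls_left: u32,
}

impl Future for ReadyAfter {
    type Output = u32;
    fn poll(&mut self) -> PollState<u32> {
        if self.polls_left == 0 {
            PollState::Ready(42)
        } else {
            self.polls_left -= 1;
            PollState::NotReady
        }
    }
}

fn main() {
    let answer = block_on(ReadyAfter { polls_left: 3 });
    println!("{answer}");
}
```

This is the rough shape of what every runtime must provide at its outermost edge: some entry point that bridges from the synchronous world into the world of pollable futures.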
