Note

While I use 12 cores, you should use the number of cores on your machine. If we increase this number too much, the OS will not be able to give us more cores to run our program on in parallel and will instead start pausing/resuming the threads we create, which adds no value to us since we already handle the concurrency aspect ourselves in an async runtime.
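Rather than hardcoding the core count, you can ask the standard library how much parallelism the OS reports. This is a minimal sketch using `std::thread::available_parallelism`; the printed message and the fallback to `1` are my own choices, not something from the example:

```rust
use std::thread;

fn main() {
    // available_parallelism() returns Ok(NonZeroUsize) on platforms
    // that can report it, so we fall back to 1 on error.
    let cores = thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(1);
    println!("Using {cores} OS threads");
}
```

This way, the same binary adapts to whatever machine it runs on instead of assuming 12 cores.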

You’ll have to do the same steps as we did in the last example:

  1. Replace the code that’s currently in main.rs with the preceding code.
  2. Run corofy ./src/main.rs.
  3. Copy everything from main_corofied.rs to main.rs and delete main_corofied.rs.
  4. Fix the fact that corofy doesn’t know we changed our futures to take waker: &Waker as an argument. The easiest way is to simply run cargo check and let the compiler guide you to the places we need to change.

Now, if you run the program, you’ll see that it still only takes around 4 seconds to run, but this time we made 60 GET requests instead of 5. This time, we ran our futures both concurrently and in parallel.

At this point, you can continue experimenting with shorter delays or more requests and see how many concurrent tasks you can have before the system breaks down.

Pretty quickly, printouts to stdout will become a bottleneck, but you can disable those. You can also create a blocking version using OS threads and see how many threads you can run concurrently before the system breaks down, compared to this version.
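For the blocking comparison, a minimal sketch looks something like the following. To keep it self-contained, I've replaced the actual blocking GET request with a `thread::sleep` that stands in for a server delaying its response; the delay and the count of 60 are assumptions mirroring the example, not code from the book:

```rust
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    let start = Instant::now();
    // Spawn one OS thread per "request". Each thread blocks in sleep,
    // standing in for a blocking HTTP GET to a slow server.
    let handles: Vec<_> = (0..60)
        .map(|_| {
            thread::spawn(|| {
                thread::sleep(Duration::from_millis(100));
            })
        })
        .collect();
    // Wait for all the blocking "requests" to finish.
    for handle in handles {
        handle.join().unwrap();
    }
    println!("60 blocking 'requests' took {:?}", start.elapsed());
}
```

Since the threads all block in parallel, the total time stays close to a single delay, but every concurrent request now costs a full OS thread with its own stack, which is exactly the overhead you can measure by cranking up the count until spawning fails.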

Only imagination sets the limit, but do take the time to have some fun with what you’ve created before we continue with the next chapter.

One thing to be careful about: don't test the concurrency limits of your system by sending these kinds of requests to a random server you don't control, since you can potentially overwhelm it and cause problems for others.

Summary

So, what a ride! As I said in the introduction for this chapter, this is one of the biggest ones in this book, but even though you might not realize it, you’ve already got a better grasp of how asynchronous Rust works than most people do. Great work!

In this chapter, you learned a lot about runtimes and why Rust designed the Future trait and the Waker the way it did. You also learned about reactors and executors, and about different ways of achieving concurrency: through the join_all function and by spawning new top-level futures on the executor.

By now, you also have an idea of how we can achieve both concurrency and parallelism by combining our own runtime with OS threads.

Now, we’ve created our own async universe consisting of coro/wait, our own Future trait, our own Waker definition, and our own runtime. I’ve made sure that we don’t stray away from the core ideas behind asynchronous programming in Rust so that everything is directly applicable to async/await, Future traits, Waker types, and runtimes in day-to-day programming.

By now, we’re in the final stretch of this book. The last chapter will finally convert our example to use the real Future trait, Waker, async/await, and so on instead of our own versions of it. In that chapter, we’ll also reserve some space to talk about the state of asynchronous Rust today, including some of the most popular runtimes, but before we get that far, there is one more topic I want to cover: pinning.

One of the topics that seems hardest to understand, and most unlike anything in other languages, is the concept of pinning. When writing asynchronous Rust, you will at some point have to deal with the fact that futures in Rust must be pinned before they're polled.

So, the next chapter will explain pinning in Rust in a practical way so that you understand why we need it, what it does, and how to do it.

However, you absolutely deserve a break after this chapter, so take some fresh air, sleep, clear your mind, and grab some coffee before we enter the last parts of this book.
