If you run the example on Windows, you’ll see that you get two Schedule other tasks messages in a row. The reason is that Windows emits an extra event when the TcpStream is dropped on the server end, which doesn’t happen on Linux. Filtering out these events is quite simple, but we won’t do that here since it’s an optimization our example doesn’t need in order to work.
The thing to note here is how many times we printed Schedule other tasks. We print this message every time we poll and get NotReady. In the first version, we printed it every 100 ms, but only because we added a delay on each iteration to avoid being overwhelmed with printouts. Without that delay, our CPU would spend 100 % of its time polling the future.
The delay has a cost, though: it adds latency, even if we make it much shorter than 100 ms, since we can’t respond to events the moment they happen.
Our new design makes sure that we respond to events as soon as they’re ready, and we do no unnecessary work.
So, by making these minor changes, we have already created a much better and more scalable version than we had before.
This version is fully single-threaded, which keeps things simple and avoids the complexity and overhead of synchronization. When you use Tokio’s current-thread scheduler, you get a scheduler that is based on the same idea as we showed here.
However, there are also some drawbacks to our current implementation, and the most noticeable one is that it requires very tight integration between the reactor and executor parts of the runtime, both centered on Poll.
We want to yield to the OS scheduler when there is no work to do and have the OS wake us up when an event has happened so that we can progress. In our current design, this is done through blocking on Poll::poll.
Consequently, both the executor (scheduler) and the reactor must know about Poll. The downside is that if you’ve created an executor that suits a specific use case perfectly and want to let users pair it with a different reactor that doesn’t rely on Poll, you can’t.
More importantly, you might want to run multiple different reactors that wake up the executor for different reasons. You might find that there is something that mio doesn’t support, so you create a different reactor for those tasks. How are they supposed to wake up the executor when it’s blocking on mio::Poll::poll(…)?
To give you a few examples, you could use a separate reactor for handling timers (for example, when you want a task to sleep for a given time), or you might want to implement a thread pool for handling CPU-intensive or blocking tasks as a reactor that wakes up the corresponding future when the task is ready.
To solve these problems, we need loose coupling between the reactor and executor parts of the runtime: a way to wake up the executor that isn’t tied to a single reactor implementation.
Let’s look at how we can solve this problem by creating a better runtime design.