2021-04-06: Interfacing a low-level actor system to Rust async/await, part 1
I've been coding on never-blocking actor systems for maybe 8 years, and that is "home" to me and the natural way to go about things. But in Rust most of the async ecosystem is based around async/await. So in order to join that ecosystem and make use of some of those crates, I need to interface my actor runtime to async/await. So Stakker needs to become an async/await executor.
So inspired by the Async Foundations Visioning exercise, I'm documenting this process to provide some hard data for a possible status quo story about interfacing to async/await from a foreign runtime, and perhaps to highlight what is needed to better support executor-independence.
Contents:
- Ground rules
- Impressions from an actor perspective
- Mapping between Ret and Future
- Mapping between Fwd and Stream
- State of play
- Next steps
Ground rules
First of all, here are the relevant characteristics of the runtime that I'm interfacing from:
- Never-blocking. This means that all events or messages must be handled by the actor immediately on delivery, and the runtime delivers messages ASAP. The actor can't temporarily block its queue whilst waiting for some external process to complete, nor selectively accept just certain types of messages, as some actor systems allow. This may seem limiting but actually it works out fine in practice, not least because there can't be deadlocks in the messaging layer. So I don't anticipate this being a big problem for interfacing to async/await. Note that in this runtime an actor message IS an event which IS an asynchronous actor call which IS an FnOnce closure on the queue. They are equivalent (a toy sketch of this queue model follows the list).
- No futures or promises. Everything is imperative and direct. You simply make an asynchronous call (i.e. conceptually send a message) when you have something to communicate to another actor. If you want to be notified of something or receive data or a response at some point in the future, you provide a callback in the form of a Ret or Fwd instance. Ret is effectively the opposite of a Future, i.e. the other end of the conceptual pipe, passing a result back to the code that requested it, and Fwd is the opposite of Stream. So the problem is to interface Ret and Fwd to the common async/await traits. Note that Fwd and Ret handlers run inline at the callsite, but typically result in asynchronous calls (i.e. FnOnce closures) being pushed to the queue.
- Anything might fail. It is expected that actors may fail and be restarted, and the rest of the actor system should continue running fine. This is normal operation. This raises questions about how to deal with failure when async/await code is waiting for data from an actor that goes away.
- Single-threaded. Stakker makes a conscious choice to optimise for single-threaded operation and insists that load-balancing etc be done at a higher level. This encourages load-balancing of larger units of work, which should improve parallel performance when several Stakker runtimes work in parallel. This might cause some problems because async/await seems to be oriented around multi-threaded operation.
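To make the first point concrete, here is a toy sketch of that queue model, using only the standard library. This is purely illustrative and is not Stakker's actual implementation; the Runtime type and its methods are invented for this example.

```rust
use std::collections::VecDeque;

// A toy "never-blocking" queue: every actor call is just an FnOnce
// closure pushed onto the queue and run as soon as the queue is
// drained (illustration only, not Stakker's actual internals)
struct Runtime {
    queue: VecDeque<Box<dyn FnOnce()>>,
}

impl Runtime {
    fn new() -> Self {
        Runtime { queue: VecDeque::new() }
    }

    // "Send a message": defer a call by queuing a closure
    fn call(&mut self, f: impl FnOnce() + 'static) {
        self.queue.push_back(Box::new(f));
    }

    // Drain the queue, handling every queued call immediately
    fn run(&mut self) {
        while let Some(f) = self.queue.pop_front() {
            f();
        }
    }
}

fn main() {
    let mut rt = Runtime::new();
    rt.call(|| println!("first actor call"));
    rt.call(|| println!("second actor call"));
    rt.run();
}
```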
The characteristics of the target ecosystem (Rust async/await) presumably don't need describing.
Impressions from an actor perspective
Futures and streams
First of all, futures seem weird as a concept. You want a result and
you effectively get given an IOU. What use is that? What purpose
does a future serve? Why can't the other end just wait and pass us
the final result when it is done, instead of giving us a proxy for the
result? But then I realized that a future is effectively a temporary
mailbox. If the receiving code does not already have some kind of a
mailbox, i.e. some concept of a component and a way for events to be
delivered to that component, then this may be the only way to get the
response delivered. However Stakker has no need for futures as it
already has a means for messages to be delivered asynchronously to a
destination. So Stakker works the other way around: A Ret
sends a value to an end-point, whereas a Future
is held by an
end-point to receive a value.
Stream
in futures-core seems to work similarly, i.e. a
Stream
acts as a mailbox where values will be received by an
end-point. Contrast this with Fwd
which sends a stream of values
to an end-point, i.e. conceptually Fwd
is at the opposite end of
the pipe. Both Stream
and Future
operate on a "pull" model.
The Stakker primitives on the other hand are clearly "push"
operations. So this is a difference in approach.
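To illustrate the contrast, here is a cut-down sketch of the "push" style. ToyRet is an invented stand-in, far simpler than the real Ret, but it shows the essential shape: the consumer hands over a destination up front, and the producer pushes the result into it, so no placeholder mailbox object is needed.

```rust
// ToyRet: an invented, cut-down stand-in for Ret -- just a one-shot
// callback that the producer pushes the result (or None on failure)
// into when it is ready
struct ToyRet<T>(Box<dyn FnOnce(Option<T>)>);

impl<T> ToyRet<T> {
    fn new(f: impl FnOnce(Option<T>) + 'static) -> Self {
        ToyRet(Box::new(f))
    }

    // The producer end: push the final value to the destination
    fn ret(self, value: Option<T>) {
        (self.0)(value)
    }
}

fn main() {
    // The consumer provides its destination up front ...
    let ret = ToyRet::new(|v| println!("got {:?}", v));
    // ... and the producer pushes when the result exists; there is no
    // placeholder "mailbox" object for the consumer to hold and poll
    ret.ret(Some(42));
}
```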
Forced use of RefCell
The poll
method of the Future
trait seems like a narrow door in
a wall between two bodies of code. There is no way to do
qcell/GhostCell-style statically-checked cell borrowing within a
Future, because there is no way to communicate an active borrow up
through the poll
calls from the runtime. Given that, the path of
least resistance leads to using RefCell::borrow_mut, which IMHO is
a bad habit to get into. I found myself writing "Borrow-safety: ..."
comments to justify my use of borrow_mut()
and how/why it was going
to be panic-free, just as if I was dealing with unsafe. (It's hard
to go back to manually-verified cell borrowing once you've got used to
statically-checked cell borrowing ...)
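Here is a sketch of the kind of code this leads to, with hypothetical types: shared state behind Rc&lt;RefCell&lt;...&gt;&gt;, a borrow_mut() inside poll, and a "Borrow-safety:" comment to justify it.

```rust
use std::cell::RefCell;
use std::future::Future;
use std::pin::Pin;
use std::rc::Rc;
use std::task::{Context, Poll};

// Hypothetical state shared between the actor side and the future
struct Shared {
    value: Option<u32>,
}

struct ValueFuture {
    shared: Rc<RefCell<Shared>>,
}

impl Future for ValueFuture {
    type Output = u32;

    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        // Borrow-safety: this is the only borrow of `shared` on this
        // call path, and nothing that could re-borrow it is called
        // while the borrow is held, so `borrow_mut` cannot panic
        let mut shared = self.shared.borrow_mut();
        match shared.value.take() {
            Some(v) => Poll::Ready(v),
            // A real implementation would also store _cx.waker() here
            // so the task can be woken when the value is filled in
            None => Poll::Pending,
        }
    }
}
```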
Could this work differently and still be executor-independent? Maybe.
The std::task::Context
is where you'd have to put a borrow of a
cell-owner (or "brand" owner), but then it would have to be built into
the standard library and be one that all executors could support. For
QCell, TCell and TLCell,
the poll
signature is already
adequate: the Context<'_>
already indicates that it can contain
borrows of other things. However for LCell
or GhostCell, it would
also need an <'id>
added to the signature. The GhostCell style has
least restrictions, but unfortunately adding <'id>
to all poll
implementations would be a difficulty. Could the compiler derive this
automatically? I would be totally in favour of Context
adding
GhostCell-like cell borrowing if the compiler didn't require the
<'id>.
In Stakker, I pass active borrows to cell-owners up through all
the calls, which allows statically-checked access to two independent
classes of data: both actor-state and Share-state. But I have
full control in Stakker and I don't have to conform to any
external traits, nor worry about compatibility with other actor
runtimes. To allow statically-checked cell borrowing in async/await,
the standard library would have to adopt one single
qcell/GhostCell-like solution for std::task::Context
-- and
probably this is not an easy decision right now.
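For comparison, this is roughly what statically-checked cell borrowing looks like outside of async code, assuming the qcell crate's QCellOwner/QCell API: every access goes through a borrow of the owner, so conflicting borrows are rejected at compile time instead of panicking at runtime.

```rust
use qcell::{QCell, QCellOwner};

fn main() {
    let mut owner = QCellOwner::new();
    let cell = QCell::new(&owner, 100u32);

    // All access goes through a borrow of the owner, so conflicting
    // mutable borrows are caught by the compiler, not at runtime
    *owner.rw(&cell) += 1;
    let v = *owner.ro(&cell);
    println!("value = {}", v);
}
```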
Cache-efficiency
Rust async/await seems like it might be more cache-efficient than Stakker. Since the code is "pull"-based, it will keep on pulling data until there is no more data immediately available. So that exhausts a single resource in one go, whilst all the related code is still in cache. On the other hand, Stakker processes actor calls in submission order. So if the input events are interleaved, then so will be the processing. However Stakker can still do bulk operations, e.g. if what is queued is a notification to examine a resource rather than the individual chunks of data, then the entire resource can still be flushed in one go. There are probably pros and cons of both approaches.
Complexity
It may be that the complexity behind async/await is the minimum necessary complexity to get the job done, but it doesn't seem very simple or elegant on the surface. In particular working with pinning just seems really awkward. Maybe this is conceptually elegant underneath and it's just the initial implementation that is a bit rough, but I haven't got to that point with it yet.
Multi-threaded
The standard library Waker
is Send, so this wake-up mechanism
is obviously not designed for efficient single-threaded use. Given
that I'm writing a single-threaded executor, that is an unacceptable
cost, so I implemented a separate wake-up mechanism that
Stakker-specific glue code can use instead. So this means that
where async/await code is passing data to or from actor code, no
synchronization operations (atomics, mutexes, etc) are required at
all. I will later add a separate spawn_with_waker
call to spawn
with a traditional Waker
where that is required, e.g. where some
async code spawns threads and needs to send wake-up notifications back
across threads to the Stakker thread.
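The cost argument rests on the fact that the standard Waker is Send (and Sync), which can be checked at compile time:

```rust
use std::task::Waker;

// Compile-time check of the property described above: Waker must be
// usable across threads even on a purely single-threaded executor
fn assert_send_sync<T: Send + Sync>() {}

fn main() {
    assert_send_sync::<Waker>();
}
```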
Mapping between Ret and Future
Conceptually Ret
and Future
are like opposite ends of the same
pipe, so it's quite natural to interface the two together. There are
a few combinations:
- Spawn: Future → Ret: Connects an existing Future and Ret together and runs the future to completion, passing the final value to the Ret. Allows flow of data from async/await to the actor system.
- Push pipe: Ret → Future: Create a new Future/Ret pair where the future will resolve to the value passed to the Ret, as soon as that value is provided. Allows flow of data from the actor system to async/await (a minimal sketch of such a push pipe follows this list).
- Pull pipe: Future ⇄ Ret/Ret: Create a Future which, when first polled, sends a message to the provided Ret requesting data and providing another Ret to return it with; when that Ret is responded to, the future resolves to the returned value. This is like a push pipe, except that data doesn't have to be generated until it is needed.
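As an illustration of the "push pipe" case, here is a minimal single-threaded sketch: PipeSender plays the role of the Ret end and PipeFuture the async/await end. These names and this structure are invented for illustration and are not the actual stakker_async_await implementation (which, among other things, also has to handle the drop-means-failure case discussed below).

```rust
use std::cell::RefCell;
use std::future::Future;
use std::pin::Pin;
use std::rc::Rc;
use std::task::{Context, Poll, Waker};

// Shared one-shot slot between the two ends of the pipe
struct Slot<T> {
    value: Option<T>,
    waker: Option<Waker>,
}

// Plays the role of the Ret end: push a value in when it is ready
struct PipeSender<T>(Rc<RefCell<Slot<T>>>);

// Plays the role of the async/await end: resolves to the pushed value
struct PipeFuture<T>(Rc<RefCell<Slot<T>>>);

impl<T> PipeSender<T> {
    fn send(self, value: T) {
        let waker = {
            let mut slot = self.0.borrow_mut();
            slot.value = Some(value);
            slot.waker.take()
        };
        // Wake outside the borrow, in case waking re-enters poll on
        // this same thread
        if let Some(waker) = waker {
            waker.wake();
        }
    }
}

impl<T> Future for PipeFuture<T> {
    type Output = T;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<T> {
        let mut slot = self.0.borrow_mut();
        match slot.value.take() {
            Some(v) => Poll::Ready(v),
            None => {
                // Not pushed yet: remember the waker so the sender can
                // wake this task when the value arrives
                slot.waker = Some(cx.waker().clone());
                Poll::Pending
            }
        }
    }
}

// Create a connected sender/future pair
fn pipe<T>() -> (PipeSender<T>, PipeFuture<T>) {
    let slot = Rc::new(RefCell::new(Slot { value: None, waker: None }));
    (PipeSender(slot.clone()), PipeFuture(slot))
}
```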
Within those combinations there are also choices about handling of
failure. Ret
has the property that if it is dropped (e.g. the
actor handling it fails), None
is sent back. There are two ways of
mapping that failure onto async/await:
- Drop the whole async/await task. This means that from the point of view of the async/await code, it will abruptly stop executing at whatever .await it was stuck on. However, the plus side is that no special handling of failure is required.
- Pass the error through, using Result<T, ActorFail> as the type returned by the Future. That way the task can handle the failure and continue executing (a sketch of what this looks like from the task's side follows this list).
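For the second option, the async/await side might look roughly like this; ActorFail is shown as a placeholder unit type and fetch is an invented example function, not part of any real API.

```rust
use std::future::Future;

// Placeholder unit type standing in for the real failure type
struct ActorFail;

// Invented example: the task sees the actor's failure as an Err and
// can recover or fall back instead of being dropped
async fn fetch(reply: impl Future<Output = Result<u32, ActorFail>>) {
    match reply.await {
        Ok(value) => println!("actor replied with {}", value),
        Err(ActorFail) => println!("actor failed before replying"),
    }
}
```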
Mapping between Fwd and Stream
First of all Stream
→ Fwd
can be supported with a
spawn-like operation, i.e. running the stream in the background until
it terminates, forwarding all values to the Fwd
. This is
straightforward.
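A sketch of that spawn-like operation, using the futures crate's StreamExt::next and a plain closure standing in for the Fwd; in the real glue this loop would be driven as a task on the Stakker executor rather than written as a free async fn.

```rust
use futures::stream::{Stream, StreamExt};

// Run a stream to completion, pushing every item into a callback that
// stands in for the Fwd
async fn forward_stream<S, T>(mut stream: S, fwd: impl Fn(T))
where
    S: Stream<Item = T> + Unpin,
{
    while let Some(item) = stream.next().await {
        fwd(item);
    }
    // Stream terminated; a real implementation would signal
    // end-of-stream to the actor side here
}
```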
For Fwd → Stream, it is more complex. There is a
fundamental difference in semantics between Fwd and Stream.
Fwd
is effectively just a connection between A and B, allowing an
endless stream of values to be pushed, whereas Stream
is a "pull"
connection and supports termination. Connecting a Fwd
to a
Stream
directly as a "push pipe" requires a queue in between,
because we can't force the owner of the Stream
to handle values if
it doesn't want to. So given the requirement for queuing and no
mechanism for backpressure, this is probably not the ideal setup in
most cases.
However a "push pipe" can be made more manageable with a Fwd<()>
callback that is called whenever the queue becomes empty.
That way the sender can refill batches of data when requested. If the
sender pushes just one value each time it is called, then the "push
pipe" has become a "pull pipe". So this allows the full spectrum of
implementations. To handle termination, we can require that the
Fwd
passes Option<T>
values the same as the Stream
does.
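Here is a sketch of that queue-plus-refill arrangement, with invented names, implementing the futures Stream trait directly: the sender pushes Option&lt;T&gt; items (None terminates), and a refill callback standing in for the Fwd&lt;()&gt; is invoked whenever the stream finds its queue empty.

```rust
use std::cell::RefCell;
use std::collections::VecDeque;
use std::pin::Pin;
use std::rc::Rc;
use std::task::{Context, Poll, Waker};

use futures::stream::Stream;

// Queue shared between the Fwd-like sender and the Stream
struct Inner<T> {
    queue: VecDeque<Option<T>>,
    waker: Option<Waker>,
}

struct QueueSender<T>(Rc<RefCell<Inner<T>>>);

struct QueueStream<T> {
    inner: Rc<RefCell<Inner<T>>>,
    refill: Box<dyn Fn()>, // stands in for the Fwd<()> refill callback
}

impl<T> QueueSender<T> {
    // Push Some(value) for data, or None to terminate the stream
    fn send(&self, item: Option<T>) {
        let waker = {
            let mut inner = self.0.borrow_mut();
            inner.queue.push_back(item);
            inner.waker.take()
        };
        // Wake outside the borrow in case the wake re-enters poll_next
        if let Some(waker) = waker {
            waker.wake();
        }
    }
}

impl<T> Stream for QueueStream<T> {
    type Item = T;

    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<T>> {
        let mut inner = self.inner.borrow_mut();
        match inner.queue.pop_front() {
            // Some(Some(v)) is a value, Some(None) is normal termination
            Some(item) => Poll::Ready(item),
            None => {
                // Queue is empty: remember the waker, release the borrow,
                // then ask the sender side to refill (it may push
                // synchronously, which takes and wakes the stored waker)
                inner.waker = Some(cx.waker().clone());
                drop(inner);
                (self.refill)();
                Poll::Pending
            }
        }
    }
}

// Create a connected sender/stream pair with the given refill callback
fn queue_stream<T>(refill: impl Fn() + 'static) -> (QueueSender<T>, QueueStream<T>) {
    let inner = Rc::new(RefCell::new(Inner { queue: VecDeque::new(), waker: None }));
    (QueueSender(inner.clone()), QueueStream { inner, refill: Box::new(refill) })
}
```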
Then there is the question of handling actor failures. If the last
reference to the Fwd
goes away before the final None
is sent
then that's assumed to be an irregular termination of the stream. As
for Future
there are two ways to handle this:
- Drop the whole async/await task. This means that poll_next can return Option<T> as normal, and the async/await code doesn't have to do any special handling of failure.
- Pass the error through, meaning that the return from poll_next will be Option<Result<T, ActorFail>>. So normally you'd get zero or more Some(Ok(value)) values, then a None for termination of the stream. However, in the case of actor failure you'd instead get Some(Err(ActorFail)) then None to terminate the stream.
State of play
Spawning of futures and streams, plus the actor interfaces to futures and streams, is running fine, with basic tests, in the current version of the stakker_async_await crate.
So far things have not been too bad. Figuring out how best to implement a single-threaded task wake-up mechanism within Stakker took a while, trying various different methods. Pinning was awkward but manageable after going through the docs. Finding the best mapping between the two sides took some consideration.
Next steps
The aim is to attempt to support all the executor-independent async/await interfaces available in the ecosystem, to see how that goes and what the differences are. Also to see how much executor-independent code is out there, and what precisely it requires of the runtime.
So these crates will be looked at, to see how much can be supported:
- futures crate
- futures-lite crate
- tokio, to see if any Tokio-specific traits can be interfaced
- async_executors crate
- agnostik crate
Are there any other crates out there that would be worth looking at?
In addition, it would be good to be able to make asynchronous actor calls from async/await code written specifically to run on Stakker. But that will probably need quite a bit of work to get the ergonomics right.