Twitter’s 2013 paper Your Server as a Function presents a functional model for distributed server software, based on the primitives of futures, services, and filters. The paper touches on the ideas behind high-level Rust APIs like tokio and futures, but it’s particularly relevant to Actix-net, which provides the building blocks for the Rust micro web framework Actix-web.
I’ll start by summarizing the paper, then get into how it relates to Actix-net.
Your Server as a Function
The paper mostly details the Finagle runtime library, but outlines some general principles.
As the title suggests, the gist is to treat servers as functions – composable building blocks that are themselves made by composing functions. Some key quotes to set the tone:
Server operations (e.g. acting on an incoming RPC or a timeout) are defined in a declarative fashion,
Don’t worry about low-level primitives like threads, only data flow. Servers are the primary building blocks, and lower-level concerns are handled by the runtime engine.
relating the results of the (possibly many) subsequent sub-operations through the use of future combinators
Asynchronous results compose, feeding into each other and building up greater functionality.
Futures, Services, and Filters
These three concepts fit together to create the system: Futures represent the results of asynchronous operations. Services are asynchronous functions, representing system boundaries. Filters are asynchronous functions that compose to build services, and can be reused in multiple services.
Futures are the familiar concept that’s made it into many popular languages, as either futures or promises.
They compose by feeding into another operation, resulting in another future. Multiple asynchronous operations, e.g. separating a query into requests to multiple servers, can also be combined into a single future.
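As a hedged sketch of what that composition looks like in Rust’s futures 0.1 style (the same style the Actix-net examples later in this post use), with made-up lookup functions standing in for calls to backend servers:

use futures::{future, Future};

// Hypothetical lookups standing in for requests to two backend servers.
fn lookup_user(id: u64) -> impl Future<Item = String, Error = ()> {
    future::ok(format!("user-{}", id))
}

fn lookup_timeline(id: u64) -> impl Future<Item = Vec<String>, Error = ()> {
    future::ok(vec![format!("tweet by user-{}", id)])
}

// One future feeds into the next with and_then, and map shapes the combined
// results into a single future covering both requests.
fn profile(id: u64) -> impl Future<Item = (String, Vec<String>), Error = ()> {
    lookup_user(id).and_then(move |user| {
        lookup_timeline(id).map(move |tweets| (user, tweets))
    })
}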
Services are just asynchronous functions, but they can by themselves represent either a client or a server:
val client: Service[HttpReq, HttpRep] = Http.newService("twitter.com:80")
val server = Http.serve(":80", { req: HttpReq => Future.value(HttpRep(Status.OK, req.body)) })
Generally, services take a request and return a future.
type Service[Req, Rep] = Req => Future[Rep]
A filter is used to augment a service; it takes the form:
type Filter[Req, Rep] = (Req, Service[Req, Rep]) => Future[Rep]
For example, this auth filter combines the method authReq and the argument service:
val auth: (HttpReq, Service[AuthHttpReq, HttpRep]) => Future[HttpRep] = {
  (req, service) =>
    authReq(req) flatMap { authReq =>
      service(authReq)
    }
}
With the andThen combinator, you can combine auth and authedService into a single service that performs auth and runs authedService with the result:
val service: Service[HttpReq, HttpRep] = auth andThen authedService
Concerns
Interrupts
Since a consumer cannot affect its producer, operations like cancellation are not possible in a pure dataflow model. Finagle uses interrupts to solve this: a consumer can send an interrupt to its producer. The producer remains a black box, its state not directly affected, but it can choose to stop its operations for performance reasons.
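As a loose, hedged analogy in futures 0.1 terms (this is not Finagle’s or actix-net’s API; oneshot and select2 are just one way to model the signal), the consumer holds a handle it can fire, and the producer decides for itself whether to stop:

use futures::sync::oneshot;
use futures::{future, Future};

// `interrupt` stands in for the consumer's signal; the long-running work is
// modelled as a future that never completes on its own.
fn producer(interrupt: oneshot::Receiver<()>) -> impl Future<Item = &'static str, Error = ()> {
    let work = future::empty::<&'static str, ()>();
    // select2 resolves with whichever side finishes first; when the interrupt
    // fires first, this producer chooses to abandon the work.
    work.select2(interrupt).then(|res| -> Result<&'static str, ()> {
        match res {
            // The work finished first: pass its value along.
            Ok(future::Either::A((value, _))) => Ok(value),
            // The interrupt (or an error) arrived first: stop early.
            _ => Ok("interrupted"),
        }
    })
}

fn main() {
    let (tx, rx) = oneshot::channel::<()>();
    let fut = producer(rx);
    // The consumer fires the interrupt; the producer's internal state stays
    // opaque, it simply decides to wind down.
    let _ = tx.send(());
    println!("{:?}", fut.wait()); // Ok("interrupted")
}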
Allocation
Future combinators allocate futures on the heap and, in practice, a single service can include a lot of futures:
recordHandletime
andThen traceRequest
andThen collectJvmStats
andThen parseRequest
andThen logRequest
andThen recordClientStats
andThen sanitize
andThen respondToHealthCheck
andThen applyTrafficControl
andThen virtualHostServer
The use of closures can be a problem, as a closure can inadvertently capture some shorter-lived value. Many of the problems described in the paper are solved by Rust’s borrow checker and lifetimes – e.g., not allowing the containing object to be accidentally captured.
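A hedged illustration of what the borrow checker enforces here (the names are invented for this post): a closure that would outlive the data it closes over is rejected unless ownership is explicitly moved into it.

// Returning the closure forces the capture question to be answered up front:
// without `move`, the closure would borrow `prefix` from the local frame and
// the compiler would reject returning it; with `move`, ownership travels with
// the closure instead of an accidental reference.
fn make_logger(prefix: String) -> impl Fn(&str) {
    move |msg| println!("{}: {}", prefix, msg)
}

fn main() {
    let log = make_logger("request".to_string());
    log("started");
}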
Comparison with other models
Dataflow programming is similar, but requires a determinacy that makes it unsuitable for systems programming; see this paper for a fuller treatment. Channels (as used in Go and Rust) remove Finagle’s constraints on data flow direction (producers can also be consumers, both sending and receiving), but are less easily composed.
Actix-net
Actix-net borrows terms and concepts heavily from “Your Server as a Function.” Here is an example that uses and_then combinators to compose a service that takes a stream, transforms it, logs it, and counts the number of connections:
let num = Arc::new(AtomicUsize::new(0));

fn_service(move |stream: Io<tokio_tcp::TcpStream>| {
    // Transform the plain TCP stream by performing a TLS handshake.
    SslAcceptorExt::accept_async(&acceptor, stream.into_parts().0)
        .map_err(|e| println!("Openssl error: {}", e))
})
.and_then(fn_service(logger))
.and_then(move |_| {
    // Count the accepted connection.
    let num = num.fetch_add(1, Ordering::Relaxed);
    println!("got ssl connection {:?}", num);
    future::ok(())
})
Each individual service acts on requests, fed into it by the stream produced in the first service.
and_then is an overloaded concept in Rust, usually seen on the Result and Option types. The signature of Service’s and_then maps very closely to Result::and_then, roughly (Result<T, E>, FnOnce(T) -> Result<U, E>) -> Result<U, E>. Instead of a plain value T, it’s a request/response object: a response from one service becomes the request for the next.
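For reference, the std analogy looks like this (only Result::and_then is real; the step names are made up):

// Hypothetical pipeline steps; each Ok value feeds the next step, and the
// first Err short-circuits the chain, mirroring how a service's Response
// becomes the next service's Request.
fn parse(input: &str) -> Result<i32, String> {
    input.trim().parse::<i32>().map_err(|e| e.to_string())
}

fn check_positive(n: i32) -> Result<i32, String> {
    if n >= 0 { Ok(n) } else { Err("negative".to_string()) }
}

fn handle(input: &str) -> Result<i32, String> {
    parse(input).and_then(check_positive)
}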
The source shows how and_then transforms a service into a new service (actually AndThen, which implements the Service trait) whose second half’s input (Request) is the first half’s output (Response). This is a powerful use of Rust’s traits: any function that can be converted into a service can then be converted into a type, AndThen, that explicitly implements the Service trait.
fn and_then<F, B>(self, service: F) -> AndThen<Self, B>
where
    Self: Sized,
    F: IntoService<B>,
    B: Service<Request = Self::Response, Error = Self::Error>,
{
    AndThen::new(self, service.into_service())
}
Rust futures
The library relies heavily on Rust futures, both for Services and for its runtime model (actix-server).
Services return a future, but are also themselves a kind of future, with poll_ready and call methods. In actix-server, workers poll the service for readiness, then call the service with incoming requests:
while let Some(msg) = conns.pop() {
    match self.check_readiness(false) {
        Ok(true) => {
            // The service is ready: hand it the incoming connection.
            let guard = self.conns.get();
            let _ = self.services[msg.token.0]
                .as_mut()
                .expect("actix net bug")
                .1
                .call((Some(guard), ServerMessage::Connect(msg.io)));
        }
        ...
    }
}
Service primitives implement this directly, like the OpensslConnectorService.
For a plain function, poll_ready always returns ready, and call will simply call the function, interpreting its return type as a future.
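A self-contained, hedged sketch of that contract (this is not actix-net’s actual trait or fn_service implementation, just the shape of the idea in futures 0.1 terms):

use std::marker::PhantomData;

use futures::{Async, Future, IntoFuture, Poll};

// A stripped-down Service trait with the poll_ready/call contract.
trait Service {
    type Request;
    type Response;
    type Error;
    type Future: Future<Item = Self::Response, Error = Self::Error>;

    fn poll_ready(&mut self) -> Poll<(), Self::Error>;
    fn call(&mut self, req: Self::Request) -> Self::Future;
}

// Wraps a plain function as a Service.
struct FnService<F, Req, Out> {
    f: F,
    _marker: PhantomData<(Req, Out)>,
}

impl<F, Req, Out> Service for FnService<F, Req, Out>
where
    F: FnMut(Req) -> Out,
    Out: IntoFuture,
{
    type Request = Req;
    type Response = Out::Item;
    type Error = Out::Error;
    type Future = Out::Future;

    // A plain function is always ready to be called.
    fn poll_ready(&mut self) -> Poll<(), Self::Error> {
        Ok(Async::Ready(()))
    }

    // Calling the service just calls the function and treats its return value
    // as a future.
    fn call(&mut self, req: Self::Request) -> Self::Future {
        (self.f)(req).into_future()
    }
}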
The event loop of the server ultimately relies on the implementations of the service building blocks themselves. Even through many layers of service-combinator abstraction, the IO implementation at the bottom of the chain drives the execution of the whole stack.