Software design similar to the chain of responsibility or decorator pattern, using specialization: are there simpler alternatives?

Hi all. I'm working on the rsynth crate for developing real-time audio applications. This crate can be used to develop audio effects (like reverb, ...) and virtual music instruments. A simplified version of the current design is as follows:

```rust
trait Plugin<Context> {
    fn render_audio(
        &mut self,
        audio_input_buffers: &[&[f32]],
        audio_output_buffers: &mut [&mut [f32]],
        context: Context,
    );
}

trait EventHandler<Event, Context> {
    fn handle_event(&mut self, event: Event, context: Context);
}
```

Then there is the concept of "middleware".

E.g. the polyphonic middleware.

The middleware is a struct `Polyphonic<Event: PolyphonicEvent, Voice: Plugin>` (where `PolyphonicEvent` is a trait) that holds a list of voices, each implementing `Plugin`. `Polyphonic` itself implements `Plugin` by having all its voices render audio and mixing the results together. It also implements `EventHandler<E, _>`: if `E` is the type parameter `Event`, the event is dispatched only to the relevant voice; otherwise, it is simply broadcast to all voices.
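To make this concrete, here is a rough, self-contained sketch of how such a `Polyphonic` middleware could look. This is my own illustration, not rsynth's actual code: the traits are simplified (the context is passed by mutable reference), the scratch buffers are allocated on the fly rather than pre-allocated, and names like `NoteEvent` and `ConstVoice` are invented for the example.

```rust
// Simplified versions of the traits from above.
trait Plugin<Context> {
    fn render_audio(
        &mut self,
        inputs: &[&[f32]],
        outputs: &mut [&mut [f32]],
        context: &mut Context,
    );
}

trait EventHandler<Event, Context> {
    fn handle_event(&mut self, event: Event, context: &mut Context);
}

// The middleware: a list of voices, each a `Plugin` itself.
struct Polyphonic<V> {
    voices: Vec<V>,
}

impl<Context, V: Plugin<Context>> Plugin<Context> for Polyphonic<V> {
    fn render_audio(
        &mut self,
        inputs: &[&[f32]],
        outputs: &mut [&mut [f32]],
        context: &mut Context,
    ) {
        // Zero the outputs, then let every voice render into a scratch
        // buffer and add (mix) the result into the outputs.
        for out in outputs.iter_mut() {
            for sample in out.iter_mut() {
                *sample = 0.0;
            }
        }
        let buffer_len = outputs.first().map(|o| o.len()).unwrap_or(0);
        // NOTE: allocated here only for brevity; in a real-time context
        // this scratch memory would be pre-allocated.
        let mut scratch = vec![vec![0.0f32; buffer_len]; outputs.len()];
        for voice in self.voices.iter_mut() {
            for buf in scratch.iter_mut() {
                for sample in buf.iter_mut() {
                    *sample = 0.0;
                }
            }
            let mut views: Vec<&mut [f32]> =
                scratch.iter_mut().map(|b| b.as_mut_slice()).collect();
            voice.render_audio(inputs, &mut views, context);
            for (out, sc) in outputs.iter_mut().zip(scratch.iter()) {
                for (o, s) in out.iter_mut().zip(sc.iter()) {
                    *o += *s;
                }
            }
        }
    }
}

// A hypothetical polyphonic event that knows which voice it targets.
struct NoteEvent {
    voice_index: usize,
}

// The "specialized" impl: a `NoteEvent` is dispatched to one voice only;
// other event types would be broadcast to every voice instead.
impl<Context, V: EventHandler<NoteEvent, Context>> EventHandler<NoteEvent, Context>
    for Polyphonic<V>
{
    fn handle_event(&mut self, event: NoteEvent, context: &mut Context) {
        let index = event.voice_index;
        self.voices[index].handle_event(event, context);
    }
}

// A trivial voice for demonstration: it just outputs a constant value.
struct ConstVoice(f32);

impl Plugin<()> for ConstVoice {
    fn render_audio(&mut self, _: &[&[f32]], outputs: &mut [&mut [f32]], _: &mut ()) {
        for out in outputs.iter_mut() {
            for sample in out.iter_mut() {
                *sample = self.0;
            }
        }
    }
}
```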

Another example is the `TimeSplitter` middleware.

Some events have a time offset relative to the beginning of the next audio buffer. `TimeSplitter<Event, Child: Plugin<_> + EventHandler<Event, _>>` chunks the audio buffer into pieces, such that each event of type `Event` happens at the beginning of a chunk. It implements `EventHandler<Event, _>` by putting all events in a queue, and `EventHandler<E, _>` where `E` is not `Event` by delegating to the child.
`Plugin` is implemented by chunking the audio buffers based on the offsets of the events in the queue. Then, for each chunk, `handle_event` is first called on the child with the corresponding event, and then `render_audio` for the chunk.
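The chunking step can be illustrated in isolation. This is a sketch of the idea, not rsynth's actual implementation; the names are mine:

```rust
// A queued event together with its sample offset into the next buffer.
struct Timed<E> {
    offset: usize,
    event: E,
}

/// Given the buffer length and the queued events (sorted by offset),
/// return the chunks as half-open `(start, end)` ranges. Each offset that
/// falls strictly inside the buffer starts a new chunk, so every queued
/// event lands exactly at a chunk boundary.
fn chunk_boundaries<E>(buffer_len: usize, queue: &[Timed<E>]) -> Vec<(usize, usize)> {
    let mut chunks = Vec::new();
    let mut start = 0;
    for timed in queue {
        if timed.offset > start && timed.offset < buffer_len {
            chunks.push((start, timed.offset));
            start = timed.offset;
        }
    }
    chunks.push((start, buffer_len));
    chunks
}
```

For each chunk, the middleware would then call `handle_event` on the child with the events at that offset, followed by `render_audio` with the corresponding sub-slices of the audio buffers.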

Middleware has a "child" that implements `Plugin`, and the middleware itself implements `Plugin` by doing some pre-processing and/or post-processing, possibly adding something to the context, and passing the rest of the rendering to the child.

Middleware typically implements `EventHandler<E, _>` by delegating to the child, but some middleware handles certain types in a special way. E.g. `Polyphonic` simply broadcasts events to all its "voices", except for events of one type, which are dispatched only to the relevant voice. Similarly, `TimeSplitter` delegates all events except those of one type: these are queued until `render_audio` is called.

This is the first usage of specialization: the middleware handles a special event type in a special way and delegates the other event types to its child(ren).

Another use of specialization is for the context. Middleware can add something to the context as follows: the context is wrapped in another type that, in addition to the wrapped context, contains a new field. The wrapping context implements a trait similar to `Borrow<T>`, returning the additional field when `T` is the type of that field, and delegating to the wrapped context for all other types. To abstract this away from the audio developer using the rsynth crate, traits are defined, e.g.:

```rust
trait WithSpecialX {
    fn get_x(&self) -> &X;
}

impl<Context> WithSpecialX for Context
where
    Context: TraitSimilarToBorrow<X>,
{
    fn get_x(&self) -> &X {
        // Pseudocode: `TraitSimilarToBorrow` stands in for the
        // Borrow-like trait described above.
        self.borrow()
    }
}
```
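A minimal, self-contained sketch of the wrapping idea, with invented names standing in for rsynth's actual types, also shows exactly where specialization is needed:

```rust
// The extra data a middleware adds to the context.
struct X(f32);

// A stand-in for rsynth's Borrow-like trait: "this context can hand out a T".
trait ContextBorrow<T> {
    fn borrow_part(&self) -> &T;
}

// A wrapping context: the inner context plus one extra field.
struct WithX<C> {
    x: X,
    inner: C,
}

// The wrapper hands out the extra field itself...
impl<C> ContextBorrow<X> for WithX<C> {
    fn borrow_part(&self) -> &X {
        &self.x
    }
}

// ...but delegating `ContextBorrow<T>` for every *other* `T` to `inner` is
// exactly where specialization is needed: the blanket impl below overlaps
// with the impl above, so it is rejected on stable Rust.
//
// impl<T, C: ContextBorrow<T>> ContextBorrow<T> for WithX<C> {
//     fn borrow_part(&self) -> &T {
//         self.inner.borrow_part()
//     }
// }
```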

An example of middleware that changes the context is the `Envelope` middleware, which adds a smooth envelope to the context.

If you are not familiar with audio development: the situation is similar to web services, where an event corresponds to an HTTP request and the context can contain information like the identity of the logged-in user. One difference is that web services typically have only one request type, whereas audio software has different event types: a key pressed on the keyboard, something changed in the UI, ...

It's important to note that for real-time audio, memory allocation on the heap is "not allowed" in the real-time part of the application: all heap memory needs to be pre-allocated.
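In practice this means scratch memory is allocated when the plugin is set up and only reused on the audio thread. A tiny sketch of the idiom (names are mine, not rsynth's):

```rust
/// A voice that needs scratch memory while rendering.
struct MyVoice {
    // Allocated once, outside the real-time thread, for the maximum
    // buffer size the host will ever ask for.
    scratch: Vec<f32>,
}

impl MyVoice {
    fn new(max_buffer_size: usize) -> Self {
        MyVoice {
            scratch: vec![0.0; max_buffer_size],
        }
    }

    /// Called on the real-time thread: only slices into the pre-allocated
    /// buffer, never allocates or resizes.
    fn render(&mut self, output: &mut [f32]) {
        let scratch = &mut self.scratch[..output.len()];
        // ... do some work in `scratch` ...
        output.copy_from_slice(scratch);
    }
}
```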

The advantage of the design described above is that it's convenient for the audio developer using the rsynth crate for their application. To get polyphony, you only need to wrap your plugin in the `Polyphonic` struct (and implement `EventHandler` for the relevant types). To make a plugin sample-accurate, you don't need to take special care of timed events; you simply wrap your plugin in the `TimeSplitter` middleware. To smooth parameters and avoid clicking, you just use the `Envelope` middleware.

There are two downsides:

  1. We're taking a bet on specialization.
  2. It's complex to write middleware.

Specialization has not landed yet at the time of writing. It's included in the 2019 roadmap, but there is no guarantee that it will stabilize this year. In the meanwhile, the rsynth crate uses the syllogism crate, which was developed specifically for rsynth and provides a work-around so that you can use some form of specialization in stable Rust. syllogism is not intended to be used forever, only until specialization has stabilized. There are some caveats, however. The syllogism crate allows specializing for specific types, whereas -- if I understand it correctly -- the current proposal for the specialization feature is about specializing for specific behavior, i.e. trait-based rather than type-based specialization. (I don't fully understand the proposal, but I think specializing for a concrete type is also allowed, just not for a type that is a type parameter.) For most middleware this is not a problem, but it is for the `TimeSplitter` middleware: because it needs to queue and store the special events, it really needs to specialize for a concrete type, rather than for any type that implements a special trait.
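For reference, this kind of work-around looks roughly as follows. This is my own illustration of the pattern, not syllogism's exact API: the dispatch decision is moved into a trait that every event type implements explicitly for the special type, which is also why it carries boilerplate.

```rust
// The outcome of asking an event: "are you the special type, or not?"
enum Distinction<Special, General> {
    Special(Special),
    General(General),
}

// Each event type answers explicitly for a given special type `S`.
trait Specialize<S>: Sized {
    fn specialize(self) -> Distinction<S, Self>;
}

struct SpecialEvent;
struct OtherEvent;

impl Specialize<SpecialEvent> for SpecialEvent {
    fn specialize(self) -> Distinction<SpecialEvent, Self> {
        Distinction::Special(self)
    }
}

// Every non-special event type needs an impl like this one. A blanket impl
// would overlap with the impl above under stable Rust's coherence rules,
// which is exactly the boilerplate cost of the work-around.
impl Specialize<SpecialEvent> for OtherEvent {
    fn specialize(self) -> Distinction<SpecialEvent, Self> {
        Distinction::General(self)
    }
}

struct Middleware;

impl Middleware {
    // Generic over the event type, yet able to treat `SpecialEvent`
    // differently, e.g. by queueing it.
    fn handle<E: Specialize<SpecialEvent>>(&mut self, event: E) -> &'static str {
        match event.specialize() {
            Distinction::Special(_special) => "queued",
            Distinction::General(_general) => "delegated",
        }
    }
}
```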

In the current implementation of the rsynth crate, a feature flag switches between the syllogism-based specialization and specialization as currently implemented in nightly Rust. (Note that the current nightly implementation will most likely not be stabilized in this form, so some changes will be needed there.)

Another downside is the added complexity of developing new middleware. I sketched a simplified design above, but there are some limitations that need to be taken into account and that make it rather complex. For instance, with the `Envelope` middleware, which adds envelope properties to the context, it is common to have multiple envelopes, e.g. one for the volume and another for the frequency. If you want to combine these into one context, you have to apply some tricks so that the user can select the correct envelope.

When I'm writing this middleware, it's not unusual that I need to think for days or even weeks about how to implement it. I am wondering: "Am I following the right path? Am I making it too complicated?" I have the feeling that I am not removing any complexity, but just shifting it from the crate user to the crate author, and that the resulting complexity is even bigger.

So my question for you is: is there a simpler design that I can use?

Thinking about it a little more, I believe the stated advantage (convenience) does not outweigh the added complexity.

I'm going to remove the chain of responsibility pattern from the rsynth crate and let the audio developer using the crate "route" the events "manually". This is a little more work for the developer, but it's probably clearer what's going on, because there's no "magic" and the developer has a better grasp of what's happening. It's also a more flexible design.
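For comparison, "manual routing" could look something like this sketch (all names invented for the example): the application defines one concrete event enum and routes explicitly, with no specialization involved.

```rust
// One concrete event type for the whole application.
enum Event {
    NoteOn { voice: usize },
    ParameterChange { value: f32 },
}

struct MySynth {
    volume: f32,
    active_voice: Option<usize>,
}

impl MySynth {
    // All routing is an ordinary, explicit `match`: no specialization,
    // no hidden dispatch, and easy to step through in a debugger.
    fn handle_event(&mut self, event: Event) {
        match event {
            Event::NoteOn { voice } => self.active_voice = Some(voice),
            Event::ParameterChange { value } => self.volume = value,
        }
    }
}
```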
