Design decision for a crate providing access to a peripheral

Hi there,

I'm looking for some advice, guidance, opinions or best practices :wink:

When designing a crate that provides an abstraction for a specific peripheral (e.g. UART, GPIO) of specific hardware, where this peripheral exists only once, where would you place the responsibility of ensuring there will only ever be one instance of the peripheral-access abstraction within the final binary?

I'd currently opt for a singleton provided by the crate itself, so that the crate user does not need to care and access to the peripheral is available through this singleton only. This, however, comes with some disadvantages.

Another option would be to hand over responsibility to the crate user, who would have to ensure they never create more than one instance of the structure provided by the crate.

Would there be a third approach? Like a static atomic bool in the peripheral crate that panics if a second instance is about to be created?

Any advice, tip or best practice (idiomatic way of doing this) is much appreciated.

Thanks in advance.

So, the way to tackle this is to observe that the peripheral is basically the same as a static mut, of sorts:

```rust
struct RawPeripheral(());

static mut RAW_PERIPHERAL: RawPeripheral = RawPeripheral(());

impl RawPeripheral {
    fn clear (self: &'_ mut Self) { /* ... */ }
    // etc.
}
```

Now, since access to the RAW_PERIPHERAL is very error-prone (unguarded mutable global state), the most typical pattern here is indeed that of using some kind of "singleton", although that term is quite overloaded and may refer to multiple distinct implementations.

The way I'd do it would be the following.

First of all, the key idea:

```diff
- static mut RAW_PERIPHERAL: RawPeripheral = RawPeripheral(());
+ static MUTEX_PERIPHERAL: Mutex<RawPeripheral> = const_mutex(RawPeripheral(()));
```

That way you do have a locking mechanism that guards the &mut accesses to the RawPeripheral needed to manipulate the API, and all is good.

Except that having to .lock() such a thing is a bit cumbersome:

  • people may try to lock the stuff in a finer-grained fashion, which is not optimal performance-wise;

  • the types involved (MutexGuard<'static, RawPeripheral>) are a bit unwieldy;

  • we are carrying an indirection (the address of the static) around at runtime, when the address of a static is "known at compile-time" (more on that below).
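To make this concrete, here is a sketch of what call sites end up looking like with the locked static (the names are illustrative; note that since Rust 1.63 std's Mutex::new is const, so plain std suffices for this sketch, without parking_lot's const_mutex):

```rust
use std::sync::{Mutex, MutexGuard};

struct RawPeripheral(());

impl RawPeripheral {
    fn clear(&mut self) { /* hardware access elided */ }
}

// Since Rust 1.63, `std::sync::Mutex::new` is `const`, so it can
// initialize a `static` directly.
static MUTEX_PERIPHERAL: Mutex<RawPeripheral> = Mutex::new(RawPeripheral(()));

fn clear_peripheral() {
    // Every call site has to go through the lock, with the unwieldy
    // `MutexGuard<'static, RawPeripheral>` type showing up everywhere:
    let mut guard: MutexGuard<'static, RawPeripheral> =
        MUTEX_PERIPHERAL.lock().unwrap();
    guard.clear();
}
```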

The correct pattern would be for users to get hold of a MutexGuard<'static, RawPeripheral> very early in the process, and keep it around / keep passing it around. The DerefMut of such a guard is "zero-cost", since it knows it has exclusive access to the static MUTEX_PERIPHERAL, a token which symbolically represents the contents of the peripheral.

So, to try and make people fall into the "pit of success" of correctly using our API, we make MUTEX_PERIPHERAL private, and expose a newtype wrapper around that MutexGuard (by construction, it's guaranteed that there will be at most one instance alive at a time):

```rust
use ::parking_lot::{const_mutex, Mutex, MutexGuard};

/* private */ pub(self)
static MUTEX_PERIPHERAL: Mutex<RawPeripheral> = const_mutex(RawPeripheral(()));

pub struct Peripheral(MutexGuard<'static, RawPeripheral>);

impl Peripheral {
    pub fn try_new ()
      -> Option<Peripheral>
    {
        MUTEX_PERIPHERAL.try_lock().map(Peripheral)
    }

    pub fn clear (self: &'_ mut Self)
    {
        self.0.clear()
    }
}
```
And at this point you basically already have a singleton pattern, except that const_mutex requires a parking_lot implementation, and that the Peripheral token holds a runtime pointer / indirection inside it, even though we know all such instance(s) will always point to that one static.

So we can optimize it a bit:

The singleton pattern (full snippet)

Assuming a raw API such as:

```rust
mod raw {
    pub unsafe fn clear ()
    {
        /* ... poke the hardware registers ... */
    }
}
```

we'd do:

```rust
use ::std::sync::atomic::{self, AtomicBool};

static SINGLETON_EXISTS: AtomicBool = AtomicBool::new(false);

pub struct Singleton(());

impl Singleton {
    pub fn try_new ()
      -> Option<Singleton>
    {
        if SINGLETON_EXISTS.swap(true, atomic::Ordering::Acquire) {
            // Some other `Singleton` is still alive.
            None
        } else {
            impl Drop for Singleton {
                fn drop (self: &'_ mut Singleton)
                {
                    SINGLETON_EXISTS.store(false, atomic::Ordering::Release);
                }
            }
            Some(Singleton(()))
        }
    }

    pub fn clear (self: &'_ mut Singleton)
    {
        unsafe {
            // Safety: owning a `Singleton` grants exclusive access
            // to the peripheral.
            raw::clear()
        }
    }
}
```
So, I don't know if you implemented the singleton like I've showcased: I'd be interested in hearing more about the disadvantages.

There is an extension of my pattern to allow for a zero-cost unchecked API, whereby you could offer an unsafe fn constructor of a Singleton that does not read the SINGLETON_EXISTS atomic. But since such a singleton is still expected to be instantiated once per execution of the program, it does look like a terribly premature and overkill optimization, I'd say: users would get UB rather than panics on misuse, just to skip a wait-free boolean check.
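For completeness, such an escape hatch could look like the following sketch (new_unchecked is a hypothetical name, not part of the snippet above):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

static SINGLETON_EXISTS: AtomicBool = AtomicBool::new(false);

pub struct Singleton(());

impl Drop for Singleton {
    fn drop(&mut self) {
        SINGLETON_EXISTS.store(false, Ordering::Release);
    }
}

impl Singleton {
    /// Checked constructor: one wait-free boolean check, no `unsafe`.
    pub fn try_new() -> Option<Singleton> {
        if SINGLETON_EXISTS.swap(true, Ordering::Acquire) {
            None
        } else {
            Some(Singleton(()))
        }
    }

    /// Zero-cost unchecked constructor: skips the atomic check entirely.
    ///
    /// # Safety
    /// The caller must guarantee that no other `Singleton` is alive for
    /// the whole lifetime of the returned one; otherwise aliased mutable
    /// access to the peripheral becomes possible (UB).
    pub unsafe fn new_unchecked() -> Singleton {
        Singleton(())
    }
}
```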

If you really want to, there is a way to further nudge users into the "pit of success", reducing the chances of a panic!: offer a constructor that can only be called in one location of the code, by virtue of a macro that reserves some linker symbol, causing a linker failure if the macro were to be invoked in two places.

Combine that with an invocation inside an fn main() which is not recursively called, and you have a non-unsafe, non-panicking API (assuming Singleton::try_new() is not exposed (e.g., #[doc(hidden)])).

```rust
#[macro_export]
macro_rules! new_singleton {() => ({
    // A second expansion anywhere in the binary duplicates this
    // linker symbol, causing a link-time error:
    #[export_name = "\n    \
        Error, called `new_singleton!` in multiple places of the code!\n\
    "]
    pub static __: () = ();
    $crate::Singleton::try_new()
        .expect("Reached the `new_singleton!` code path multiple times: re-entrant `main`?")
})}
```

Here is an example of the error produced by calling lib::new_singleton!() in two places of the code:

```
error: symbol `
    Error, called `new_singleton!` in multiple places of the code!
` is already defined
  --> src/
   |
50 |         pub static __: () = ();
   |         ^^^^^^^^^^^^^^^^^^^^^^^
...
59 |     let _ = lib::new_singleton!();
   |             --------------------- in this macro invocation
   |
   = note: this error originates in the macro `lib::new_singleton`
```

I really like the approaches you've outlined here. Not my project, but a good read!



Thanks for sharing the details of how you'd approach this. It's kinda neat :wink:

My own singleton implementation can be found in the ruspiro-singleton crate. It uses the with_ref and with_mut functions, which accept a closure that can safely access the peripheral inside it.
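For readers unfamiliar with the crate, the general shape of such a closure-based interface can be sketched like this (a simplified stand-in built on std's Mutex, not the actual ruspiro-singleton code):

```rust
use std::sync::Mutex;

pub struct Singleton<T>(Mutex<T>);

impl<T> Singleton<T> {
    pub const fn new(value: T) -> Self {
        Singleton(Mutex::new(value))
    }

    /// Run a closure with shared access to the inner value.
    pub fn with_ref<R>(&self, f: impl FnOnce(&T) -> R) -> R {
        f(&*self.0.lock().unwrap())
    }

    /// Run a closure with exclusive access to the inner value.
    pub fn with_mut<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
        f(&mut *self.0.lock().unwrap())
    }
}
```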

The drawbacks of this current approach are:

  • it's not that easy to implement an async version of it, especially if the closure could be an async function as well;
  • more importantly, if the peripheral shall be "wrapped" into another type taking "ownership" of the peripheral (e.g. implementing a CONSOLE using a UART peripheral), the console should take ownership of the UART, and the singleton should not allow using the UART outside of the console.

If I understand your approach correctly, the last item could be fixed by it, as you would hand out a guard to the peripheral, which means that as long as this guard is owned by the console in my example, no one else could ever get access to the peripheral. I'll try this approach and see if it is useful in my setup and fully addresses my concerns and use cases ;).
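That ownership-transfer idea can be sketched as follows (Console, RawUart and the method names are made up for illustration; the guard-holding wrapper is the pattern described above):

```rust
use std::sync::{Mutex, MutexGuard};

struct RawUart(());

impl RawUart {
    fn write_byte(&mut self, _b: u8) { /* hardware access elided */ }
}

static UART: Mutex<RawUart> = Mutex::new(RawUart(()));

/// While a `Console` is alive it owns the UART guard, so nobody else
/// can lock (i.e. access) the UART.
struct Console(MutexGuard<'static, RawUart>);

impl Console {
    fn try_new() -> Option<Console> {
        UART.try_lock().ok().map(Console)
    }

    fn print(&mut self, s: &str) {
        for b in s.bytes() {
            self.0.write_byte(b);
        }
    }
}
```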

Thx again for sharing...



:+1: Good luck! I hope the experiment goes well. Feel free to report back any API issues / rough edges you stumble on, so that other people can benefit from that knowledge as well :slightly_smiling_face:

Careful: that crate, as well as ruspiro-lock, seems to feature overly lenient Send/Sync bounds:

  • a Mutex-like lock ought to be Send only if the inner value is Send (shared-mutability wrappers, whether synchronized or not, allow "&mut forwarding" APIs): &mut Mutex<T> -> &mut T is possible, thus T : Send is required for Mutex<T> : Send to be sound.

  • Similarly, since &Mutex<T> -> &mut T is possible (by locking), T : Send is required for Mutex<T> : Sync to be sound.


Concretely, in those crates:

  • sync::RwLock<T> : Sync is missing a T : Send + Sync bound (&RwLock<T> -> &T);
  • Singleton<T> : Sync is missing a T : Send bound (it already had the T : Sync one);
  • sync::Mutex<T> : Sync is missing a T : Send bound.
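To make the sound bounds concrete, here is what they look like on a hand-rolled lock type (an illustrative stand-in, not the actual ruspiro-lock code; the locking logic itself is elided):

```rust
use std::cell::UnsafeCell;

// Minimal stand-in for a hand-rolled lock type (lock flag omitted):
pub struct MyMutex<T> {
    value: UnsafeCell<T>,
}

impl<T> MyMutex<T> {
    pub const fn new(value: T) -> Self {
        MyMutex { value: UnsafeCell::new(value) }
    }
}

// `&mut MyMutex<T> -> &mut T` is possible, so sending the lock to
// another thread sends the value: `T : Send` is needed for `Send`.
unsafe impl<T: Send> Send for MyMutex<T> {}

// `&MyMutex<T> -> &mut T` is possible (by locking), so sharing the lock
// across threads also moves the value between threads: `T : Send` is
// needed for `Sync` as well (this matches std's own `Mutex<T>` impls).
unsafe impl<T: Send> Sync for MyMutex<T> {}
```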

