What would the Rusty way to do this be?

I am struggling to find a nice structure for an internal library, probably because I am thinking about it the way I would write OOP code. The structure I end up with is terrible as a result. So my question is: how would someone who 'thinks in Rust' approach designing an interface for this situation?

As a side note, any recommendations of blogs/books/talks that discuss Rust-specific patterns would be greatly appreciated.

The problem:

I have a library that exists to abstract over a variety of different key-value databases, consumed by applications that don't know which database they are going to use until they are told at run time by a configuration file.

The basic structure would ideally be like so:

trait DStorable<T>: Backend1Storable<Item = T> + Backend2Storable<Item = T> {}

trait Database {
    async fn set_value<T: DStorable<T>>(&self, key: String, time: DateTime<Utc>, value: T) -> Result<()>;
    async fn get_values<T: DStorable<T>>(&self, key: String, begin: DateTime<Utc>, end: DateTime<Utc>) -> Result<Vec<T>>;
}

This is impossible: a trait with generic methods can't be used as a trait object, and because the database isn't known until run time, dynamic dispatch is genuinely required.
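For reference, a minimal sketch (not taken from the real library) of what the compiler rejects; the error message is paraphrased:

// A trait with a generic method is not dyn-compatible (object safe),
// so it cannot be put behind dynamic dispatch.
trait Storable {
    fn save<T>(&self, value: T);
}

// This fails to compile:
// fn open(db: Box<dyn Storable>) { /* ... */ }
// error[E0038]: the trait `Storable` cannot be made into an object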

The best solution I have is to create a wrapper around an enum, like so:

enum DBEnum {
    Backend1(Database1),
    Backend2(Database2),
    // .. and so on
}

pub struct DBWrapper {
    backend: DBEnum,
}

impl DBWrapper {
    pub async fn set_value<T: DStorable<T>>(&self, key: String, time: DateTime<Utc>, value: T) -> Result<()> {
        match &self.backend {
            DBEnum::Backend1(b) => b.set_value(key, time, value).await,
            DBEnum::Backend2(b) => b.set_value(key, time, value).await,
            // and so on...
        }
    }

    pub async fn get_values<T: DStorable<T>>(&self, key: String, begin: DateTime<Utc>, end: DateTime<Utc>) -> Result<Vec<T>> {
        match &self.backend {
            DBEnum::Backend1(b) => b.get_values(key, begin, end).await,
            DBEnum::Backend2(b) => b.get_values(key, begin, end).await,
            // and so on...
        }
    }
}

This obviously isn't ideal: every time a user wants to add a new backend, they have to modify every single method inside DBWrapper to add the new call, and it seems silly to have a struct that duplicates the contract being provided just to get around the generic-dispatch problem.

So what would the 'rusty' way to structure this be?


Presumably, the code that uses this knows statically which types it needs to store. Consider lifting the T to be a parameter of the trait and having each backend provide multiple implementations, one for each storage type it supports. This would let user code ask for dyn Database<DateTime<Utc>>, for example:

trait Database<T> {
    async fn set_value(&self, key: String, time: DateTime<Utc>, value: T) -> Result<()>;
    async fn get_values(&self, key: String, begin: DateTime<Utc>, end: DateTime<Utc>) -> Result<Vec<T>>;
}
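A sketch of how that could look end to end, assuming the async_trait crate to keep the async methods usable through a trait object; DbError, Backend1, and open_from_config are hypothetical stand-ins for the real types:

use async_trait::async_trait;
use chrono::{DateTime, Utc};

// Hypothetical error type and backend, standing in for the real ones.
struct DbError;
struct Backend1;

#[async_trait]
trait Database<T> {
    async fn set_value(&self, key: String, time: DateTime<Utc>, value: T) -> Result<(), DbError>;
    async fn get_values(&self, key: String, begin: DateTime<Utc>, end: DateTime<Utc>) -> Result<Vec<T>, DbError>;
}

#[async_trait]
impl Database<String> for Backend1 {
    async fn set_value(&self, _key: String, _time: DateTime<Utc>, _value: String) -> Result<(), DbError> {
        Ok(()) // talk to the actual store here
    }
    async fn get_values(&self, _key: String, _begin: DateTime<Utc>, _end: DateTime<Utc>) -> Result<Vec<String>, DbError> {
        Ok(vec![])
    }
}

// The caller still picks the backend at run time, but names the stored type statically.
fn open_from_config(name: &str) -> Box<dyn Database<String>> {
    match name {
        // "backend2" => Box::new(Backend2),
        _ => Box::new(Backend1),
    }
}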

The enum_dispatch crate is an option to help with those issues.
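Roughly, enum_dispatch generates the forwarding match and the From impls for you. A minimal sketch with a simplified, non-async trait and hypothetical Backend1/Backend2 types:

use enum_dispatch::enum_dispatch;

// Register the trait so enum_dispatch can generate forwarding impls for it.
#[enum_dispatch]
trait KvStore {
    fn set_value(&self, key: String, value: String);
}

struct Backend1;
struct Backend2;

impl KvStore for Backend1 {
    fn set_value(&self, _key: String, _value: String) { /* backend 1 logic */ }
}
impl KvStore for Backend2 {
    fn set_value(&self, _key: String, _value: String) { /* backend 2 logic */ }
}

// enum_dispatch implements KvStore for AnyBackend by matching on the variant,
// so adding a backend only means adding a variant here.
#[enum_dispatch(KvStore)]
enum AnyBackend {
    Backend1,
    Backend2,
}

Calls on an AnyBackend value then dispatch to whichever backend it holds, without a hand-written match, at the cost of the set of backends being closed at compile time.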


This seems like a circular definition: DStorable depends on different databases, different databases depend on DStorable.

Is there a list of types that each database needs to be able to load and store?

How about something like this:

trait DatabaseGetSet<T> {
    fn set_value(&self, key: String, value: T) -> Result<(), ()>;
    fn get_values(&self, key: String) -> Result<Vec<T>, ()>;
}

trait Database: DatabaseGetSet<i64> + DatabaseGetSet<String> {}
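For illustration, a sketch (with a hypothetical Backend1 and demo function) of a backend satisfying that contract and being called through a trait object; the fully qualified calls make it explicit which stored type each call means:

struct Backend1;

impl DatabaseGetSet<i64> for Backend1 {
    fn set_value(&self, _key: String, _value: i64) -> Result<(), ()> { Ok(()) }
    fn get_values(&self, _key: String) -> Result<Vec<i64>, ()> { Ok(vec![]) }
}

impl DatabaseGetSet<String> for Backend1 {
    fn set_value(&self, _key: String, _value: String) -> Result<(), ()> { Ok(()) }
    fn get_values(&self, _key: String) -> Result<Vec<String>, ()> { Ok(vec![]) }
}

impl Database for Backend1 {}

fn demo(db: &dyn Database) -> Result<(), ()> {
    // Each call names the stored type it wants, and the matching supertrait impl is used.
    DatabaseGetSet::<i64>::set_value(db, "count".to_string(), 42)?;
    DatabaseGetSet::<String>::set_value(db, "name".to_string(), "hello".to_string())
}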

This is very true; it makes a lot more sense like this, and it prevents people from having to extend the DStorable trait with new bounds when they add a new backend, so I shall definitely use this. Thank you!

That crate looks excellent, thank you; it preserves the benefits of a trait object without the same limitations. I shall use this.
