Advice on architecture and testing in Rust

Hello.

This is a question related to architecture and Rust.

I have been learning Rust and using it quite a lot to build microservices, and I am facing some challenges in testing that I'd like to ask for recommendations on.

The architecture I am using is as follows:

main function - starts the servers (one or more kinds of servers) and sets routes with handlers.

Handlers layer: responsible for defining the handlers configured for the servers. Each handler:

  1. Validates data with respect to the required format.
  2. Converts request data from the request-specific formats into the parameters required by the core function.
  3. Calls the core function.
  4. Takes the core function's response and converts it back into the expected response format.

Core layer:

  1. Exposes and implements a set of functions to the handlers layer. These functions are the core module's public API and represent the business logic.
  2. Creates submodules for internal use, such as repositories and others.
  3. Creates a public submodule where the input and output parameter types for the public functions are made available.

This creates separation between layers and allows me to have, for example, handlers for GraphQL, gRPC, and AMQP protocols all calling the same core functions, while the core layer is not even aware that these handlers exist.
It also separates the internal layers of core functionality from the outside world, so that handlers have no database access and changes in the requests/responses do not affect the core (the core could even be extracted into a new crate, but because the core functionality is quite specific to each microservice, it is being maintained as an internal module with pub fns exposed to the outside).

See, I am not saying that this is the best approach, I am only trying to explain my way of thinking. Now, here is what I am trying to figure out:

For unit testing the core functionality, I need to somehow mock the things the core functions use, i.e. the internal modules such as repositories and the like. I have no idea how to perform such mocking, because the public API does not receive these dependencies as parameters. The premise is that the handlers layer shouldn't know anything about the internal implementation of the core functions. If I coded the public API to receive, say, the repository it should use, then the handlers layer would have to inject it, and it is not the handlers layer's responsibility to know which repository the core functions should use as part of their work.

In JavaScript, I would patch the module loader to load a mocked repository module, so that when the core function requires the repository module it would get a mocked module instead, all without compromising the way the handler calls the core function.

In Java, the dependency injector would probably create an object with the dependencies in place, but I have been trying not to rely that much on object-oriented paradigms when using Rust.

The question is: how do I solve this in Rust?

A typical core module is as follows:

core/accounts/mod.rs

mod repository;
pub mod model;

use repository::*;
use model::*;

pub fn create_account(agency: i32, account_number: i32, context: Context) -> Result<String> {
  let conn = context.get_connection();
  let new_account = repository::create_account(agency, account_number, &conn)?;
  Ok(new_account.public_id)
}

core/accounts/repository.rs *** database access

core/accounts/model.rs *** module where the Account struct, returned by the repository::create_account function, is defined.

The handler function would then:

handler.rs

use crate::core::accounts::create_account;

pub fn handler(req: Request, shared_state: SharedState) -> Response {
  let (agency, account_number) = req.get_params();
  let context = Context {
    connection: shared_state.get_connection(),
  };
  let public_id = create_account(agency, account_number, context);
  Response::ok(public_id)
}

I realize that even while trying not to "pollute" the handlers layer with something it shouldn't take care of, I still have the context creation. However, the context creation is generic, in the sense that it does require some parameters, but those are the same for every core function. If these were repositories, the handlers would need to know which repository to inject for each function call. As it is, all they know is that they have to provide a context containing a database connection pool.

I think I was able to show how things are currently set up and what my problem is. Thank you for any advice on this topic.

Why couldn't you provide two core modules - one for test and one for production?

#[cfg(test)]/#[cfg(not(test))] can be applied to anything: types, use directives, etc.
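
For illustration, a minimal sketch of that compile-time swap. In a real project the two `repository` modules would live in separate files selected with `#[cfg]`-gated `mod` declarations (or `#[path = "..."]`); they are inlined here so the example is self-contained, and the function bodies are placeholders:

```rust
// Real module: compiled in everything except `cargo test` builds.
#[cfg(not(test))]
mod repository {
    pub fn create_account(agency: i32, number: i32) -> String {
        // real database access would go here
        format!("real-{agency}-{number}")
    }
}

// Mock module with the same signatures: compiled only under `cargo test`.
#[cfg(test)]
mod repository {
    pub fn create_account(agency: i32, number: i32) -> String {
        format!("mock-{agency}-{number}")
    }
}

// The core function is written once; the compiler picks the module.
pub fn create_account(agency: i32, number: i32) -> String {
    repository::create_account(agency, number)
}

fn main() {
    println!("{}", create_account(1, 42));
}
```

Since both modules expose identical signatures, the core code compiles unchanged against either one.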

Hello,

With your suggestion, the core module would need to import two repository modules, because what I am testing here is the core module:

  1. In a non-test environment, it would use the repository module in the file core/accounts/repository.rs.
  2. In a test environment, it would need to import a module with the same functions, but defined elsewhere, so that compilation succeeds.

Now, we have another question: is it possible to mock a whole module?

I could hand-code the functions as mocks, but it would be nice to have this resolved for me somehow, so I can check whether a given function on the mock repository module has been called, which parameters were passed, and control which response is returned.
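
Hand-coding the mock you describe fits in a few lines of plain Rust, if the repository is accessed through a trait. The sketch below uses hypothetical names (`Repository`, `MockRepository`); crates like mockall can generate this kind of mock from a trait automatically, but the recording-and-canned-response idea itself needs no framework:

```rust
use std::cell::RefCell;

// Hypothetical repository trait the core function would depend on.
trait Repository {
    fn create_account(&self, agency: i32, number: i32) -> String;
}

// Hand-rolled mock: records every call and returns a canned response.
struct MockRepository {
    calls: RefCell<Vec<(i32, i32)>>,
    response: String,
}

impl Repository for MockRepository {
    fn create_account(&self, agency: i32, number: i32) -> String {
        self.calls.borrow_mut().push((agency, number));
        self.response.clone()
    }
}

fn main() {
    let mock = MockRepository {
        calls: RefCell::new(Vec::new()),
        response: "public-id-123".to_string(),
    };
    // Exercise the mock as the core function would.
    let id = mock.create_account(1, 42);
    // Verify the call happened, with which parameters, and what it returned.
    assert_eq!(id, "public-id-123");
    assert_eq!(mock.calls.borrow().clone(), vec![(1, 42)]);
    println!("mock verified");
}
```

The catch, as you note, is that this only works once the core function receives the repository (or something that hands it out) as a parameter, rather than hard-wiring the module.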

As far as I can tell, there is no way to mock a whole module.

No. You skipped one very important question: why do you want to mock the whole module?

I can understand that you may want to define certain small things differently for tests and production.

But that attempt to mock literally everything kinda reminds me of "Enterprise Java" at its worst: we need to unit-test everything (why?), for that we need to mock everything (why?), and for that we need to introduce an inefficient DI system in our project (:confounded:).

This all sounds so incredibly logical until you try to answer the very first question: why can't you just use the core module (maybe with some minor modifications)? Why do you want to fully mock it? What's the point? What are you trying to achieve?

Hello,

I will try to explain why.

The core module is where the business logic is handled, so that is what I want to unit test.

Maybe unit testing isn't the answer at all, but let's assume for a moment that it is.

I therefore want to test the create_account function in the core/accounts module.

If you look at my example, you will see that the core/accounts/create_account function does three things:

  1. Extracts a connection from the context object.
  2. Calls the create_account function defined in its private repository module, which accesses the database and creates an account.
  3. Extracts the public_id from the account that repository::create_account returned.

If I am unit testing the crate::core::accounts::create_account function, I will need to mock the repository::create_account function because:

  1. I don't want to access the database in a unit test.
  2. I want to check that the repository::create_account function has been called, to make sure the behaviour of the core::accounts::create_account function I am testing is correct.
  3. I want to control the response of the mocked repository::create_account so that the core::accounts::create_account function I am testing can continue its processing.

In real life, though, the processing in core::accounts::create_account isn't only calling the repository; a lot of other things are involved. The example here is reduced to better illustrate my specific issue. I don't want to mock the whole repository module at once, but I would need to mock functions in it to achieve my goals.

Notice, though, that this is considering the needs of unit tests. Whether unit tests are worthwhile or not is a totally different issue, one with its own set of questions I am also encountering, but that will be asked in another forum topic.

Then I would suggest starting with that one, because in my experience languages with proper static typing (Haskell, Rust, even C++ to some degree) don't need as much unit testing as dynamically typed languages or "Enterprise Java" (where the answer to "why do we need these unit tests" is often "we need them because it's written in our requirements document"), and that's why these languages don't include extensive frameworks for doing this.

Case in point:

Why can't you just use an in-memory database for that and verify the important functionality you actually care about (that crate::core::accounts::create_account properly creates an account) without testing minor implementation details (e.g. why do you care whether it uses repository::create_account or repository::create_account2)?
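
The difference between the two styles can be sketched in a few lines. Here `InMemoryDb` is a hypothetical stand-in for something like SQLite's in-memory mode: the test asserts on the observable state after the call, not on which internal function was invoked:

```rust
use std::collections::HashMap;

// Hypothetical in-memory stand-in for the real database.
struct InMemoryDb {
    accounts: HashMap<(i32, i32), String>,
}

impl InMemoryDb {
    fn new() -> Self {
        Self { accounts: HashMap::new() }
    }

    fn insert_account(&mut self, agency: i32, number: i32) -> String {
        let public_id = format!("acc-{agency}-{number}");
        self.accounts.insert((agency, number), public_id.clone());
        public_id
    }
}

// Simplified core function operating against the in-memory store.
fn create_account(db: &mut InMemoryDb, agency: i32, number: i32) -> String {
    db.insert_account(agency, number)
}

fn main() {
    let mut db = InMemoryDb::new();
    let id = create_account(&mut db, 1, 42);
    // State-based check: the account exists and an id was returned.
    // No assertion about *how* the core function got there.
    assert!(db.accounts.contains_key(&(1, 42)));
    println!("{id}");
}
```

With this style there is nothing to mock: the test exercises the real logic end to end against a cheap backing store and checks outcomes instead of call sequences.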

The way I would approach it:

Put a transparent SharedContext into shared_state, where:

type SharedContext = Arc<dyn Context + Send + Sync>;

trait Context {
  fn get_persistence(&self) -> SharedPersistence;
  fn get_metrics_handle(&self) -> SharedMetricsHandle;
  ...
}

trait Context here is basically an interface, and SharedContext is a "pointer to a shared, thread-safe, sendable instance of Context". A SharedContext is kind of like a normal interface instance in Java.

You can have multiple context kinds if you have certain groups of handlers that need (/don't need) different things.

The SharedContext's job is to give out instances of the other things that handler/core code might need. Note that those are also dynamically polymorphic. (I really need to write a Rust proc-macro library to make this pattern easier, as I need it so often; auto_derive was kind of almost there.)

In your tests you can just use implementations of Context that give out mocks/fakes that you need.
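
A self-contained sketch of this pattern, with hypothetical names (`Persistence`, `FakePersistence`, `TestContext`) filled in around the `Context` trait from above:

```rust
use std::sync::Arc;

// Hypothetical persistence trait the repository layer is hidden behind.
trait Persistence {
    fn create_account(&self, agency: i32, number: i32) -> String;
}

type SharedPersistence = Arc<dyn Persistence + Send + Sync>;

// The Context hands out shared implementations of whatever core code needs.
trait Context {
    fn get_persistence(&self) -> SharedPersistence;
}

type SharedContext = Arc<dyn Context + Send + Sync>;

// The core function depends only on the Context trait, never on a
// concrete repository module.
fn create_account(ctx: &SharedContext, agency: i32, number: i32) -> String {
    ctx.get_persistence().create_account(agency, number)
}

// Test doubles: a fake persistence layer and a context that gives it out.
struct FakePersistence;

impl Persistence for FakePersistence {
    fn create_account(&self, agency: i32, number: i32) -> String {
        format!("fake-{agency}-{number}")
    }
}

struct TestContext;

impl Context for TestContext {
    fn get_persistence(&self) -> SharedPersistence {
        Arc::new(FakePersistence)
    }
}

fn main() {
    let ctx: SharedContext = Arc::new(TestContext);
    // The core function runs unchanged against the fake.
    assert_eq!(create_account(&ctx, 1, 42), "fake-1-42");
    println!("ok");
}
```

Production code would provide a different `Context` implementation whose `get_persistence` returns the real database-backed type; the core function never needs to know which one it got.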

If you need transactions spanning across repository boundaries you might want to look at sniper/persistence.rs at master · dpc/sniper · GitHub and the whole project as a whole.

I generally don't like automated DI (like Spring Boot's @Autowired), but you could put that SharedContext in a lazy_static, make it global, and avoid having to pass it around between layers.
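
As a sketch of that global variant, the standard library's `std::sync::OnceLock` can serve the same role as the lazy_static crate without an extra dependency. The `Context` fields here are illustrative only:

```rust
use std::sync::OnceLock;

// Illustrative context; in the real setup this would be the
// SharedContext trait object discussed above.
struct Context {
    db_url: String,
}

static CONTEXT: OnceLock<Context> = OnceLock::new();

// Lazily initialize on first access; every later call gets the same instance.
fn context() -> &'static Context {
    CONTEXT.get_or_init(|| Context {
        db_url: "postgres://localhost".to_string(),
    })
}

fn main() {
    // Any layer can reach the context without it being threaded through
    // function parameters.
    println!("{}", context().db_url);
}
```

The usual caveat applies: a global like this makes per-test substitution harder than passing the context in, so it trades testability for convenience.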

