Exporting a function from a crate to be used only in tests

This is not possible as far as I know, but you can mark internal items with #[doc(hidden)] so they do not appear in the documentation.
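
A minimal sketch of that attribute in use (the item name here is made up for illustration):

// Still public, so other modules and external tests can call it,
// but it is omitted from the generated rustdoc output.
#[doc(hidden)]
pub fn internal_test_helper() -> u32 {
    42
}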


@pcpthm: Thanks for the reply!
Actually I want the item to show up in the documentation.

The real-world use case behind what I'm asking here is safely exposing a random-generation testing facility (in the ring crate). This is my current attempt:
https://github.com/briansmith/ring/pull/912/commits/aa5ebf1b430a5dba86bc674db89887eb7efdb868

To be able to write deterministic tests, the user should have access to a deterministic testing random generator. However, I want to make sure the user never accidentally uses this deterministic random generator in non-testing code.

Another idea I had was marking the function f (as in my simplified example) as unsafe, but I am not sure whether this is a reasonable solution.

I don't entirely understand the situation, but here is my thought on the specific case of ring:

Because SecureRandom says it is "secure", it is conceptually wrong to implement this trait for such an insecure RNG, even if only for testing purposes.

What about this instead: rather than plugging in an alternative SecureRandom implementation externally, each algorithm (e.g. sign) could expose a corresponding version that uses a deterministic nonce but is otherwise the same algorithm, e.g. insecure_deterministic_sign, for testing purposes.
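
A rough sketch of the API shape that suggestion implies (all names and types here are stand-ins for illustration, not ring's actual API):

// Stand-in trait and types, for illustration only.
pub trait SecureRandom {
    fn fill(&self, dest: &mut [u8]);
}

pub struct Signature(pub Vec<u8>);

pub struct KeyPair;

impl KeyPair {
    // Normal path: the caller must supply a SecureRandom.
    pub fn sign(&self, rng: &dyn SecureRandom, _msg: &[u8]) -> Signature {
        let mut nonce = [0u8; 32];
        rng.fill(&mut nonce);
        // ... the actual signing work would happen here ...
        Signature(nonce.to_vec())
    }

    // Testing-only variant: same algorithm, but the nonce comes from a
    // caller-supplied seed. The loud name makes accidental production use
    // easy to spot in review.
    pub fn insecure_deterministic_sign(&self, seed: u8, _msg: &[u8]) -> Signature {
        let nonce = [seed; 32];
        // ... the actual signing work would happen here ...
        Signature(nonce.to_vec())
    }
}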

  • I am using some algorithms provided by ring to do cryptography (key exchange, for example).
  • The algorithms provided are hardened to only take a SecureRandom as an argument, which is reasonable for safety (to make sure a user doesn't do something they are not supposed to do).
  • SecureRandom is private to ring, which means I cannot implement it for a dummy deterministic random generator in my codebase.

As a result, it is impossible for me to write deterministic tests for the parts of my code that do key exchange, for example.

Internally, ring uses FixedSliceSequenceRandom to write deterministic tests for anything random related. However, using this structure requires ahead-of-time knowledge of the sizes of the requested random chunks. These chunk sizes will most likely change when the implementation of some algorithm changes, so I cannot use it in my own code.
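
To illustrate why that is brittle, here is a stand-in for a fixed-slice-sequence style test RNG (illustrative only, not ring's actual type): each fill request must match a pre-registered chunk exactly, so any change in how an algorithm draws randomness breaks the test.

use std::cell::Cell;

// Stand-in for a FixedSliceSequenceRandom-style test RNG.
struct FixedChunksRandom<'a> {
    chunks: &'a [&'a [u8]],
    next: Cell<usize>,
}

impl<'a> FixedChunksRandom<'a> {
    fn fill(&self, dest: &mut [u8]) {
        let chunk = self.chunks[self.next.get()];
        // Panics if the algorithm asks for a chunk of a different size than
        // the one prepared ahead of time; this is the brittleness described above.
        assert_eq!(dest.len(), chunk.len());
        dest.copy_from_slice(chunk);
        self.next.set(self.next.get() + 1);
    }
}

fn main() {
    let chunk: &[u8] = &[7u8; 32];
    let rng = FixedChunksRandom { chunks: &[chunk], next: Cell::new(0) };
    let mut nonce = [0u8; 32];
    rng.fill(&mut nonce); // only works because we knew the size would be 32
}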

You can read more about this issue here: Adding deterministic SecureRandom for testing · Issue #661 · briansmith/ring · GitHub

I understand this proposal. I have no control over the implementation of ring, but even if I did, I don't think that I would have wanted to export a duplicate function for every algorithm. Same goes for my external code base: I prefer to have just one implementation, and have means of testing it.

Given that you propose the insecure_ prefix, what is your opinion about having an unsafe trait TestRandom?
Thinking about it, is there a compile-time way to enforce that unsafe doesn't show up in my code base, except inside cfg(test) code?

Thanks for the description of your situation.

I understand this proposal. I have no control over the implementation of ring,

Yes, I'm suggesting submitting a PR to ring to add the testing functionality to its API.

but even if I did, I don't think that I would have wanted to export a duplicate function for every algorithm.

It can be implemented without code duplication by internally using a wrapper over either SecureRandom or a deterministic generator.
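
For example (a sketch with stand-in types, not ring's internals): one internal enum dispatches to either source of randomness, so the algorithm code itself exists only once.

// Stand-in trait, for illustration only.
pub trait SecureRandom {
    fn fill(&self, dest: &mut [u8]);
}

// Internal wrapper: the signing code takes this, and only the
// insecure_deterministic_* entry points ever construct the second variant.
enum RngSource<'a> {
    Secure(&'a dyn SecureRandom),
    Deterministic(u8),
}

impl<'a> RngSource<'a> {
    fn fill(&self, dest: &mut [u8]) {
        match self {
            RngSource::Secure(rng) => rng.fill(dest),
            RngSource::Deterministic(seed) => dest.iter_mut().for_each(|b| *b = *seed),
        }
    }
}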

Same goes for my external code base: I prefer to have just one implementation, and have means of testing it.

You can keep the code change local:

#[cfg(not(test))]
fn should_use_deterministic_algorithm() -> bool { false }

#[cfg(test)]
fn should_use_deterministic_algorithm() -> bool {
    true /* or configure by TLS etc. */
}

/* ... */

    let result = if should_use_deterministic_algorithm() {
        rsa.insecure_deterministic_sign(...)
    } else {
        rsa.sign(...)
    };

Given that you propose the insecure_ prefix, what is your opinion about having an unsafe trait TestRandom?

Because it is not unsafe in terms of Rust's safety guarantee (i.e. no undefined behavior), I suggested using the insecure_ prefix for the user's awareness rather than the unsafe keyword.

Thinking about it, is there a compile-time way to enforce that unsafe doesn't show up in my code base, except inside cfg(test) code?

I'm afraid this is a bit of an XY problem, but you can add

#![cfg_attr(not(test), forbid(unsafe_code))]

to forbid unsafe code outside of #[cfg(test)] blocks (the forbid applies to normal builds, where the test-only code is compiled out).
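
A small sketch of how that plays out in practice (illustrative only):

// At the crate root (lib.rs):
#![cfg_attr(not(test), forbid(unsafe_code))]

// This would fail to compile in a normal (non-test) build:
// pub fn read_raw(p: *const u8) -> u8 { unsafe { *p } }

#[cfg(test)]
mod tests {
    #[test]
    fn unsafe_allowed_in_tests() {
        // Under `cargo test`, `not(test)` is false, so the forbid is not applied.
        let x = 5u8;
        let y = unsafe { *(&x as *const u8) };
        assert_eq!(y, 5);
    }
}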

This is important if you expect others to maintain or reuse your code. unsafe in Rust has a specific meaning – that the programmer has taken over some of the UB-avoidance proof responsibility from the compiler. It does not mean anything else, such as your intended advice to potential users.

I think that we have moved a bit from the original intent of this question, but at the same time your comment is very interesting.

I think that the ring crate specifically uses many of Rust's type-safety features to gain cryptographic safety. For example, ephemeral keys can never be cloned, and any algorithm that uses them takes ownership of them.

You could claim that this shouldn't be done, because ephemeral keys could actually be cloned if we wanted them to be (they are just an array of bytes), but ring won't let you do that. You could also claim that the algorithm using the ephemeral keys should take the ephemeral key by reference, because it doesn't need to take ownership of the memory. But it does take ownership, to make sure nobody else uses the key again: this is what makes it an ephemeral key.

So here, taking ownership and not allowing cloning are not features designed for memory safety in the old-school Rust sense. Instead, those two together provide some kind of cryptographic safety for the user.

About adding unsafe for the random trait: The ring crate does not allow external users to implement SecureRandom, because that might be unsafe. This is why I think adding unsafe for the TestRandom trait is an interesting idea. Whenever the user implements TestRandom for some structure, exactly what you said happens: The programmer takes over some responsibility from the compiler.

If the user never uses unsafe, they can only use the random generator provided by ring. However, using unsafe to implement TestRandom lets the user take responsibility for the contract made with the compiler.

Stated otherwise: ring tries to promise me that if I use only safe code, I am unlikely to make cryptographic mistakes. However, if I choose to implement TestRandom myself (thus having to use unsafe), it means that I take responsibility for implementing a secure random generator myself, and if I mess it up my cryptography will be unsafe.

To summarize, my claim is that maybe unsafe can have a meaning larger than memory UB.
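
To make this concrete, here is roughly what I am picturing (a sketch with invented names, not a concrete proposal for ring's API): because the trait is unsafe, every implementation requires an explicit unsafe impl, which is the moment the user visibly takes on that responsibility.

/// Sketch: a trait whose (non-memory-safety) contract is "testing only".
/// Implementing it is an explicit acknowledgement that the resulting
/// randomness carries no cryptographic guarantees.
pub unsafe trait TestRandom {
    fn fill(&self, dest: &mut [u8]);
}

struct CountingRandom(std::cell::Cell<u8>);

// The `unsafe impl` is where the user opts in and takes responsibility.
unsafe impl TestRandom for CountingRandom {
    fn fill(&self, dest: &mut [u8]) {
        for b in dest {
            *b = self.0.get();
            self.0.set(self.0.get().wrapping_add(1));
        }
    }
}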

Thanks for sending this example!
I think it is another possible solution, but I prefer not to write my code this way. It feels like it is too easy for me to make a mistake, and the compiler will not be able to protect me here.

Is it an option to not export these private, testing-only functions at all, and write your tests within their containing modules? I'm not particularly fond of this approach (as it scatters domain knowledge and invariants to be upheld across several files), but it certainly does work.

I think that what you propose is the current state of things, and it doesn't work very well for me.

I develop Offst, and Ring is one of my dependencies. Therefore I only have control over the code in Offst.
The cryptography-related code in Offst relies on primitives from the ring crate. Many of the cryptographic parts rely on the generation of random entities (like exchange keys, random nonces, etc.). To be able to test the code in Offst deterministically, I need to be able to produce deterministic randomness. (Otherwise, every test will come up with different exchange keys and different nonces.)

The interface provided by ring indeed does not allow me to have deterministic randomness.
On the one hand, this makes sure I don't accidentally use bad randomness in production, which is good for security. On the other hand, I cannot test my code properly, which probably means my code will have bugs. This, in turn, is bad for security.

When you use unsafe you instruct the compiler not to insist on proving certain classes of non-UBness. Is that really what you intend to do? The security advisories about Rust crates are almost all due to someone writing unsafe code that really was unsound, in that it led to UB under some circumstances even though the programmer believed it did not. Therein lies the path to malware exploitation of your code.

Well... personally, I'm generally pretty adamant against misuse of the unsafe keyword. And I'm not sure that I buy the argument about SecureRandom (I've never heard such an argument before). So one might expect me to take a negative position here.

...but I confess there is a loophole.

Technically, you can always just make up a reason for a trait to be unsafe. All you need to do is document that implementations must have such and such qualities "or else behavior is undefined"—the justification simply being that unsafe code would be allowed to trust these impls when it matters for memory safety.


For instance, imagine you're working on some sparse vector format, where indices of values are stored in a vec that is always sorted without duplicates. You might write a trait like this:

pub trait SparseIterator
    : Iterator<Item=(usize, <Self as SparseIterator>::Value)>
{
    type Value;
}

/// Marker trait for strictly sorted sparse iterators.
///
/// This trait indicates an `Iterator` satisfying the property that,
/// if two successive calls to `next()` return `Some((i, _))` and
/// `Some((j, _))`, then `i < j`. (note: for non-fused Iterators, no
/// guarantee is provided about a `Some(_)` after a `None`)
///
/// **This trait is unsafe.**  Implementing it for types that do
/// not satisfy this property is undefined behavior.  In turn, unsafe
/// code is permitted to trust implementations of this trait for the
/// purposes of memory safety.
pub unsafe trait StrictlySortedSparseIterator: SparseIterator
{ }

Perhaps the crate contains no unsafe code that depends on this. Perhaps it never will. That's okay! The point is, it plausibly could contain unsafe optimizations in the future (e.g. match branches that hit std::hint::unreachable_unchecked(), or code that would do double destruction or uninitialized reads if given duplicate indices). Such optimizations could also exist in downstream code. And using the trait even in purely safe code is necessary if we want to enable these optimizations to be introduced backwards compatibly. Therefore, in the strictest sense of unsafety in Rust, there's nothing wrong with this.
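
For a taste of what such downstream unsafe code could look like (building on the trait declarations above; deliberately tiny and illustrative): the "indices went backwards" branch is marked unreachable, so a lying implementation would trigger undefined behavior.

// Relies on the StrictlySortedSparseIterator contract for soundness.
fn last_index<I>(iter: I) -> Option<usize>
where
    I: StrictlySortedSparseIterator,
{
    let mut last: Option<usize> = None;
    for (i, _) in iter {
        match last {
            Some(prev) if i <= prev => {
                // Safety: forbidden by the StrictlySortedSparseIterator
                // contract, so the optimizer may assume it never happens.
                unsafe { std::hint::unreachable_unchecked() }
            }
            _ => last = Some(i),
        }
    }
    last
}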


So, does that solution apply here? To be honest, I'm not sure.

The trick here is that, in the StrictlySortedSparseIterator case, there is plausible reason to believe that there may exist unsafe code which relies on these implementations for memory safety. Without such plausibility, it raises red flags about abuse of the unsafe keyword. It's hard for me to picture what it means for a type to implement TestRandom, and what promises this trait could potentially make to unsafe code.


@realcr: as a less strict alternative to unsafe you could use diverging function syntax. It will compile, but the compiler will warn the user that the code following the call is unreachable, e.g.

mod crate_a {
    #[cfg(test)]
    pub fn f() -> i32 {
        1
    }
    #[cfg(not(test))]
    pub fn f() -> ! {
        unimplemented!()
    }
}

use crate::crate_a::f;

fn main() {
    let i: i32 = f();
    println!("{}", i+1); // warning: unreachable statement
}

#[test]
fn test1() {
    let testnumber = f();
    assert_eq!(testnumber, 1);
}

By the way, does making a trait or function unsafe count as semver-breaking? In other words, if one anticipates future unsafeness of something, can they make the thing unsafe right now, so they won't have to bump the major version later because of it?


Thanks for the idea! My issue is that this will only work on the boundary between modules. (Note that in your example code you named the module crate_a, but it is actually a module, not a separate crate.)

It seems like currently there is no way to do what you are proposing on the interface shared between two separate crates.

@realcr: never (!) is a Rust type, and it is shared between crates as part of the function's type signature. I've just validated this: rustc warns that the code is unreachable if I import a diverging function from an external crate.

Yep

Regarding the OP, I think the best approach is to define a feature named unsafe-internals (the wording of feature names is more lenient than the wording of functions).

You then export these functions only under that flag (#[cfg(feature = "unsafe-internals")]). You can still activate that feature for https://docs.rs to get these functions added to the documentation, with a docstring stating that they are only supposed to be used in testing and nothing else, and that the library makes no cryptographic guarantees to those who use these internals.

And then, when testing that crate, you require that feature.
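
A sketch of what that can look like (the feature and item names are placeholders, not anything ring actually exposes):

// In Cargo.toml (shown here as a comment):
//
//   [features]
//   unsafe-internals = []

/// Testing-only deterministic RNG. No cryptographic guarantees whatsoever.
/// Only compiled in when the `unsafe-internals` feature is enabled.
#[cfg(feature = "unsafe-internals")]
pub struct DummyRandom {
    pub seed: u8,
}

#[cfg(feature = "unsafe-internals")]
impl DummyRandom {
    pub fn fill(&self, dest: &mut [u8]) {
        dest.iter_mut().for_each(|b| *b = self.seed);
    }
}

One common downstream pattern is to list the dependency a second time under [dev-dependencies] with the feature enabled, so the testing-only items are only pulled in when building that crate's tests.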


Thanks for everyone who replied. I really appreciate it. I learned a lot from this thread.

I think that the last idea, given by @Yandros, #[cfg(feature = "unsafe-internals")], is a pretty good one. Thanks for that!

@dunnock: I didn't manage to make the diverging function idea work. It seems to me as if only the non-test function becomes exported, and the other one never gets exported from the crate. Maybe I am missing something in how to implement this.

I also understand now why using unsafe for the TestRandom testing trait is somewhat strange. However, I think that maybe having unsafe over the sealed SecureRandom trait could be a good idea, because it does convey that the implementor of SecureRandom knows what he is doing. This is probably not much different from implementing Unpin for a structure.

In any case, I think that having unsafe on something in Rust always makes people a bit nervous, so I believe that this is not something Ring maintainers will be willing to merge.


It is different: implementing SecureRandom incorrectly may lead to weak key material etc., but not to memory unsafety, which is definitely at stake with Unpin and is the intended use case for unsafe. The difference between safe and unsafe Rust should not be abused as a hint for code that is critical for domain-specific correctness, as it will make it harder to audit code bases and their dependency trees for memory safety issues.

