Module-wide type parameters, thoughts

"Regarding the full safety “hinderance”,

Part 2: one thing I’m repeatedly trying to explain is that, conversely, there are times (and areas of a program) where flawed behaviour is better than no behaviour, because you’re trying to learn something else.

The most direct example is debug code; I don’t get why this isn’t clearer. Unless your brain has an embedded digital computer, you’re eventually going to write some sort of tests alongside your regular program; as such, having a “productive” language embedded right in the same source files as your “performant/safe” language is useful.

The missed opportunity here is whole-program inference: if you’re writing a test for a function right next to it, one or the other is going to supply the type information needed.

If you want syntactic sugar for defining maps it’s not overly difficult to implement yourself with a macro.

I know you can do that for declaring maps.
I mean maps in function signatures; Apple does this in Swift with [K: V].
Function signatures are important because that’s what you search.
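
For illustration, a minimal sketch of such a declaration macro (the `map!` name and the `k => v` syntax are my own invention here, not part of std):

```rust
use std::collections::HashMap;

// Hypothetical `map!` macro: expands a `k => v` list into inserts.
macro_rules! map {
    ( $( $k:expr => $v:expr ),* $(,)? ) => {{
        let mut m = HashMap::new();
        $( m.insert($k, $v); )*
        m
    }};
}

fn main() {
    let scores: HashMap<&str, i32> = map!{ "a" => 1, "b" => 2 };
    println!("{}", scores["a"]);
}
```

As the reply notes, though, a macro like this only helps at declaration sites; it can’t change how the type is spelled in function signatures.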

(Repeating myself:) the original Rust with ~T, ~[T] etc. showed a unique blend of readability and performance… it was just right. You still had a concrete symbol telling you the important information (“this is a pointer to an allocation…”), but it was light enough to melt away and let your mind focus on what you’re actually doing. I’m amazed that people complained about it (“too many pointer types”…); they were very logical (“owned allocated version of…”). Box<T> is more like two extra mental steps, because it’s a word and a nesting level (and yes, I hate writing unique_ptr<T> in C++… there’s always a serious temptation to revert to raw pointers. If the thing you’re supposed to use is syntactically light, that’s very helpful).

There are little tweaks that could recover this. If we had the option of more inference, the signatures that you do write wouldn’t matter so much.

One idea that would turn traits from a hindrance into a virtue (in my mind): when impl-ing, you could elide the types and just copy them from the trait. Then I wouldn’t mind them. (I might write an RFC for this.) Haskell of course lets you work like this; that’s why I didn’t find its typeclasses as oppressive.
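
For reference, this is the repetition that elision would remove — today an impl must restate each signature the trait already spelled out (`Shape`/`Circle` are invented names for illustration):

```rust
trait Shape {
    fn area(&self) -> f64;
    fn scaled(&self, factor: f64) -> Self;
}

struct Circle { r: f64 }

// Every header below is copied verbatim from the trait; under the
// proposed elision, only the bodies would need writing.
impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
    fn scaled(&self, factor: f64) -> Self { Circle { r: self.r * factor } }
}

fn main() {
    let c = Circle { r: 1.0 };
    println!("area = {}", c.area());
}
```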

The frustrating thing is seeing this amazing underlying language engine (coming from C++, I do find the level of inference very impressive… and I do start to miss it back there)… but then seeing some of these decisions taken toward verbosity.

Well if you've got numbers like that then you've already profiled and know that bounds checks are hurting your performance. I was saying to profile first because so many people jump to conclusions on why they think something could be slow, without any real scientific evidence.

That said, if bounds checks are hurting you what's wrong with the unsafe get_unchecked() method? It's not the default because it voids memory safety guarantees, but doesn't it do exactly what you are looking for?
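
A minimal sketch of the two forms side by side (the function names are invented; only the indexing matters): the safe index carries a bounds check, while `get_unchecked` shifts the range reasoning onto the programmer via `unsafe`:

```rust
fn sum(xs: &[f32]) -> f32 {
    let mut total = 0.0;
    for i in 0..xs.len() {
        // Safe indexing: xs[i] includes a bounds check.
        total += xs[i];
    }
    total
}

fn sum_unchecked(xs: &[f32]) -> f32 {
    let mut total = 0.0;
    for i in 0..xs.len() {
        // SAFETY: i < xs.len() by the loop bound, so this index is
        // always in range; get_unchecked skips the bounds check.
        total += unsafe { *xs.get_unchecked(i) };
    }
    total
}

fn main() {
    let v = [1.0, 2.0, 3.0];
    assert_eq!(sum(&v), sum_unchecked(&v));
}
```

(In practice iterator-based loops often let the optimizer elide the checks anyway, without any `unsafe`.)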

I can understand why you feel it's deciding towards verbosity over convenience. Forcing you to write out and understand the full type signature is verbose and requires more characters, even if it does make things more readable for others.

You might want to post a thread on the internals forum to see what people think about expanding type inference to inside a module. Maybe limit it so that things being exported with pub are required to have a full signature, but internal stuff can be inferred. I'm not too sure how successful this would be though, because it's a slippery slope to having APIs with auto in them, leaving users of libraries guessing what type of thing they're getting back. It's also going to be really hard to define an exact boundary, because there will be edge cases which blur the line.

Yes, you’re right that profiling is useful, because in many cases the real issues are counter-intuitive or subtle.

But what I’ve also seen is that this philosophy results in bloat: “According to the profiler… bounds checks don’t slow it… virtuals don’t slow it… the dynamic allocations don’t slow it… the lack of LOD doesn’t slow it.”

Yet it’s slow.

No one thing shows up… because you’ve taken those slow architectural decisions across the board, and your code is drowning in them.

I know most cases are not so specialised.
I don’t doubt there are times when bounds checks are the right choice, and I can certainly accept them being the default. But when push comes to shove, a systems language suitable to replace C or C++ in every possible niche needs to let the user reason explicitly about every possible operation… or the lack of one.

Forcing you to write out and understand the full type signature is verbose and requires more characters,

One suggestion I have (I’ve made an RFC) is to allow omitting the types when you impl a trait; there the types have already been defined by the trait (Haskell allows this).

This would make the trait more obviously useful (“it defined the pattern -> no need to write the pattern out again”).
That would be an example of synergy. You need the original trait for reference; Rust’s nice syntax does make searching for traits easy as well.

slippery slope to having APIs with auto in them,

Fair enough, that’s a problem. Perhaps full inference for private items is a reasonable compromise. What I would want is explicit types for crate-level exports, and the option of full inference everywhere else.

With more options, you can pick the correct tradeoff per situation.


One thing about verbosity: back in C++ there is still the option of falling back to raw pointers, which combine Box, Option<Box>, Option<&T>, maybe even a list/Option… with an intuitive guess as to which is meant (get_ vs find_ vs create_… vs take_…). (You might even argue that with unique_ptr and &T existing, *T in C++ has a viable use as Option<&T>.)
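
That last parallel actually holds at zero cost in Rust: Option<&T> is guaranteed to benefit from the null-pointer (“niche”) optimisation, so it is the same size as a bare reference — a quick check:

```rust
use std::mem::size_of;

fn main() {
    // Option<&T> uses the forbidden null value of &T to encode None,
    // so no extra tag byte is needed -- same layout as a nullable T*.
    assert_eq!(size_of::<Option<&i32>>(), size_of::<&i32>());
    println!("both are {} bytes", size_of::<&i32>());
}
```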

What I liked so much about old Rust was that the unique_ptr replacement ~T was as easy to read and write as *T. When it’s a single character it melts away; conversely, when it’s more verbose, it’s more irritating that it’s compulsory.

Now if you had whole-program inference, it wouldn’t matter so much if the types were more verbose; and of course encoding more machine-checkable information in the type system is definitely a good thing.

Is the discussion about mod qqq<T> { } going to continue, or is the thread derailed irrecoverably?


Is the discussion about the mod qqq { } going to continue

Well I’m still very interested in this, so if anyone else has any input on that it’s very welcome.

Regarding the original proposal, my main issue is that there’s been no suggestion or discussion on how this changes client code, so it’s really only half of a proposal which makes it hard to comment on meaningfully.

So assuming the use of this feature in your library looks like this:

mod<T> {
    struct S { t: T }
    fn foo(s: S) -> T { ... }
}

Would client code just be not affected at all?

use dobkeratops::mod;
fn main() {
    mod::foo();
    mod::foo::<i32>(); // still valid, right?
}

This would mean some foo() call sites no longer match foo()'s signature in the library.

Or would the type parameter be applied to the module import instead?

use dobkeratops::mod<i32>;
fn main() {
    mod::foo(); // no type parameter allowed here
}
use dobkeratops::mod;
fn main() {
    mod<i32>::foo(); // <i32> must be on the mod, not on the foo
}

This seems to rule out ever importing a single item and passing different type parameters to it. You’d have to either import dobkeratops::mod<i32>::S and dobkeratops::mod<f32>::S separately or only import the mod and always say mod<i32>::S or mod<f32>::S instead of just “S”.

I’m certainly on board with the goal of reducing verbosity in Rust generics, but right now this idea is incomplete and I don’t see an obvious “completion” of it that does reduce verbosity without introducing other problems.

For comparison, implied bounds is an orthogonal but probably overlapping idea for verbosity reduction that seems to me like a huge improvement if we can just get the details right. I think a lot of the discussion on that thread is relevant to getting any similar proposal like this one into a state where it could be seriously considered.


P.S. Okay, I can’t completely resist the ideological arguments, so just one point there: In my experience I am far more productive with “strict” Rust generics than I ever was with C++ “duck typing”, because in Rust I rarely have to think about both the details of my generic function’s implementation and the details of its call sites at the same time (much less the implementations of several other generic functions that mine was calling). That increased modularity and encapsulation is the real benefit of enforcing proper bounds on generics. Rust having better error messages is merely a (very nice) side effect of that.
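
A small sketch of that modularity point (invented names): the bound below is the whole contract, so the body type-checks against the bound alone, independently of any call site:

```rust
use std::ops::Add;

// The bound states everything the body may assume about T; the
// implementation and its call sites can be reasoned about separately.
fn sum_pair<T: Add<Output = T> + Copy>(a: T, b: T) -> T {
    a + b
}

fn main() {
    assert_eq!(sum_pair(1, 2), 3);
    assert_eq!(sum_pair(1.5, 2.5), 4.0);
}
```

In C++, the equivalent unconstrained template would only be checked once instantiated, with the error surfacing at the call site.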


I freely admit I have not thought out all the details here; however, the existing ability to nest type parameters suggests there should be a valid path to discover here.

Or would the type parameter be applied to the module import instead?
I think it would be more like the former example, where the module type-params mostly get inferred from context, just as normal type-params are. However, when you put it like that (“importing a module with a dedicated specification”), I think you’d expect to be able to do that as well, just as you can manually specify other type-params when you want to: use vecmath::<f32>; // default precision

I have another, more unusual/speculative idea to discuss in another thread (I’ll link back to here; it’s closely related): “module-wide” shared function parameters, TLS, dynamic-scope emulation.

I support this.

I’m working on an interpreter. Programs read streams of bytes and write streams of bytes, but rather than forcing STDIN and STDOUT for IO, I’d like it to be parameterized over two types R: std::io::Read and W: std::io::Write, as this makes things such as testing easier. This means that almost every struct I’m defining in the program includes an <R: Read, W: Write>, which is quite noisy. I’d prefer if I could simply make the module parameterized over R and W.
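
A minimal sketch of the pattern being described, with hypothetical names (`Machine`, `echo`): the I/O endpoints are type parameters, so tests can substitute a byte slice and a Vec<u8> for Stdin/Stdout:

```rust
use std::io::{Read, Write};

// Every such struct carries the `<R: Read, W: Write>` noise that a
// module-wide parameter would factor out.
struct Machine<R: Read, W: Write> {
    input: R,
    output: W,
}

impl<R: Read, W: Write> Machine<R, W> {
    // Copy all of the input to the output.
    fn echo(&mut self) -> std::io::Result<()> {
        let mut buf = Vec::new();
        self.input.read_to_end(&mut buf)?;
        self.output.write_all(&buf)
    }
}

fn main() -> std::io::Result<()> {
    // In a test, plain byte buffers stand in for Stdin/Stdout.
    let mut m = Machine { input: "hi".as_bytes(), output: Vec::new() };
    m.echo()?;
    assert_eq!(m.output, b"hi".to_vec());
    Ok(())
}
```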

I’d have two syntaxes for declaring parameters:

  • You can declare parameters from anywhere inside a module by writing

    mod<R: Read, W: Write>;
    
  • A module defined with the mod foo { } syntax can be given parameters as

    mod foo<R, W> { }
    

Say we have a parameterized module foo<R: Read, W: Write> defined in some other file. It would be imported just like any other module:

mod foo;

Any attempt to actually access items in this module, however, would require that it is somehow disambiguated:

    foo::<Stdin, Stdout>::bar();
    use foo::<&[u8], Vec<u8>>::Baz;

In the example for the interpreter, main.rs would look like:

use std::io::*;

mod interpreter;

fn main() {
    let src: String = /* read a file */;
    interpreter::<Stdin, Stdout>::interpret(&src, stdin(), stdout());
}

#[cfg(test)]
mod tests {
    fn run_and_collect_output(src: &str,
                              input: &str) -> Option<String> {
        let mut output: Vec<u8> = Vec::new();
        interpreter::<&[u8], &mut Vec<u8>>
                   ::interpret(src, input.as_bytes(), &mut output);
        String::from_utf8(output).ok()
    }
    #[test]
    fn test() { ... }
}

Perhaps there could be some inference for this, the way there is for type parameters in functions. That way perhaps ::<Stdin, Stdout> and ::<&[u8], Vec<u8>> could be omitted.

In addition (or alternatively), perhaps there should be some way to “distribute” the type parameters to a module’s constituent items. For example, say a math3d module was parameterized over F: Float and exposed

pub struct Point(F, F, F);
pub fn distance(p1: Point, p2: Point) -> F { ... }

It should be possible to turn math3d's type parameter into a type parameter of distance so that you could import it generically.

use<F> math3d::<F>::distance;
/* `distance` will now behave as if it had a type parameter F */

To import math3d in such a way that all of its items had its type parameters distributed to them, you could write:

use<F> math3d::<F>;

This way, math3d would become unparameterized and all of its items would take the parameter instead (the way the module would be written today).

Syntax is up for bikeshedding, but that’s the general idea.
