Module wide type-parameters, thoughts

(see also another more speculative tangential idea: 'module wide' shared function parameters, TLS, dynamic scope emulation)

Let's say I make a type in the root of my project, and I have submodules that use that type; so far so good.

Let's say I then want to generalise this - re-use those submodules in different programs, with different base types…
At the source-file/directory level, I can just rip those modules out and use them in a different program that uses a different 'typedef', and I presume that would work;
but for maximum convenience in re-usability, those would really want to be partitioned off into a crate…? …but then I'd lose the ability to just recompile with a different typedef.

I could go all the way and trace through the whole thing adding the type in question as a type-parameter… but this would be a lot of clutter.

hence the question:-
Would there ever be enough demand for 'module-wide type-parameters', which I think would handle this situation…

e.g. the equivalent of saying mod foo<T> { .... any items referring to T behave as if they also took a type-parameter 'T' } … but quite how you’d specify those at the file or crate level, I don’t know…

(does ML have this? I heard of something that sounded vaguely similar but I haven’t looked into it… ).

(see also: the nesting of type-parameters possible in C++ classes, e.g. you can have an outer class with 'T' which is then referred to by inner classes, and write a lot of code that doesn't need the outer type parameter manually written. Of course C++ nested classes have some flaws; a completely general 'module-wide type parameter' might be a lot better.)


So, without a more specific case I don't see too many situations where this is not achievable with standard generics. In general you would be able to write all the structs with a generic parameter <T>; however, whenever two structs interact, the methods will usually be implemented only for two structs which share the same generic parameter <T>.

For instance

struct SymbolicInt<T> { /* ... */ }

struct SymbolicFloat<T> { /* ... */ }

impl<T> Add<SymbolicInt<T>> for SymbolicFloat<T> { /* ... */ }
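A compilable version of that sketch might look like this (the tuple fields and the `Output` type are assumptions for illustration, not anything the poster specified):

```rust
use std::ops::Add;

// Two generic structs that only interact when they share the same T.
#[derive(Debug, PartialEq)]
struct SymbolicInt<T>(T);

#[derive(Debug, PartialEq)]
struct SymbolicFloat<T>(T);

// The impl is only provided where both sides carry the same T,
// which is the "shared generic parameter" pattern described above.
impl<T: Add<Output = T>> Add<SymbolicInt<T>> for SymbolicFloat<T> {
    type Output = SymbolicFloat<T>;
    fn add(self, rhs: SymbolicInt<T>) -> SymbolicFloat<T> {
        SymbolicFloat(self.0 + rhs.0)
    }
}
```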

I know it's achievable with standard generics;
the opportunity IMO is to reduce repetition, and simplify/clarify (i.e. reducing the syntactic weight around the declarations)


mod vecmath<F: Float> {
     ... dozens of functions using F and more type-params
     ... making versions of this library that work for f32, f64
     ... but without having to explicitly write the <F> every time

     ... imagine if the code started out with 'F' merely being a 'type F = f32;'
}

mod mesh_stuff<F: Float, VEC3: vecmath::MyVector<Elem = F>> {
    // implements mesh functions for any float type and any vector type,
    // e.g. facilitating interop with other libraries' vector maths types
}

in C++ we get quite a bit of reduction in repetition when we write the type-params once for a class full of inline functions; Rust might be able to do something similar but cleaner
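For contrast, here is roughly what the vecmath sketch requires in current Rust, with the `<F: Float>` repeated on every item (the `Float` trait below is a hand-rolled stand-in for `num_traits::Float`, an assumption made only to keep the sketch self-contained):

```rust
mod vecmath {
    // Stand-in for num_traits::Float, so the sketch compiles on its own.
    pub trait Float: Copy + std::ops::Add<Output = Self> + std::ops::Mul<Output = Self> {
        fn sqrt(self) -> Self;
    }
    impl Float for f32 { fn sqrt(self) -> Self { f32::sqrt(self) } }
    impl Float for f64 { fn sqrt(self) -> Self { f64::sqrt(self) } }

    // Every item repeats the <F: Float> that the proposal would hoist
    // up into the module header:
    pub fn dot<F: Float>(a: [F; 3], b: [F; 3]) -> F {
        a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    }

    pub fn length<F: Float>(v: [F; 3]) -> F {
        dot(v, v).sqrt()
    }
}
```

One module works for both f32 and f64, but only by paying the per-item annotation cost the proposal wants to remove.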

I'm not sure I understand the example correctly, since for your second module, instead of what you've written, you can have

type Vec3<F> = vecmath::MyVector<F>;

mod mesh_stuff {
    // in every place where you use that vector type you now write
    // <F: Float, Vec3<F>>
}

let me try, but what I had in mind was:

mod mesh<F, VEC, INDEX> {
   struct Mesh { points: Vec<VEC>, tris: Vec<[INDEX; 3]> }

   impl Mesh {
      fn get_radius(&self) -> F { .. }
   }

   fn make_cube(size: F) -> Mesh { .. }
}

would behave like

mod mesh {
   struct Mesh<F, VEC, INDEX> { points: Vec<VEC>, tris: Vec<[INDEX; 3]> }

   impl<F, VEC, INDEX> Mesh<F, VEC, INDEX> {
      fn get_radius(&self) -> F { .. }
   }

   fn make_cube<F, VEC, INDEX>(size: F) -> Mesh<F, VEC, INDEX> { .. }
}

I see, but isn't this just syntactic sugar, without anything fundamentally different? One way to address it is with a macro which you could build for this (although I would not recommend it and would just stick with the extra source). Don't get me wrong, there is nothing bad about it, but I think this kind of feature is on the low-priority side for the Rust team (note: this is an opinion, as I'm not part of it), while other things with a lot more impact (more than just sugar) have higher priority.

there are various ideas, but my observation is that when I'm writing Rust, compared to C++, for various reasons there's a lot of much heavier angle-bracket-wrapped annotation required: in part for repeats ('impl … for …'), in part for the lack of propagation to inner items, in part for the need for trait bounds.

There are times when 2 lines of C++ (with the 1st just being a cut-paste 'template<typename…>') become 5+ lines of angle brackets that take 4x as much thought. It's like rewriting the function - which might be a single expression - in an obscure type language for the bounds: 'fn…{a+b*c}', but then you have to write 'ensure that A can be added to the product of B and C, whose output matches the declared output of the whole expression' as a mess of nested angle brackets…
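To make the 'a+b*c' complaint concrete, here is a hedged sketch of what the fully generic one-expression function needs today (type names are arbitrary):

```rust
use std::ops::{Add, Mul};

// One expression, but the signature re-derives its type structure
// in the trait language: "B times C must exist, and A must add to
// that product, and the result is whatever that addition yields".
fn mul_add<A, B, C>(a: A, b: B, c: C) -> <A as Add<B::Output>>::Output
where
    B: Mul<C>,
    A: Add<B::Output>,
{
    a + b * c
}
```

With concrete types the bounds vanish, which is the asymmetry being complained about: the generic signature is several times the size of the body.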

it does add mental steps when reading the code, especially with a flip in parameter ordering sometimes, needed because we lose overloadable free functions (it really bugs me how you often write 'impl Foo for X {…}' for a single function where the arguments flip).

there are other ideas I'd take in preference to this (one of my highest requests would be to elide types altogether in trait impls - Haskell seems to allow this, making its class/instances seem a bit less oppressive), but I like to put them all out there.

this post might sound negative but sometimes things that seem negative are so because of omissions, not because they are inherently bad.

Whilst we lose classes, we get modules… but at first I found Rust's module system really annoying (excessively deep namespacing and private-by-default…). But if modules became a means of actually managing the heavier type-parameters, they'd become much more of a virtue (and indeed there are plenty of times in C++ where I'm making a class, but what I really want is a module).
They'd also start to work as a means of controlling the indentation level (if the file basically is a class…).

I can almost imagine the concepts of modules, structs, traits (even enums) blurring together, with those item names just being 'syntax sugar' for declaring an item biased toward one aspect ('a trait is an item with functions…', 'a struct is an item with just fields'…). But then you generalise and say that both can have associated types, that traits can demand the existence of certain fields, and that structs could have an embedded vtable pointer, being trait-like - which you could also 'match' against just like an enum, as sugar for an 'if (…dynamic_cast…) {}' ladder in C++. And finally a module is just an entity that happens to, by default, have only associated types and free functions.

I think I'd even like to be able to put code at the root level of a file and use the module name as a function (or something equivalent… fn self() {}?). There are times when you use a module and it's really there just to wrap one main type (examples: hashmap::HashMap, vec::Vec - I think these are examples of crossover between the idea of a module and a class). Well, imagine if that module name just became the default constructor function and the details were held within - imagine basically having said 'these types all live inside the namespace of this constructor function'.

[quote="Botev, post:6, topic:11792"]
but I think this kind of feature is on the low-priority side for the Rust team
[/quote]

Also note that the community can collaborate to improve the language; it's just that the Rust team have a set of opinions that may block potential inputs… we need to get them to see the virtue of an idea, then anyone else can go and implement it.

I can't be the only one in the world who's experiencing this.

Ah, no worries if it comes across as negative, as long as there is some form of valid/constructive criticism. However, I'm going to disagree on some of your points - I think a big part of what you are discussing are things which are more "ideological" issues about how one thinks it is correct to write ergonomic code. Concretely:

So this is a difference between C++ templates and Rust traits. Note that a C++ template function will only be instantiated upon actual usage. On the other hand, the impl of a generic trait will be compiled upon compilation of the module. What this means is that the C++ compiler checks the validity of the function at every single invocation, while the Rust compiler verifies the implementation once and only checks trait bounds when the function is invoked. This saves compilation time as well as giving much more accurate localisation of where the code error is. Consider if you have something like Eigen which is very heavily templated. The error from such a mistake will be buried deep inside the template and you will get the usual unreadable errors. Rust, on the other hand, will stop directly at the line where you are invoking the trait incorrectly. This saves both compilation time and significant debugging effort, for the upfront cost of a bit more verbose code and thinking - something that Rust does all over the place, for the right reasons.

As for the module/traits/structs merging, I personally would disagree. First, defending modules - you as a developer are allowed to construct the hierarchy as flat or deep as possible, so that IMHO can not be an argument against it. Second, you are allowed to publicly re-export inner modules up the chain, giving you very exact control of the API interface while allowing you to have a different inner structure for the crate. The private by default also has a very good reason for it - minimising name clashes when people import the namespace. I think that having good hygiene on imports is pretty good (I myself never use `use foo::*`, but hey, people do). Also, such complicated intertwining of things can have an impact both on complexity in the compiler as well as on low-level memory savings of objects if you need to store extra stuff per object to distinguish what it is, etc. For me, actually, one of the selling points of Rust I like is that it is very strict and makes you do exactly what you need and nothing else. Also, allowing mixing between structs/traits/modules will just beg for abuse, and you will end up with just having to have a single keyword, which I think is not a good idea.
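The re-export point can be shown in a few lines; the crate keeps a deep private structure while exposing a flat API (crate and module names here are invented):

```rust
mod mycrate {
    // Deep internal layout, invisible to users of the crate:
    mod internal {
        pub mod mesh {
            pub struct Mesh {
                pub tri_count: usize,
            }
        }
    }

    // Flat public surface: users just write `mycrate::Mesh`,
    // regardless of the private structure underneath.
    pub use self::internal::mesh::Mesh;
}
```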

On a general note, I'm an amateur in Rust, so maybe some more experienced people can tune in to give their thoughts.

… sure… but whilst sometimes it clearly helps, it doesn't in all contexts. We've just shifted the problem from error messages into writing vast trait bounds.

The STL goes to huge levels of abstraction, but simple templates aren't really hard to write at all (i.e. when they don't have to handle every variation of every use case in the whole world). You get them working by example with one type, then stick 'template' in front, and hey presto, your proven code path works with any suitable types instead of the one example you started with.

The key thing is the philosophy of starting with working code and then generalizing it, rather than considering everything that could ever go wrong and guarding against it (before the code can even do anything).

Something where I find myself diverging from the Rust community: we have to write tests for other reasons anyway; the test itself does some of the work of validation - including, in my example, figuring out that the types work.

Rust programs have bounds-checked arrays; to my mind that is a "debug build". If your program can generate an out-of-bounds array access, it's still 'incomplete'. Your engine-control software for your aeroplane has to have been tested to the extent that you know it will never happen (well, just like empiricism in science: if we're 99.99% sure, we might as well say 'proven'), in which case you can get rid of the bounds check.

Consider if you have something like Eigen which is very heavily templated.

if it's what I imagine, you probably couldn't even write it in Rust.

I'm thinking back to tricks we used with C++ maths types for console gamedev, i.e. partitioning off types that would handle 'vector select masks' to write conditional-select code that stayed in the vector pipeline, and dimension checking / semantic distinctions between 'points', 'normals', etc. Exactly this kind of 'typeflow' is an absolute nightmare to do here.
Moreover, we were able to write classes with conversions to abstract all the low-level details the machine handled, for compression (e.g. fixed-point stuff). For this stuff in particular, as soon as you say 'A*B->C', the complexity of the traits explodes exponentially; you only need 2 or 3 operators and it's unmanageable.
In C++ it's got even easier with 'auto' (I'm thinking back to what we did 10 years ago).

I've never needed to do this, but consider a debugging option of slotting in some kind of debug variant that prints what's going on.

Anyway, C++ is going to get concepts eventually: I think it will be superior as an optional retrofit, whilst still being able to use duck typing / free-function overloading where it is already helpful.

I also note that the situation isn't as oppressive in Haskell because it allows whole-program inference, and it lets you elide the types in 'instances' (and yes, I do end up going in and writing more signatures to get things compiling, but not for every last function).

First, defending modules - you as a developer are allowed to construct the hierarchy as flat or deep as possible

it's the fact that it's compulsory (OK, it's a lot better now that the globs work), but it's compounded by the traits being part of the namespace. What I'm ending up with (in vector maths code) is a total cat's-cradle mess; I want to think purely in function names, really (the fact that the trait bounds become critical to the namespacing is what I despise). They give the 'cowboy::draw' example… I say, in the case where you really do have 2 traits with clashing members on the same type: just rename one (and there's distinction in their parameters); not every trait will be crossed with every type in the entire ecosystem.

The private by default also has a very good reason for it - minimising name clashes when people import the namespace

what I'm experimenting with lately (analogous to my finding that "I hate headers, but actually making more - one per class - actually makes life easier") is using more modules, and deliberately using the names: the names are compulsory, so I figure I will try to leverage them;

e.g. instead of glob-importing a module's contents, have a module 'draw' of rendering functions ('draw::line()', 'draw::curve()' etc.), a module 'matrix' ('matrix::identity()', 'matrix::rotate()'), etc. Even 'window::new() -> Box<window::Window>', not needing to write 'Box<Window>::new()'… (I think 'things destined for the stack' and 'things destined for dynamic alloc' separate naturally.)

(I'll see how it goes..)
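A sketch of that module-per-concept layout (all names are invented, and the bodies are stand-ins just so the sketch compiles):

```rust
mod matrix {
    pub type Matrix = [[f32; 2]; 2];

    // Call sites read noun::verb: matrix::identity(), draw::line(..).
    pub fn identity() -> Matrix {
        [[1.0, 0.0], [0.0, 1.0]]
    }
}

mod draw {
    // Stand-in body: returns the segment length instead of rendering.
    pub fn line(a: (f32, f32), b: (f32, f32)) -> f32 {
        ((b.0 - a.0).powi(2) + (b.1 - a.1).powi(2)).sqrt()
    }
}
```

The module name does the namespacing work, so the function names themselves can stay short.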

but what if we could just set the default at the top of the module: 'pub self', 'priv self'? If it's obscure syntax the compiler can guide you ('can't see x… consider pub self').

And in turn that gets back to my notion here: if we do have to have them, let's beef them up to make them as useful as possible.

as well as on low-level memory savings of objects

None of my ideas would impact the low-level implementation: I'm just saying that conceptually these concepts would be merged, and then the compiler would instantiate what it needs (just as in C++, all 'classes' are conceptually the same thing, but not all have vtables).

there are other ideas on this; I can imagine generalizing the concepts of the vtable and the enum. Note that we already have special-purpose logic for 'figuring out the variant from a single data element', i.e. Option<*T> uses the null pointer to represent the tag, and there's talk of allowing the 'index or -1' idea. Imagine if that whole idea just became yet another thing you can overload; then, in the case where you do have an 'internal vtable' (like a C++ object), you could seamlessly slot it into 'match' expressions. (Conceptually, think of enum Foo {Bar,Baz} as a shortcut for creating some 'class Foo; class Bar : Foo; class Baz : Foo' that just happens to have a tag instead of a vtable - but you could still overload something to generate one, or let the compiler sift through the sourcebase and generate a vtable from all the match uses.)
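That 'null pointer as tag' behaviour is observable in today's Rust; a small check using only the standard library:

```rust
use std::mem::size_of;

// Niche optimization: `None` is represented by the null value of the
// non-nullable pointer inside, so the Option wrapper adds zero bytes.
fn option_is_free() -> bool {
    size_of::<Option<Box<u8>>>() == size_of::<Box<u8>>()
        && size_of::<Option<&u8>>() == size_of::<&u8>()
}
```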

What I note is that there is a complete spectrum of permutations between 'sort by type / sort by function, closed/open, single/multi dispatch'.

I think 'internal vtables' will be demanded for a similar reason to why we have anonymous functions (lambdas), despite the fact that they are just trait-object sugar in Rust (and 'class sugar' in C++): the case where there is just one is so common that it's worth having sugar for it. Yes, fitting everything to single inheritance is bad, but there are enough cases where 'one main interface per object' is useful that it's worth having the sugar (and the runtime optimization of embedding, hence thin pointers).

While you voice many valid concerns about things which are easier to do in C++ than in Rust today, I honestly think that you heavily underestimate the maintainability, usability, and robustness benefits of the approaches that Rust has taken by brushing them off as unnecessary procedure and paranoia.

Speaking more specifically about generics and traits, since I think that's where these benefits shine most...

I agree that explicitly writing complex trait bounds is unpleasant. I disagree that it is in any way a comparable trade-off to the user experience of having to cope with template error messages in C++, for the following reasons:

  • Libraries are written once and used many times. In fact, they tend to remain in use long after their development has slowed down or ceased. So if we can (greatly) improve the experience of using a library at the cost of (slightly) degrading that of writing it, it is without any doubt a good usability trade-off.
  • Large libraries tend to be maintained over long periods of time by many different people. On this front, the fact that duck-typed C++ template code is full of unspoken assumptions that are invisible to anyone but its author (and even then...) is a serious issue.
  • Moreover, the roles of library developer and user are not symmetrical. A library has many more users than it has developers, and is in principle built for their benefit, not that of the developer. In this sense, whenever one has to choose between developer and user convenience, users should take priority.
  • The user experience of dealing with template error messages in C++ is purely and simply horrible. Just because you happened to violate an interface contract that the library's developer did not bother to document anywhere, you get flooded by megabytes of incomprehensible error messages originating from deep inside the implementation details of the library you're using, including sometimes from another library that it's using.
  • The user experience of spelling out trait bounds in Rust, in contrast, is not so bad. If you missed a trait bound, rustc will tell you in a concise error message, and usually provide good suggestions of traits that could fill the gap.
  • And if you find yourself needing a certain set of trait bounds many times, you can roll out a "larger" trait which accumulates all these bounds, and reuse that. So trait bounds do not need to be long and repetitive. If you need a lot of them, you have ways of streamlining them.
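The last bullet's "larger trait" technique might look like this (the `Numeric` name is made up; the blanket impl is a common way to make the bundle apply automatically):

```rust
use std::ops::{Add, Mul};

// Bundle the recurring bounds once...
trait Numeric: Add<Output = Self> + Mul<Output = Self> + Copy {}

// ...with a blanket impl so every qualifying type picks it up for free.
impl<T: Add<Output = T> + Mul<Output = T> + Copy> Numeric for T {}

// Downstream signatures shrink to a single, meaningful bound:
fn dot3<T: Numeric>(a: [T; 3], b: [T; 3]) -> T {
    a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
}
```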

...and that works until someone tries to instantiate it with a non-suitable type, and gets at best thousands of incomprehensible error messages and at worst an implementation which compiles but does not work. Stopping at this point and shipping the library as-is is irresponsible, as it is the generic programming equivalent of the "works for me" customer service antipattern. The fact that too many people do this is the reason why most C++ developers only approach templated code with fear, which I think is not something that the C++ community (which I also belong to) should take pride in.

You need to set a clear interface contract in order to produce robust generic code, and this is what Rust traits (or Ada generics, or the C++ Concepts proposal, for that matter) are about.

I do not think that there is anything which prevents you from following that methodology when writing generics in Rust. Can you clarify?

Unfortunately, duck-typed generic code cannot be very exhaustively tested by its developer. You cannot just test it for every possible input type, as there is an infinite amount of possibilities, and unlike with values there is no notion of "boundary" type that could save you in white-box testing scenarios.

This is where Rust's traits come in: they guarantee you that the compiler will test, at instantiation time, that the input type is right. Since you cannot test it yourself on the type which your user has in mind, let the user's compiler do it.

The thing is, any non-trivial piece of software has bugs. Testing does not reveal all the bugs because it only explores a tiny portion of the parameter space. And static analysis, while powerful, is only able to prove the absence of certain classes of bugs, and highly relies on developer-supplied metadata (like trait bounds in Rust) to do so. This means that some form of run-time analysis is also needed in order to ensure that on the day where the remaining bugs will manifest, the behaviour of the software will remain well-defined.

When a run-time check fails, your software crashes. Bummer. Best avoided, but usually fixable by restarting it. When there is no run-time check, however, you get undefined behaviour: the software can continue to run, but does something totally unexpected by its creators. Like blowing up some hardware, killing someone, leaking cryptographic secrets, or executing arbitrary code chosen by an attacker. This is why unless some absolutely critical performance concern demands the suppression of those run-time checks that guarantee the absence of undefined behaviour, they should remain on even in release builds.

On this matter, I should also point out that the performance impact of removing a run-time check should be measured, as modern compilers are quite talented at optimizing them away in common use cases like array bound checks in a loop. And I should clarify that I am not just talking about run-time checks that are automatically written by the compiler, like array bound checks, but also about those that are added by the developer to check application-specific contracts, such as assertions about the invariants of an object.
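As an illustration of check elision, here are the two common loop styles; in optimized builds the indexed form's bounds checks are typically hoisted or removed (since `i < xs.len()` is provable), and the iterator form never emits them in the first place. This is a sketch of the pattern, not a benchmark:

```rust
// Indexed loop: each `xs[i]` is bounds-checked in principle, but the
// optimizer can usually prove the index in range and drop the check.
fn sum_indexed(xs: &[f32]) -> f32 {
    let mut total = 0.0;
    for i in 0..xs.len() {
        total += xs[i];
    }
    total
}

// Iterator form: no index, so there is no check to eliminate at all.
fn sum_iter(xs: &[f32]) -> f32 {
    xs.iter().sum()
}
```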

Looking at nalgebra, whose interface is conceptually very similar to that of Eigen, I would disagree. Although for this kind of library to feel more "natural", Rust could certainly use some const generics, which are only at the RFC stage at the moment.

Unfortunately, the usefulness of concepts is quite limited if they are an optional feature. We already have something similar to optional concepts today in C++ with things like SFINAE constraints or static_assert, and the end result is predictable: a few people who care about their users use them, most people don't because they can get away without it, and the end result is that the everyday user experience of using C++ template libraries remains terrible.

...and so, you get ill-specified interface contracts, with the same results as with C++ templates: errors get reported deep inside of the implementation, and any sufficiently large piece of code becomes so incomprehensible as to be unusable and unmaintainable. Inferring types in interfaces is okay for small scripts, but it does not scale well to large programs or interfaces which are intended for use by someone else.


heh I should stick to the point of the thread but here goes…

Libraries are written once and used many times.
Moreover, the roles of library developer and user are not symmetrical.

I disagree:-

-[1] many competing libraries are written; only a few end up getting widespread use
-[2] any large program is itself a layered set of 'internal libraries'.
-[2.1] aren't some of the best libraries extracted from programs? e.g. isn't GTK the 'GIMP toolkit'? I'm guessing they basically wrote GIMP, then decided 'OK, we can generalise our UI utilities for other users'. There would have been many attempts at GUI toolkits, but one became more popular by virtue of a very useful 'demo program'.
as such, the roles are symmetrical to me

One of the reasons I'm trying Rust is that I do want to use type-parameters/generics more than I do in C++ (hence the original suggestion, to keep things on topic).

The user experience of dealing with template error messages in C++ is purely and simply horrible

sometimes, but I often used simplified templates that only did what I needed. The error messages are not so horrible, and they're easy to write (…and we are going to get Concepts eventually). I've got another idea on filtering error messages too, for some cases (version control: find the lines changed, filter vs. the errors).

'std::vector' handles every case, but I can write 2 much simpler 'vectors': one aimed at 'tools' (slower and more general) and one aimed at 'runtime' (it doesn't even need resizing, because the point of the tools is to preprocess, to vastly eliminate runtime dynamic memory allocation). But I still have both cases available as a fallback: runtime libs to optimise slow parts of tools, or tools libs when prototyping runtime code. I've had enormous difficulty explaining this idea in the Rust community ("it's a systems language, you need everything to be efficient/safe"… actually, no, not always; but you do gain by having the efficient and quick-and-dirty parts share type information, being able to move back and forth, instead of adding the cost of cross-language interfacing). You write lots of exploratory code before you figure out what to keep; this is why we've ended up 'over-using' C++ - for the convenience of having the ability to drop down and interface with the efficient/systems parts.

I have another thread where I suggest a customisation to get a parameterised index in the vector trait. I can do this sort of thing very easily in C++. I can also set up 'dimension-checked vector maths', or vector maths that uses custom compressed types, easily.

You're suggesting here that I have to wait for 'a library writer' (with aligned goals) to figure out the trait bounds… I've given up trying to replicate some of the things I already had working in C++.

This also tempts me to abuse Rust. What if I made an 'everything' trait with 'unimplemented' defaults, just to recover the experience of duck typing? Or what if I started writing macros for things that should be functions (again, recovering duck typing)? But this starts to get messier.
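That 'everything trait' abuse might look like the following sketch (names invented); it trades the compile-time check for a run-time panic, which is exactly the C++-style duck-typing trade-off being described:

```rust
// Every method has a default body that panics, so any type can claim
// the interface; failure is deferred until the missing piece is called.
trait Everything {
    fn area(&self) -> f64 {
        unimplemented!("area not provided for this type")
    }
}

struct Circle(f64);
impl Everything for Circle {
    fn area(&self) -> f64 {
        std::f64::consts::PI * self.0 * self.0
    }
}

struct Mystery;
impl Everything for Mystery {} // compiles; panics only if area() is ever called
```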

The user experience of spelling out trait bounds in Rust, in contrast, is not so bad.
If you missed a trait bound, rustc will tell you in a concise error message

nope; just like fixing template error messages, these can still get increasingly arcane, because they still effectively trace through the program (you're baking information from the potential call graph into the traits). I've had the compiler crash spitting out huge errors due to recursion 🙂
You're dealing with the same complexity eventually.

the claim is: ‘spelling out a bit more makes it easier’.

  • but we can still do this in C++: we can write a simplified test to figure out what went wrong. As I explain, my 'path' to working code starts with code that does something, then adding features and generality. I argue this teaches you more than compiler analysis can, because there are other aspects that still can't be expressed in the type system. (I like the fact Rust has #[test] out of the box. I can imagine a bit more in that direction, e.g. ways of marking 'this function should be the inverse of that, and here are the empirical test cases', 'the output of this function should satisfy these constraints, and here are the empirical test cases'…)
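The 'this function should be the inverse of that, checked on empirical cases' idea can already be approximated with a plain #[test] (the functions here are invented examples):

```rust
fn encode(x: u32) -> u32 {
    x.rotate_left(7)
}

fn decode(x: u32) -> u32 {
    x.rotate_right(7)
}

// Empirical check of the inverse property on hand-picked cases,
// including the boundary values.
#[test]
fn decode_inverts_encode() {
    for x in [0u32, 1, 42, u32::MAX] {
        assert_eq!(decode(encode(x)), x);
    }
}
```

A dedicated 'inverse-of' annotation would let the compiler or test runner generate such cases itself; today the loop over examples is written by hand.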

I’m not saying compiler analysis is a bad idea at all; I do want as much as possible - but I also want the ability to disable it.

Imagine a Rust with an option for whole-program inference and switching off the safety checks. This would be unambiguously superior to C++ in every use case; however…

I'm speculating that the 'core team' (and their financial backers / the Mozilla foundation) doesn't want this because they want safe contributions to the ecosystem, as a trade for their work? (I get that people need some reward, even if indirect; hence certain aspects of how open source works.)

but I would assert: you'd still 'grow mindshare' faster - you'd get more people using the language and learning the safe libraries. Any 'unsafe' and 'duck typed' crates can be clearly marked and filtered out (by default) for users who don't want them. Or conversely, you might see 'more useful functionality', which gets the ball rolling and hence contributors: 'OK, let's dive in and clean these up…'

right at the start of 'Jonathan Blow's language for games', I made the suggestion to him: "just fork Rust". He's a very smart chap and has ended up spending a great deal of time building a whole new language with different choices, when really what Mozilla has done, with a few minor tweaks, should have been perfectly sufficient.

I launched into Rust with great enthusiasm, then gave up. I'm back again seeing what's changed (and sure, there are some nice improvements). But I could have been a 'ruster' solidly for the past 3 years; and even if only 5% of what I wrote was safe, that would have been more than I've been able to contribute to date. (e.g. I did an HTML source viewer, but let it bit-rot in the time I was away… if you'd kept me using Rust for my engine or AI experiments I could have continued with that in the background, and it could have been integrated with rustdoc… etc.)

nope; just like fixing template error messages, these can still get increasingly arcane, because they still effectively trace through the program (you're baking information from the potential call graph into the traits). I've had the compiler crash spitting out huge errors due to recursion

I disagree. Being a heavy user of Eigen, and having used Boost and other template libraries from the very old times when the static_assert tools were not used as much as nowadays, I really would like an example where you can force the Rust compiler to reproduce the 10MB of garbage you get from a single wrong template invocation. The main difference is that with trait bounds you are never tracing through the program, as the trait contracts apply at each level. If I have a function f<T: Foo> and have recursively defined generics using it, each recursive step requires the enclosing generic function to also carry the bound T: Foo, and the error pops out at the top call; it won't dump the Kraken it does in C++. Again, please do show me an example in Rust where you can do that - I think I can quite easily find some of my old C++ code where you get a few thousand lines of error messages.
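A minimal illustration of bounds applying level-by-level (names made up): each generic layer must restate the bound it relies on, so a missing bound is rejected at the offending call site, never deep inside the callee.

```rust
trait Foo {
    fn foo(&self) -> u32;
}

fn f<T: Foo>(x: T) -> u32 {
    x.foo()
}

// Each generic layer restates the bound it forwards...
fn g<T: Foo>(x: T) -> u32 {
    f(x)
}

// ...so leaving it off fails right here, at the call, not inside f:
// fn h<T>(x: T) -> u32 { f(x) } // error[E0277]: `T: Foo` is not satisfied

impl Foo for u32 {
    fn foo(&self) -> u32 { *self }
}
```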

but we can still do this in C++: we can write a simplified test to figure out what went wrong.

Again, I highly doubt that. You can write a "simplified test" to reproduce the error, not to figure out what went wrong. This is especially evident in very large and long-standing codebases, for instance the Linux kernel, where errors can often be caused only by the superposition of several different changes put together. And if these changes span up to 10 years back on the time axis, good luck reviewing 10 years of kernel code changes.

I’m speculating the ‘core team’(and their financial backers/Mozilla foundation) doesn’t want this because they want safe contributions to the ecosystem, as a trade for their work?

I would argue here, expressing my own opinion and impression, that it is not only the core team but a big part of the Rust community that wants safe contributions to the ecosystem; it's pretty much the main appeal of using Rust in the first place. If a bunch of very commonly used libraries in this language are unsafe, and 80% of all Rust code relies on them, then why even bother using Rust? Yes, traits and a few other things; but for me at least, the significantly stronger guarantee that things are safe to use in production is the biggest selling point, and why I would spend my time learning this significantly more complicated language (the initial effort is quite steep while you wrestle with the borrow checker and get the hang of it).

But I would assert: you'll still 'grow mindshare' faster; you'll get more people using the language and learning the safe libraries. Any 'unsafe' or 'duck typed' crates can be clearly marked and filtered out (by default) for users who don't want them. Or conversely, you might see more useful functionality, which gets the ball rolling and hence attracts contributors: 'OK, let's dive in and clean these up…'

And it would make such a huge mess and pandemonium for maintainability. Assume you use the "safe" library. Then the library adds 2 features, one of which you want and need, and one which now uses some other "unsafe" library. Suddenly, in your use of the library, you get a security vulnerability because something can segfault; but the original "unsafe" library was never intended for this, so its authors are obviously not responsible. Also, how much fine-grained control do you think you will have over the "safe" library? Are you going to have to decide, for every single one of the 1000 crates in the downstream dependency tree, whether you use its "safe" or "unsafe" mode?

At the moment, nothing stops you from creating an unsafe library just to showcase an idea; then, if you think it's cool, you can convert it to safe code and only then publish it. I don't think this harms the process of developing new things.

Finally, I want to discuss all these things about template vector libraries. I personally am strongly against having anything like what Eigen is in general. Why? Because they are essentially writing their own compiler inside the templates of the source language. The result: you get these horrendous messages. What one should be doing, for instance in Rust, is basically to write the "compiler" properly, without the templates, and then make a compiler plugin which optimizes the intrinsics of your structs. Then you can get very nice messages and the exact same performance. The template solution, in my opinion, is just a bad workaround for what you actually wanted to be doing, contorted so that it can blend into the language. With compiler plugins in Rust that is pretty easy to do without needing templates whatsoever. So, to the point that you can't write Eigen in Rust: probably not, but thank god for that. On the other hand, you can write a compiler plugin which does the same as Eigen but is a lot more maintainable and user-friendly.

Part of the 10MB is your own source code: in the process of doing the same thing, you have to express the same information.

I'm finding it harder to reach an end result, and as such have given up trying to do some things that I find easy in C++.

Some of the libraries are too big and bloated; it is the fact that they try to be all things to all people that makes them hard to use.

I can write templates that handle the cases I need.

**My motivations for using Rust, sorted by priority:**

  • [1] no header files
  • [2] better lambdas (via 2-way inference, and just plain better syntax |x|..)
  • [3] 'immutable by default', unsafe globals (that's the one restriction I DO agree with)
  • [4] enum/match (this feature is awesome)
  • [4.1] 'everything is an expression', i'll list it here where it shines.
  • [5] better macro system / 'reflection'-type use cases, e.g. easy serialisation.
  • [6] tuples (I really enjoy the ability to anonymously group values, in a syntactically lighter way than 'std::pair()')
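A small sketch (hypothetical `Shape`/`min_max` names, not from any real codebase) illustrating three of the points above together: enum/match, 'everything is an expression', and tuples being lighter than `std::pair`:

```rust
// enum/match: a tagged union with exhaustive pattern matching.
enum Shape {
    Circle(f64),    // radius
    Rect(f64, f64), // width, height
}

// `match` is an expression, so its result flows straight out as the return value.
fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle(r) => std::f64::consts::PI * r * r,
        Shape::Rect(w, h) => w * h,
    }
}

// Tuples anonymously group values; no struct declaration, no std::pair noise.
fn min_max(xs: &[f64]) -> (f64, f64) {
    xs.iter().fold((f64::INFINITY, f64::NEG_INFINITY),
                   |(lo, hi), &x| (lo.min(x), hi.max(x)))
}

fn main() {
    let shapes = [Shape::Circle(1.0), Shape::Rect(2.0, 3.0)];
    let total: f64 = shapes.iter().map(area).sum();
    println!("total area = {}", total);
    println!("{:?}", min_max(&[3.0, 1.0, 2.0]));
}
```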

But to get these, I'm finding the traits (the loss of general overloading) and full safety a hindrance; and I can't just wrap unsafe{} and use raw pointers, because the rest of the raw-pointer experience has been (seemingly) deliberately made awkward. Either one of those on its own might be OK; it's the combination of both that begins to seem oppressive.

C++ is used for control, but it's damaged for productivity in awkward ways by mere syntax (headers, the misuse of 'premium' characters). I really do just want C++ cleaned up. Rust came with a load of 'bonus features' (like match) which are very interesting.

It should be possible to pick and choose the exact blend of features I want. Software is the most malleable medium on this planet.

What happened along the way is that some features I liked got removed (~T, ~[T]: syntactically light smart pointers and vectors counted for a lot, playing well with () to make light function signatures. I would have gone further and added [K=>V] for maps, giving very light signatures for a lot of common code; note that Swift also has [K:V], so my notion isn't entirely spurious, a tech giant agrees. Also 'do notation' for internal iterators, dropping the nesting level for lambda-based code. Meanwhile some features landed in C++, which is why I didn't stick with it originally.)

At the moment, nothing stops you from creating an unsafe library

(addressed above: beyond merely needing 'unsafe', the language syntactically discourages raw pointers in other ways)

For a better insight into my POV: I agree with about 90% of what Jonathan Blow says in 'Ideas about a new programming language for games' (YouTube), where he discusses how he considered Rust, D and Go as alternatives, and why he rejected all 3 and continued with his own language.

All it would take is a few options to relax things, and it would open this language up as a definitive choice. Just unleashing full inference alone would make things more pleasant. (I note that in Haskell you don't actually have to implement all the functions in a typeclass; in Rust we can write unimplemented!(), but we could recover the C++ use case as an opt-in if there were a 'compile-time failure' variant, a stronger 'unimplemented'.)
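A sketch of what is possible in today's Rust: a trait with a default method body that panics via unimplemented!(), so implementors only fill in what they actually use. (The trait and types here are hypothetical illustrations; the stronger 'compile-time failure' variant mentioned above does not exist, this is only the runtime approximation.)

```rust
trait Renderer {
    // Required: every backend must provide this.
    fn draw_line(&self, from: (f32, f32), to: (f32, f32));

    // Optional capability: a backend that never needs it doesn't write it;
    // the default body panics at runtime if it is ever reached.
    fn draw_curve(&self, _pts: &[(f32, f32)]) {
        unimplemented!("draw_curve not supported by this backend")
    }
}

struct DebugRenderer;

impl Renderer for DebugRenderer {
    fn draw_line(&self, from: (f32, f32), to: (f32, f32)) {
        println!("line {:?} -> {:?}", from, to);
    }
    // draw_curve deliberately left to the panicking default.
}

fn main() {
    let r = DebugRenderer;
    r.draw_line((0.0, 0.0), (1.0, 1.0)); // fine
    // r.draw_curve(&[]);               // would panic at runtime, not compile time
}
```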

Sure, having the types in function signatures is a reasonable compromise; but sometimes you have working code and then want to extract a function... and that's hard to do, because you now have to figure out the signature (which might involve complex intermediate types).

As it stands, I have to keep going with C++ (modules will fix [1], and I can resort to a #define for compact single-expression lambdas), and possibly wait for his language (his goals align more closely with mine)... or even continue with my own (which would get no other users, but I could inter-op with C++ better by supporting more of the feature set: GitHub - dobkeratops/compiler: C/C++ subset resyntaxed like Rust, + tagged-union/Pattern-Matching, UFCS, inference; LLVM).

I hear that Swift is actually going to get move semantics; that might be another option.

Then the library adds 2 features, one of which you want to use and need and one which now uses some other "unsafe" library. Suddenly, in your use case of the library, you get a security vulnerability

I don't see that happening; surely 'unsafe' can be correctly flagged through the call graph and/or module dependency graph if it really is an unsafe module (linking to unsafe crates would be blocked unless you deliberately asked for an unsafe build).

But 'unsafe crates' will continue to exist of course, i.e. all the bindings to existing C/C++ projects; they're not going to rewrite OpenGL, Vulkan, all the video codecs etc. in Rust.

Bear in mind this conversation is conflating 2 things: unsafety, and Rust's other 'oppressive choices' (compulsory traits and no overloading). Just loosening one of those would ease things a lot. The 'unsafe' case will remain; C++ won't go away, there are massive sourcebases in regular use, and for most use-cases the solution is to layer an 'application language' on top (most of my friends are busy using C# now and loving it).

I'll check how the 'intrinsics story' is going in Rust, BUT
the fact is machines change, ISAs change. In the past it was console gamedev; in future it might be new chips for AI (what if they start making RISC-V with custom accelerator units? what about the Movidius chip?). C++ with intrinsics gives you a blend of customisability and optimisability out of the box that is hard to match; not everyone has time to customise their compiler or study its internal API.

I can tell you that the work I've done 'in anger' was all about 'getting shit done' on new platforms before the tools were ready, and hence having a competitive advantage (being on a platform before rivals). That meant being able to drop down to ASM for custom instruction use. Microsoft took a step forward by actually enabling C-like intrinsics, but originally those weren't accessible if you wrapped types in 'classes', only via a pure intrinsic typedef. They fixed it eventually, but if you had architected your code reliant on that, you missed the critical first/second-wave window, and your product releases along with a flood of competitors.

I'm not doing this right now; I am looking into 'other languages' out of interest. But the point is, if we are talking about the ambition of a C/C++ replacement, it must be able to handle the same use-cases.

If I just wanted to 'ship apps' like anyone else, using existing libraries, there's Swift/C# etc., and you get the use of optimised OS services (if you really want to defer all that to someone else).

I think it's possible to improve on 2 of 3 axes - performance, productivity, safety - but maybe not all 3 simultaneously.

On that last point about new chips: it's pretty much why LLVM is split into front end / middle end / back end, so I don't see that ever being an issue. You can do the same thing at various levels to squeeze out that performance; specifically, since you are talking about AI, look at Theano and TensorFlow - they are compilers embedded in Python.

It is still heavier work to customise a compiler. The scenario I describe was before LLVM, but Microsoft always had their own compiler team; the point is it's another thing you have to wait for, and it takes a finite amount of time to fix. Intrinsics are actually a nice level to work at, IMO, because they expose a mental model closer to what you're trying to optimise for, without having to go all the way down to ASM. (I actually didn't like wrapping this stuff in C++ classes so much, but eventually it was possible and we did it, and many people did like it. Conversely, ASM still got used for squeezing out every last drop.) Anyway, that's just one of several cases. Another is dealing with a multitude of compressed data formats: expressing the types and their transitions can be hard, but the consistent rules of conversion operators can handle things for you, and you can insert checks in the debug build. This is a philosophical point, but as my "airplane control software" example shows, Rust's talk of safety is still one point on a sliding scale, with omissions; it doesn't eliminate the need for empirical tests.

What we also had was a situation where branches were performance killers, so runtime tests had to be minimised.

This sort of thing still happens; e.g. the move from GL to Vulkan is partly about reducing the amount of runtime validation being done: a lighter driver that can assume correct state, putting the responsibility for buffer management in the hands of the application, because the 'nice safe' interface can't exploit whole-program assumptions.

If you have a potential hazard like divide-by-zero, the program has to avoid it through high-level logic (e.g. by filtering out degenerate triangles before you compute normals; then you know 'compute normal' doesn't need a divide-by-zero check, etc.).
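A sketch of that hoist-the-check idea in Rust (hypothetical triangle/normal helpers, not from any real engine): one up-front pass removes triangles whose area is near zero, so the per-triangle normal computation can divide unconditionally.

```rust
fn cross(a: [f32; 3], b: [f32; 3]) -> [f32; 3] {
    [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
}
fn sub(a: [f32; 3], b: [f32; 3]) -> [f32; 3] {
    [a[0]-b[0], a[1]-b[1], a[2]-b[2]]
}
fn len(v: [f32; 3]) -> f32 {
    (v[0]*v[0] + v[1]*v[1] + v[2]*v[2]).sqrt()
}

// One up-front pass drops triangles whose cross product is ~zero length...
fn filter_degenerate(tris: Vec<[[f32; 3]; 3]>) -> Vec<[[f32; 3]; 3]> {
    tris.into_iter()
        .filter(|t| len(cross(sub(t[1], t[0]), sub(t[2], t[0]))) > 1e-6)
        .collect()
}

// ...so the hot loop can divide by the length with no per-element branch.
fn normals(tris: &[[[f32; 3]; 3]]) -> Vec<[f32; 3]> {
    tris.iter().map(|t| {
        let n = cross(sub(t[1], t[0]), sub(t[2], t[0]));
        let l = len(n); // non-zero: guaranteed by the filter pass above
        [n[0]/l, n[1]/l, n[2]/l]
    }).collect()
}
```

The invariant lives in the high-level logic (the filter), not in a check repeated inside the inner loop.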

[quote="dobkeratops, post:17, topic:11792"]since you are talking about AI look at Theano and Tensorflow - they are compilers embedded in python.[/quote]

These are pretty cool, but what my thinking relates to is the level of work Google must have done to get all this working in the first place.

There's their TPU, which is a low-precision matrix-multiply engine; I anticipate that we will see more units (more like the Movidius vision chip). At the minute the world is using GPUs, but they come with a load of extra complexity geared for rendering; an AI chip would still want more versatility than the plain TPU (think about the potential ways in which weights could be compressed). They'll go through their own path of evolution. Personally I think the ideal device will be something like the Adapteva Epiphany-V, with more control over local memories, but I've no idea what the state of the art is inside MS/Google/Apple at the minute.

Some people think plain CPUs and GPUs will continue to reign supreme, but I'd argue AI/vision is going to be a big enough area to get its own fully dedicated units.

I know Rust is of interest for IoT (intersection of online and embedded).

The gamedev case is interesting: Jonathan Blow explains very well the ways in which Rust still hampers 'exploratory/arty' coding. It doesn't need to be 100% safe or performant at every step, but we do need to be able to handle both extremes (performant, and productive) and alternate between them; we make big programs (100 klocs), but not huge ones (mlocs). Some of Rust's decisions are all about things that matter for mlocs but not klocs.
The 'exploratory' side was important enough to integrate scripting languages (Lua etc.) for, but having that inline, sharing type information, with no interfacing overhead (either performance or tooling/boilerplate) would be awesome.

Multiple types of code in gamedev:

  • Tools: both UI, and asset-conditioning pipelines
  • gameplay (scripting, 'high level' C++)
  • engine (low-level C/C++)

… but you might need to migrate between use cases, which is why we over-use C++ (as needed for the engine).
From what I've seen, Swift would already be superior for the first 2 cases… but Rust could be too, with some changes.

This is why I miss the sigils so much: they made the common smart-pointer types 'melt away' enough for old Rust to feel like a productive language.

Re: the 3rd case, one interesting thing Jonathan Blow says is that he doesn't even consider std::vector / Vec performant enough; he explains the need for raw pointers for 'joint allocations' and the 'blob' approach of loading precompiled levels. You could of course do a validation pass to verify things are OK, but you're then into a realm where 'runtime safety' has become less clear-cut.

I think a lot of the friction you are experiencing is because when people say "Rust is a replacement for C++", you feel like it should be a drop-in replacement and should be able to use exactly the same patterns (e.g. advanced templates, as has already been discussed). Instead, Rust is its own language with its own way of doing things and has its own opinions.

Regarding the full safety "hindrance": unless you have done extensive profiling and can definitively point to places where things like bounds checks are having a negative effect on your program, I don't think your argument holds much water. Improper indexing is a programming error, and as such I would much prefer my program to blow up at run time (yes, even in production) instead of silently accessing the next bit of memory, opening the door to things like buffer overflows. These kinds of safety guarantees are the reason people will switch to Rust from C/C++ in the first place, and they are integral to the language.

If you want syntactic sugar for defining maps it's not overly difficult to implement yourself with a macro.

macro_rules! hashmap {
    ($( $key:expr => $value:expr ),*) => {{
        use std::collections::HashMap;
        let mut map = HashMap::new();
        $( map.insert($key, $value); )*
        map
    }};
}

fn main() {
    let some_map = hashmap!("foo" => 1,
                            "bar" => 2,
                            "baz" => 3);

    println!("{:#?}", some_map);
}
If you are talking about stuff like using auto as the return type from a function, I believe the language team (and I agree with them) said that Rust would never support this because you should only ever need to look at the signature for a function to determine what it does.

Trait bounds also help ensure this, you can tell at a glance that fn for_each<I, T, F>(iter: I, predicate: F) where I: Iterator<Item=T>, F: FnMut(T) takes an iterator and a function which can be run on the items. I'd argue that this is a lot more explicit than C++ templates and prevents the massive error messages because it's easy for the compiler to say "you gave me an u32, but I expected an iterator".
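A compilable version of that signature (with a trivial body added here for illustration): the bounds alone tell you what the function needs, and a mismatched argument produces one short, local error rather than a template dump.

```rust
// Generic over any iterator and any FnMut closure over its items.
fn for_each<I, T, F>(iter: I, mut f: F)
where
    I: Iterator<Item = T>,
    F: FnMut(T),
{
    for item in iter {
        f(item);
    }
}

fn main() {
    let mut sum = 0u32;
    for_each(vec![1u32, 2, 3].into_iter(), |x| sum += x);
    println!("{}", sum); // 6
    // for_each(42u32, |x: u32| ());  // error: `u32` is not an iterator --
    // the mistake is reported at this call site, not deep inside the body.
}
```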

The existing inference system is more than capable of inferring function signatures for us, but then if it does that it means you'll need to sift through an entire function's source code to see what types it uses. Code tends to get read a lot more than it gets written, so Rust decides to trade short term developer convenience for readability, long term maintainability, and usability.

please please please stop right there.

In my historical use case, a comparison and branch instruction out of place was enough to cripple performance (because it prevented other optimisations) - routinely 10x, and as much as 50x in the most extreme cases.

Now, this might not be the case today when you run your program on a typical big CPU, but the point of something as universal as C and C++ is their ability to be used in every conceivable niche; and machines continue to change - not just getting 'bigger and faster', but more parallel, or smaller.

Note that we now have Intel trying to generalise their instruction set for general-purpose vectorisation (e.g. vgather).

I don't know what situations the future will bring, but if I'm going to replace C or C++, I must know that the replacement can handle the situations I've dealt with in the past.

there are situations where an ‘error message’ is unacceptable behaviour … and there are situations where conditional operations cost way more than you might think (because of branching / vectorization / deep pipelines … ).

If you use the philosophy "don't fix it unless it shows up in the profiler", you can end up with a program that's slow because everything is done like that, lol... no single part shows up.

You only ask for the operations you need; and in a complete/'finished' program, you don't need error checks.
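A sketch of 'only ask for the operations you need' in today's Rust (hypothetical helper names): the checked index is the default, and the unchecked one is an explicit unsafe opt-in for code whose invariants were established elsewhere, e.g. when the index buffer was validated at build time.

```rust
// Default path: a bounds check on every access.
fn sum_checked(xs: &[f32], idx: &[usize]) -> f32 {
    idx.iter().map(|&i| xs[i]).sum()
}

// Opt-in path: the caller must guarantee every index is in range;
// the hot loop then carries no per-element branch.
unsafe fn sum_unchecked(xs: &[f32], idx: &[usize]) -> f32 {
    idx.iter().map(|&i| *xs.get_unchecked(i)).sum()
}

fn main() {
    let xs = [1.0, 2.0, 3.0, 4.0];
    let idx = [0, 2, 3];
    let a = sum_checked(&xs, &idx);
    // The indices were written by us just above, so the precondition holds:
    let b = unsafe { sum_unchecked(&xs, &idx) };
    assert_eq!(a, b);
    println!("{}", a); // 8
}
```

This is the reverse of the C++ default: the safe form is the short spelling, and removing the check is what requires ceremony.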

(edited for less shouting lol)

"Improper indexing is a programming error, and as such I would much prefer my program to blow up at run time (yes, even in production)"

… then that's a debug build, not a release build.
You can certainly switch the naming:
'C++ extreme debug' = 'Rust debug'
'C++ light debug' = 'Rust release'
'C++ release' = an option yet to be added to Rust

When we did games on those consoles, we had soak tests and debugging, and we simply weren't allowed to release the product until there were no such failures. (People are looser today with downloads and PC platforms.) Safety was due to the platform being a walled garden; you wouldn't get content onto the machine without going through their channel.

“These kinds of safety guarantees are the reason people will switch to Rust from C/C++”


If you really want bounds-checked arrays, they're trivial to implement in C++ (and sure, we got things working by implementing that and much more besides: floats that would check themselves for being non-NaN, to track down problems).

People aren’t seriously considering throwing away their sourcebases and experience over something that simple.

We can also retrofit static analysers; we could even label the references if we really needed to (e.g. ref<T>), alongside the successful default assumptions Rust has discovered ('most are accessors where you assume the lifetime of the first parameter…').

Conversely, if you want to get rid of header files, or get a match expression in a template to infer its return type, that's really hard to get in C++.