Hello and some notes

Hello. A few days ago I started studying Rust, and I have been amazed by how many Rust design decisions I agree with; this is unusual for me. I am still a Rust newbie: so far I have read just the very nice "Rust by Example" (http://rustbyexample.com ), several blog posts, and some Reddit threads.

I usually program in D and Python, and I also like Haskell, ATS2 (www.ats-lang.org ), Whiley (whiley.org ), Julia (julialang.org ), and Ada, for different reasons. I have opened more than one thousand enhancement requests/bug reports for D, I maintain about one thousand RosettaCode (rosettacode.org ) solutions for D (many entries have multiple solutions), and I have helped design the D range algorithms (std.range - D Programming Language ). English isn't my native language, so if you spot any English mistakes feel free to point them out.

Below is a first batch of notes and small questions (most links here lack the https: prefix because, as a first post, this system doesn't let me write more than two links...).


  1. Do you know any Rust programmers/users in Italy? I'd like to meet some of them and organize a little meetup. (I haven't visited a Rust IRC channel yet.)

  2. So far (I am still at an early stage of my Rust study) the only Rust language design decision I have found that I don't like much regards the % operator, which behaves as in C99/D (the result takes the sign of the dividend):

    fn main() {
        let n1 = std::env::args()
                 .nth(1)
                 .map_or(Ok(10), |n| n.parse::<i64>())
                 .unwrap();

        let n2 = std::env::args()
                 .nth(2)
                 .map_or(Ok(20), |n| n.parse::<i64>())
                 .unwrap();

        println!("{}", n1 % n2);
    }

Rust outputs:

>test1 10 3
1
>test1 10 -3
1
>test1 -10 3
-1
>test1 -10 -3
-1

Similar operations on the Python Shell give:

>>> 10 % 3
1
>>> 10 % -3
-2
>>> -10 % 3
2
>>> -10 % -3
-1

I can understand the desire for C99 semantic compatibility, and regarding performance I know what result the CPU gives. But in most cases, when I'm using signed numbers, I need the result to take the sign of the divisor (floored modulo). In some languages the C99-style % operator is also bug-prone (I am not yet sure whether this is also true for Rust).

Example: if you want to simulate a simple 1D cellular automaton using a 1D Von Neumann neighborhood (that is, reading the current cell, the cell before, and the cell after, to compute the next generation of the automaton), with a 1D toroidal arrangement (wrap-around), you can access the three needed cells like this in Python 2:

V = [10, 20, 30, 40, 50]
N = len(V)
for i in xrange(N):
    print V[(i - 1) % N], V[i], V[(i + 1) % N]

That outputs:

50 10 20
10 20 30
20 30 40
30 40 50
40 50 10

(In Python you can even write just "print V[i - 1], V[i], V[(i + 1) % N]" because negative indexes wrap around from the end of the array.)

In D you can't use % directly here; you have to use something like this (same output):

enum mod = (in int n, in int m) pure nothrow @safe @nogc =>
    ((n % m) + m) % m;
void main() {
    import std.stdio;
    immutable V = [10, 20, 30, 40, 50];
    foreach (immutable int i; 0 .. V.length)
        writefln("%d %d %d", V[(i - 1).mod($)],
                             V[i],
                             V[(i + 1) % $]);
}

In Rust you can't use % directly here (also because the index 'i' is unsigned, so i - 1 underflows in the first iteration of the loop). One solution (same output):

fn main() {
    let v = [10, 20, 30, 40, 50];
    let n = v.len();
    for i in 0 .. n {
        println!("{} {} {}", v[if i == 0 { n - 1 } else { i - 1 }],
                             v[i],
                             v[(i + 1) % n]);
    }
}
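
A branch-free variant (just a sketch, there may be better ways):

fn main() {
    let v = [10, 20, 30, 40, 50];
    let n = v.len();
    for i in 0 .. n {
        // Adding n before subtracting 1 avoids the unsigned underflow.
        println!("{} {} {}", v[(i + n - 1) % n], v[i], v[(i + 1) % n]);
    }
}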

Do you have better ways to write that code in Rust?

One of the very few languages I like on this point is Ada, which has both a "mod" operator (result with the sign of the divisor, i.e. floored modulo) and a "rem" operator (result with the sign of the dividend, as in C99). This avoids problems with signed numbers and makes the performance implications clearly visible.

Perhaps a second operator for floored modulo could be added to Rust, or at least a small function in the Rust standard library (if not already present).
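
A minimal sketch of such a helper function (the name is made up; as far as I know nothing like this is in the standard library):

// Floored modulo: the result takes the sign of the divisor, as in Python.
fn modulo(n: i64, m: i64) -> i64 {
    ((n % m) + m) % m
}

fn main() {
    assert_eq!(modulo(10, 3), 1);
    assert_eq!(modulo(10, -3), -2);
    assert_eq!(modulo(-10, 3), 2);
    assert_eq!(modulo(-10, -3), -1);
}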


  3. This D code allocates a fixed-size array of i32 on the stack and then tries to fetch an item past the end of the array:

    void main() {
        immutable int[3] arr = [10, 20, 30];
        const size_t IDX = 3;
        immutable r = arr[IDX];
    }

The D compiler gives this compile-time error:

test.d(4): Error: array index 3 is out of bounds arr[0 .. 3]

A similar Rust program compiles without errors:

fn main() {
    let arr = [10, 20, 30];
    const IDX: usize = 3;
    println!("{}", arr[IDX]);
}

Something similar happens to slices with statically-known bounds.

Is this nice improvement planned for Rust, or is it already in some enhancement request, RFC, or something similar? (In a follow-up post I'll discuss a generalization of this idea.)


  4. This Rust program gives a "non-exhaustive patterns" error if I comment out the last impossible case:

    fn main() {
        let n: u32 = 10;
        let r = match n % 3 {
            0 => 10,
            1 => 20,
            2 => 30,
            //_ => unreachable!() // ?
        };
    }

Even this code gives a similar error:

fn main() {
    let x = 0u8;
    let y = match x {
        0 ... 255 => 1, // error?
    };
}

I don't expect the Rust compiler to infer exhaustiveness when there are match arms with an "if" guard, but in the common match{} situations where no guard is present I think the Rust compiler should be smarter and avoid asking for a useless "_ => unreachable!()" arm. Is this enhancement desirable, and is there already a request for it online?


  5. This D code defines two types of stack-allocated fixed-size arrays (with compile-time-known length and mutable values), the second with the same length as the first:

    void main() {
        alias Data1 = int[100];
        alias Data2 = uint[Data1.length];
    }

How do you write the same in Rust? This fails:

fn main() {
    type Data1 = [i32; 100];
    type Data2 = [u32; Data.len()];
}

test.rs:3:24: 3:28 error: unresolved name Data [E0425]
test.rs:3 type Data2 = [u32; Data.len()];
^~~~
test.rs:3:24: 3:28 help: run rustc --explain E0425 to see a detailed explanation
error: aborting due to previous error
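
One workaround (just a sketch) is to share a named constant between the two type aliases:

const N: usize = 100;
type Data1 = [i32; N];
type Data2 = [u32; N];

fn main() {
    let _a: Data1 = [0; N];
    let _b: Data2 = [0; N];
}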


  6. Why don't Rust associative data structures (like the hash map and the binary search tree) use the handy [] indexing syntax to set and get items?

I mean something like (D code):

void main() {
    int[int] aa; // Built-in D associative array.
    aa[1] = 10;
    assert(aa[1] == 10);
}

Instead of "aa.insert(1, 10);" to set an item, and similarly to get an item.

Finding an item in a well-designed hash map should be amortized O(1). D conventions allow the [] syntax if an operation is O(log(n)) or faster, so both hash maps and search trees qualify.
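
For reference, this is how the same usage currently looks with a Rust HashMap (as far as I have seen, the [] syntax works only for reading, and it panics on a missing key):

use std::collections::HashMap;

fn main() {
    let mut aa = HashMap::new();
    aa.insert(1, 10);             // There is no `aa[1] = 10;` syntax for insertion.
    assert_eq!(aa[&1], 10);       // Read-only indexing (panics if the key is missing).
    assert_eq!(aa.get(&2), None); // Non-panicking lookup.
}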


  7. This Rust program:

    #![feature(box_syntax)]
    fn main() {
        let x = box 10;
        println!("{}", x);
        println!("{:?}", x);
    }

Outputs:
10
10

But if I have understood what I have read, the difference between {} and {:?} is similar to the difference between str() and repr() in Python, so I expected an output more like:

10
box 10

Or perhaps:

10
box(10)

What do you think?


  8. Can you ask (compactly, in a few tokens) std::collections::HashMap to use a faster (not cryptographically secure) hash function (especially for strings)? Something like:

    let mut aa: HashMap<String, i32, Hash::Fast> = HashMap::new();

Using a cryptographically secure hash function by default is acceptable, but in some cases I need higher performance, and I've experimentally seen that D built-in associative arrays (and sometimes even Python dicts) are faster than a Rust HashMap at building a hash map of strings. (D associative arrays used to be safer against attacks because, despite using a cryptographically weak hash function, they kept a red-black tree in each bucket; later the trees were replaced by faster linked lists.)
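
As far as I have seen, something along these lines is possible with an external hasher; a sketch, assuming the third-party "fnv" crate and a std HashMap whose third type parameter is a BuildHasher:

// Sketch: swap the default SipHash for the faster, non-cryptographic FNV hash.
use std::collections::HashMap;
use std::hash::BuildHasherDefault;
use fnv::FnvHasher;

fn main() {
    let mut aa: HashMap<String, i32, BuildHasherDefault<FnvHasher>> = Default::default();
    aa.insert("hello".to_string(), 10);
    println!("{:?}", aa.get("hello"));
}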


  9. On this Rust program:

    fn main() {
        printnl!("{}", 1);
    }

The rustc compiler gives:

test.rs:2:5: 2:12 error: macro undefined: 'printnl!'
test.rs:2 printnl!("{}", 1);
^~~~~~~
error: aborting due to previous error

On a similar D program:

import std.stdio;
void main() {
    writenl(1);
}

The D compiler gives a more useful error message:

test.d(3): Error: undefined identifier 'writenl', did you mean template 'writeln(T...)(T args)'?

Similarly, on this Rust program:

fn somefunc() {}
fn main() {
    somefun();
}

The rustc compiler gives:

test.rs:3:5: 3:12 error: unresolved name somefun [E0425]
test.rs:3 somefun();
^~~~~~~
test.rs:3:5: 3:12 help: run rustc --explain E0425 to see a detailed explanation
error: aborting due to previous error

While on this similar D program:

void somefunc() {}
void main() {
    somefun();
}

The D compiler gives:

test.d(3): Error: undefined identifier 'somefun', did you mean function 'somefunc'?

The D compiler catches similar small mistakes (up to a Levenshtein distance of 2 or 3, per kind of identifier) for user-defined identifiers. Is this enhancement desirable, and is there already a request for it online?


  10. Do you know of a method/function like "each()" that consumes an iterator, a bit like .inspect(), to be appended at the end of chains of map/filter:

    fn main() {
        (0 .. 10)
            .map(|x| x * x)
            .inspect(|x| println!("{} ", x))
            .each();
    }

That code should behave roughly like this:

fn main() {
    for _ in (0 .. 10)
             .map(|x| x * x)
             .inspect(|x| println!("{} ", x)) {
    }
}

An alternative design (perhaps nicer looking):

fn main() {
    (0 .. 10)
    .map(|x| x * x)
    .for_each(|x| println!("{} ", x));
}

Scala sequences have "foreach":

But in one case in another language it was removed:
social.msdn.microsoft.com/Forums/en-US/758f7b98-e3ce-41e5-82a2-109f1df446c2/where-is-listtforeach
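
A sketch of how such a consuming adaptor could be written today as an extension trait (the trait and method names here are made up):

trait ForEachExt: Iterator + Sized {
    // Consume the iterator, applying the closure to every item.
    fn for_each_item<F: FnMut(Self::Item)>(self, mut f: F) {
        for item in self {
            f(item);
        }
    }
}

impl<I: Iterator> ForEachExt for I {}

fn main() {
    (0 .. 10)
        .map(|x| x * x)
        .for_each_item(|x| println!("{} ", x));
}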


  11. I have done a small experiment regarding RVO (return value optimization; see "Copy elision" on Wikipedia); this is the Rust code:

    const N: usize = 1000;
    type Data = [i32; N];

    #[inline(never)]
    fn pippo() -> Data {
        let mut data: Data = [0; N]; // Initialize.
        for i in 0 .. data.len() {
            data[i] = i as i32; // Initialize again.
        }
        return data;
    }

    fn main() {
        let data2 = pippo();
        println!("{}", data2[0]);
    }

Asm, compiled in release mode (rustc 1.6.0-nightly (8ca0acc25 2015-10-28)):

_ZN5pippo20hbf0a92f9b1becfacmaaE:
    pushq   %r14
    pushq   %rbx
    subq    $4008, %rsp
    movq    %rdi, %r14
    leaq    8(%rsp), %rdi
    xorl    %ebx, %ebx
    xorl    %esi, %esi
    movl    $4000, %edx
    callq   memset@PLT
    movdqa  .LCPI0_0(%rip), %xmm0
    movdqa  .LCPI0_1(%rip), %xmm1
    .align  16, 0x90

.LBB0_1:
    movd    %ebx, %xmm2
    pshufd  $0, %xmm2, %xmm2
    movdqa  %xmm2, %xmm3
    paddd   %xmm0, %xmm3
    paddd   %xmm1, %xmm2
    movdqu  %xmm3, 8(%rsp,%rbx,4)
    movdqu  %xmm2, 24(%rsp,%rbx,4)
    addq    $8, %rbx
    cmpq    $1000, %rbx
    jne .LBB0_1

    leaq    8(%rsp), %rsi
    movl    $4000, %edx
    movq    %r14, %rdi
    callq   memcpy@PLT
    addq    $4008, %rsp
    popq    %rbx
    popq    %r14
    retq

The same in D with ldc2 compiler, release noboundscheck mode:

alias Data = int[1000];

Data pippo() {
    Data data;
    foreach (immutable i; 0 .. data.length)
        data[i] = i;
    return data;
}

void main() {
    import std.stdio;
    immutable data2 = pippo();
    data2[0].writeln;
}

Asm (LDC2, 0.15.2-beta2, based on DMD v2.066.1 and LLVM 3.6.1):

__D6test315pippoFZG1000i:
    pushl   %esi
    subl    $12, %esp
    movl    %eax, %esi
    movl    %esi, (%esp)
    movl    $4000, 8(%esp)
    movl    $0, 4(%esp)
    calll   _memset
    xorl    %eax, %eax
    movdqa  LCPI0_0, %xmm0
    movdqa  LCPI0_1, %xmm1
    .align  16, 0x90

LBB0_1:
    movd    %eax, %xmm2
    pshufd  $0, %xmm2, %xmm2
    movdqa  %xmm2, %xmm3
    paddd   %xmm0, %xmm3
    paddd   %xmm1, %xmm2
    movdqu  %xmm3, (%esi,%eax,4)
    movdqu  %xmm2, 16(%esi,%eax,4)
    addl    $8, %eax
    cmpl    $1000, %eax
    jne LBB0_1
    
    addl    $12, %esp
    popl    %esi
    retl

I think the "callq memcpy@PLT" near the end of the Rust asm shows that the RVO isn't happening.

I have seen this issue, perhaps it's the same problem?
github.com/rust-lang/rfcs/issues/788


  12. As a more general comment, I think Rust should try to improve on several things:
    a) Strive to reduce the amount of unsafe{} code needed in most programs, by adding new compiler checks that run on unsafe{} code to make it less unsafe, improving the type system, introducing standard library features that safely wrap some unsafety, and so on;
    b) Add things to the standard library (and language, if necessary) that make Rust more handy, quick, and natural to use, almost like a scripting language, for when you want to write that kind of code, usually in smaller programs (this is also called "scaling down" in the Scala community);
    c) Add things that allow the programmer to specify types and behaviours more precisely, as in Ada. This is at odds with the preceding desire, so the use of such features should be optional. They get used when the code needs to be fully specified and as bug-free as possible. Even this kind of code should be sufficiently succinct (unlike Ada);
    d) The Rust design should also take a look at the languages used for high-performance numerical computing: perhaps only a few things need to be added or modified to improve this usage of Rust;
    e) Improve the type system (e.g. integer and enum values used as generic arguments, higher-order generics, CTFE) to reduce code repetition and remove some uses of Rust macros, without introducing too much unsafety, while keeping the language small enough for normal programmers to learn and still focused on practicality and safety.

(I am not suggesting turning Rust into a scripting language, a high-performance numerical language, or a high-integrity language. I am saying that often a small number of additions can improve those use cases significantly without changing the overall language much, like fulfilling 70% of those needs with 10% of added features.)


Thank you,
later,
leonardo
http://www.fantascienza.net/leonardo/

Hi, thank you for writing this down. It's always interesting to see new perspectives and how people with different knowledge backgrounds see the language. I'm not one of the language developers, myself, but I have been using it and following its growth for some time, and I would like to give some answers/comments/something for some of your points. Something of a "quite-long-time-user" (if that's even possible with Rust) perspective. :smile: It may, at least, point you to some further reading.

  1. Sorry, no idea.

  2. Rust uses LLVM as its compiler backend, so I suspect that the way % works depends on how LLVM does it.

  3. This would, indeed, be a nice feature, even if it may be seen as a bit of a corner case. The tricky part is that Rust uses the Index and IndexMut traits for the indexing operators, and they don't provide any sort of static size hint. I would guess that the stabilization of associated constants, together with type level integers, may make it possible to add optional size bounds to the traits. Hopefully.

  4. This is also a bit of a corner case, and I can't remember ever doing this in a way that could be statically determined. I do, however, agree that it would be nice if the compiler was somehow aware of the min/max bounds of integer types. That would make some sense, since it already knows that checking unsigned integers for positivity is unnecessary.

  5. This is also a case of "lack of type level integers and/or associated constants", as I understand it. Having those would probably make this possible.

  6. I guess you are talking about HashMap and friends. They (or at least HashMap) implement Index for immutable access, but not for mutation or insertion. I don't know about mutation, but insertion is a bit trickier. The interface for IndexMut requires that a reference to a value is returned, meaning that some value has to exist or be created when index_mut is called. Creating a dummy value would be one idea, but what should it be initialized to? Another would be to add an IndexInsert trait with an index_insert(&mut self, Index, Item) method (a sketch is below), but I suspect that it may conflict with IndexMut somehow. I don't know. The arrival of placement new may open some new doors.
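
A rough sketch of that hypothetical trait (the name and signature are only an illustration; nothing like this exists in std):

    // Hypothetical: an indexing-flavoured insertion trait for maps.
    trait IndexInsert<Idx, Item> {
        fn index_insert(&mut self, index: Idx, item: Item);
    }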

  7. I'm not sure. The type of something is usually known and the debug print is more like a non-pretty show-me-what-you-got thing. The content is what's important, as I see it.

  8. Yes, soon. It exists behind a feature gate.

  9. Rust does this to some extent. I have seen it happen with methods and properties in structs.

  10. This has been discussed before, but opinions seem to be divided and it hasn't made its way into the standard library yet. Here is the most recent RFC mention I could find, and I wouldn't be surprised if some library implements it. Anyway, what you can do is put the last operation (printing, in this case) inside the for loop, but I'm aware that it's not always practical.

  11. I can't really comment on this, but LLVM should do some RVO in some cases, as far as I know.

  12. a) The use of unsafe is already very low, but it's always nice if it can be even lower. Rust is still new and I'm sure more things like this will pop up in time. Are you thinking of anything specific with "improve its type system"?

b) The standard library is supposed to be more "bare essentials", but an additional library with more high-level stuff would certainly be nice.

c), d) and e) The type system is already very powerful, and traits, generics (templates) and associated types (also a kind of generics) allow a lot of complex behaviours. That doesn't mean it's perfect, and something I sometimes miss is custom integer bounds. Something like `Uint<0, 100>`, but that requires type level integers, once again, or some [creative use of the type system](https://users.rust-lang.org/t/typenum-has-hit-1-0-0/3332?u=ogeon) and a shoehorn. CTFE is also kind of in progress. See [#322](https://github.com/rust-lang/rfcs/issues/322) and [#911](https://github.com/rust-lang/rfcs/pull/911) for related info.

I hope this will make some things more clear and give you some interesting things to read. :smile:


itertools


Oh, yes of course. :stuck_out_tongue_winking_eye:

  2. Rust uses LLVM as its compiler backend, so I suspect that the way % works depends on how LLVM does it.

I think it's not a matter of LLVM, but a matter of what the CPU gives, and how the language operators are designed.

(3) even if it may be seen as a bit of a corner case.

Yes, for Rust I'd like something more general. I'll discuss it in a separate thread after I've studied more of the Rust manuals.

  4. This is also a bit of a corner case

This is a sufficiently common case: C code is full of switches of exactly this kind.

I do, however, agree that it would be nice if the compiler was somehow aware of the min/max bounds of integer types. That would make some sense, since it knows that checking unsigned integers for positivity is unnecessary.

This is another topic I'll better discuss in a separate thread.

  7. I'm not sure. The type of something is usually known and the debug print is more like a non-pretty show-me-what-you-got thing. The content is what's important, as I see it.

If you have two different ways to print something, it's handy for them to give two different outputs, making them better suited to different situations.

I agree that in Rust you usually know the type (unlike in Python), but to me "show-me-what-you-got" means showing all the information, and being in a box is an important piece of information to show when you use {:?}.

In Python the str() function (and the corresponding __str__ standard method) gives a nice visualization of the data, while repr() (__repr__) shows all the information, so you are often able to take the string output of repr(), copy it into Python source code, and build the same data structure that repr() has printed. A simple example of the difference, with rationals:

>>> from rational import Rational
>>> str(Rational(10,3))
'10/3'
>>> repr(Rational(10,3))
'Rational(10, 3)'

If you paste "10/3" in Python code you don't get back a rational. But repr() returns something that contains the full information, and you can paste it in Python code.

When you print or you use str() on a Python collection, the items inside the collection get converted using __repr__:

>>> str([Rational(10,3), Rational(1,3)])
'[Rational(10, 3), Rational(1, 3)]'

  9. Rust does this to some extent. I have seen it happen with methods and properties in structs.

There's some room for improvement :slight_smile:

(12 a) Are you thinking of anything specific with "improve its type system"?

Trying to make unsafe{} code fully safe is hopeless, and probably a waste of work (unless you're willing to go the ATS2 way, and that's a big jump). But there are many ways to bite off some chunks of that unsafety.

Example: Clang offers various sanitizers to catch some bugs in (unsafe) C++ code. Running Clang-like sanitizers on unsafe{} Rust code in debug mode is an option that could require no changes to the Rust language, but perhaps Rust could design and offer some hooks that help, simplify, and make more transparent the job of similar sanitizers on unsafe{} Rust code. In that case such hooks and sanitizers essentially become part of the Rust type system.

(12 b) The standard library is supposed to be more "bare essentials",

I like the Python "batteries included" standard library. But this is a Rust standard library design choice that I am not entitled to question.

(12 b) but an additional library with more high-level stuff would certainly be nice.

I have some doubts about this: when you want to write script-like code you want to go fast, which also means using only language/standard-library features (unless you need a plotting library, a numerical library, and so on).

and something that I can miss sometimes is custom integer bounds. Something like Uint<0, 100>, but that requires type level integers,

I think type level integers aren't enough to implement ranged integers that are good enough (as in Ada 2012 + SPARK 2014). I'll discuss this in the thread spun off from point (3) above.

CTFE is also kind of in process. See [...] #911 (https://github.com/rust-lang/rfcs/pull/911 ) for related info.

Perhaps a tag like "const" is unnecessary to run a function in constant contexts. D doesn't use any tagging: you can call a function from both static and dynamic contexts (and you have a __ctfe boolean that's true when a function is running at compile time, used with "static if" and other constructs when you want to specialize parts of the code for run time and compile time).

uint fib(uint n) {
    uint a = 1;
    uint b = 1;
    foreach (immutable i; 0 .. n - 1) {
        immutable aux = a;
        a = b;
        b = aux + b;
    }
    return b;
}
uint[fib(7)] arr;
pragma(msg, arr.length); // Compile-time printing.
void main() {}
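
For comparison, a rough sketch of what the tagged approach could look like on the Rust side, assuming a hypothetical future Rust where loops and mutation are allowed inside a "const fn" (the direction of RFC 911; this does not compile today):

const fn fib(n: u32) -> u32 {
    // Hypothetical: loops and mutable locals inside a const fn.
    let mut a = 1u32;
    let mut b = 1u32;
    let mut i = 1u32;
    while i < n {
        let aux = a;
        a = b;
        b = aux + b;
        i += 1;
    }
    b
}

const LEN: usize = fib(7) as usize;
static ARR: [u32; LEN] = [0; LEN]; // LEN == 21, computed at compile time.

fn main() {
    println!("{}", ARR.len());
}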

I hope this will make some things more clear and give you some interesting things to read. :smile:

Thank you very much for the answers :slight_smile:

It's of course different from case to case, but how many of those cases could use an enum instead of an integer? Rust is very enum heavy, since they are so useful, and an enum can be exhaustively matched.

Being in a Box or not is usually known, since you put it there or required it to be there. I can't really argue for why it's invisible in debug prints, but I guess it's due to its pointer-like transparent behaviour. They are like ghosts. The majority of types are printed more verbosely, I think.

Oh, yes!

Sounds a lot like what lints do in rustc. It's possible to write custom lints, but the interface is unstable and requires the nightly version. You may be interested in clippy, which has a substantial number of extra lints.

Using a compiled language is already a big speed bump :wink: Using additional libraries is at least quite trivial when Cargo is used.

Probably, but I think you can get relatively close.

Maybe, but wouldn't the const-compatibility then depend on the definition of the function? Or does D support the full language in constant contexts? It feels a bit risky, in the same way that inferred return types are.

One example of something integer parameters cannot necessarily get you: widening the range with no bounds checking. For example, going from Uint<5, 10> to Uint<0, 100> should be a no-op at runtime.

Since comparing the destination range with the source range numerically is necessary, doing that kind of type right requires specialization and CTFE.

That's true. It was more a sort of hypothetical example and type level integers was just something I knew, there and then, that would be missing for that particular example to even remotely work. :smile:

an enum can be exhaustively matched.

(Perhaps I am missing your point here, because of English not being my native language).
Integer values too can be exhaustively matched by a well-implemented match{} feature.

Being in a Box or not is usually known, since you put it there or required it to be there.

I understand. In statically typed languages like Rust the types are statically known, and often you can read the source code (or use the IDE) to know what type you are printing.

But think about a Rust interactive shell, or about adding many debug prints that generate a very long debug log. Printing a bit more information about the type of the data can be useful (and when you don't want it there's the basic pretty printing).

but I guess it's due to its pointer-like transparent behaviour. They are like ghosts.

In a language like Haskell it's OK to ignore pointers (self-managed indirection) and show only the value. But in a systems language like C/C++/D/Rust the indirection is sometimes important. That's also why Rust makes referencing data through a pointer visible in the code (unlike D, where class references are used transparently).

Sounds a lot like what lints does in rustc.

Some things are better kept inside the built-in type system, while for practical reasons some other things are fine to move outside, into lints.

In my opinion the type system features that catch hard errors in unsafe{} code should be built into the main compiler.

You may be interested in clippy, which has a substantial amount of extra lints.

Clippy looks nice and interesting. I have read what Clippy does, and I think it's OK to keep most of the things Clippy looks for in an external lint. But I think a few of them would be better moved into the compiler as compilation errors:

bad_bit_mask:

D makes this a hard error:

void main() {
    int x;
    auto r0 = x & 2 == 3;
    auto r1 = x & 2 < 3;
    auto r2 = x & 1 > 1;
    auto r3 = x | 1 == 0;
    auto r4 = x | 1 < 1;
    auto r5 = x | 1 > 0;
}

temp.d(3,19): Error: 2 == 3 must be parenthesized when next to operator &
temp.d(4,19): Error: 2 < 3 must be parenthesized when next to operator &
temp.d(5,19): Error: 1 > 1 must be parenthesized when next to operator &
temp.d(6,19): Error: 1 == 0 must be parenthesized when next to operator |
temp.d(7,19): Error: 1 < 1 must be parenthesized when next to operator |
temp.d(8,19): Error: 1 > 0 must be parenthesized when next to operator |

no_effect:

Currently D makes this a hard error for expressions with no effect, and a warning for discarding the result of a call to a pure nothrow function:

int foo() pure nothrow {
    return 5;
}
void main() {
    foo(); // Line 5
    int x;
    x + 5; // Line 7
}



temp.d(5,8): Warning: calling temp.foo without side effects discards return value of type int, prepend a cast(void) if intentional
temp.d(7,5): Error: + has no effect in expression (x + 5)

shadow_reuse:

This one sounds bad; I will need to learn more Rust to understand why you think this is acceptable in Rust (you can also explain it to me here, if you want). D of course disallows this:

void main() {
    int x = 1;
    immutable x = x + 1;
}


temp.d(3,5): Error: declaration temp.main.x is already defined

This one also seems fit to become a compiler warning (similar to the warnings about unnecessarily mutable local variables):

unnecessary_mut_passed:

Indeed, this currently gives no warnings:

fn foo(x: &i32) -> &i32 {
    x
}
fn main() {
    let mut y = 10i32;
    println!("{}", foo(&mut y));
}

Using a compiled language is already a big speed bump :wink:

I understand...

In this part of the discussion I am probably spoiled by the D compiler. Most of the time my script-like D programs compile very quickly, the D standard library (Phobos) contains lots of handy stuff, and the D language can be sufficiently succinct.

As one example, I compile this D entry in 0.37 seconds with dmd (using -o-):
http://rosettacode.org/wiki/Huffman_coding#D

That little program is also sufficiently efficient at run time: it can generate a huge Huffman tree in a second or so. No scripting language on that page generates huge trees nearly as quickly as that very short D program.

Using additional libraries is at least quite trivial when Cargo is used.

OK.

I think type level integers aren't enough to implement good enough (like in Ada 2012 + SPARK 2014) ranged integrals.

Probably, but I think you can get relatively close.

In good ranged integers the compiler has to spot various usage errors at compile time, and it has to remove many run-time range tests where it knows the bounds can't be violated at run time. For this, type level integers aren't enough; you need a way to add some compile-time tests to the compiler. Ada 2012 has constructs that allow the user to define some of those compile-time tests. Rust has to get such things right (unless you want to use yet another compiler plug-in). I will discuss this topic in a separate post.

Perhaps a tag like "const" is unnecessary to run a function from constants contexts.

Maybe, but wouldn't the const-compatibility then be dependent on the definition of the function? Or does does the D support the full language in constants context? It feels a bit risky,

I see. I guess on such things D is a bit more careless compared to Rust...

D allows almost everything to run at compile time, including exceptions, dynamic memory allocation, and so on. But not everything: for example, you can't run SIMD instructions at compile time. So sometimes you write D functions like this:

int myNiceFunction(T1 arg, T2 arg2) {
    // Some code here
    /*static*/ if (__ctfe) { // Not a "static if"!
        // Don't use SIMD to compute the result.
    } else {
        // Uses fast SIMD to compute the result.
    }
    // Some more code here    
}

(Later the dead-code-elimination pass of the compiler will remove the dead branch of the if() when you call that function at run time. This is a bit of a hack and I don't like it much, but it helps keep the D compiler developers sane.)

This function can run at both run time and compile time, so putting a "const" tag on it isn't very useful.
But I understand your concerns (C++ also uses "constexpr" to mark such functions, so Rust isn't the only language willing to tag them), so this is not a Rust enhancement request.

in the same way that inferred return types are.

Are inferred return types coming to Rust?

Inferred return types are very useful in D when you return complex types generated by chains of ranges (map, filter, etc.): the more of those you add, the more complex the resulting type becomes, and past a certain point you can't write down the type of the result.

In a program like this you can't write down the type of the result of the foo() function; it's too complex:

import std.stdio, std.algorithm, std.range;

auto foo() {
    return 100
           .iota
           .map!(x => x ^^ 2)
           .filter!(x => x > 20)
           .take(10);
}

void main() {
    foo.writeln;    
}


Output: [25, 36, 49, 64, 81, 100, 121, 144, 169, 196]

At best you can add a compile-time constraint on the output type, inside the function's post-condition, to make sure foo() returns a range of ints:

import std.stdio, std.algorithm, std.range, std.traits;

auto foo()
out(result) {
    alias R = Unqual!(typeof(result));
    static assert (isInputRange!R && is(ForeachType!R == int));
}
body {
    return 100
           .iota
           .map!(x => x ^^ 2)
           .filter!(x => x > 20)
           .take(10);
}

void main() {
    foo.writeln;
}

English is not my main language, either, so I may also be misunderstanding. Anyway, I did not mean that you were wrong. Not at all. I just tried to explain why I thought that it could be a less common case in Rust than in C. I admit that it was not a good explanation at all, so let me try again.

What I meant was that it's common to use constants to enumerate things like directions, days, months, modes, etc., according to my far from great knowledge. This is far from as common in Rust, where an enum type would ideally be used instead, and the problem with non-exhaustive matches would, at the same time, shrink. That doesn't invalidate your proposal. It's just the reason why I mentioned that it could be a bit of a corner case, but I do not have any statistics to back it.

Oh, I see. I just thought about its current use :smile:

You could say that Rust does both. They are visible in code and for type checking, but very transparent when they are dereferenced.

What I meant was more that you will need to do some project setup if you want to use more than what's preinstalled. It was also a bit of bad wording on my part. Rust may compile relatively slowly, but it's fast enough for small, script-sized projects. I just remembered that cargo-script exists, so my point may be less valid than I thought.

As I mentioned above, I didn't mean that it was the only requirement.

The way I understood it is that Rust will limit them to only a constant compatible subset of the language. I think I read something about safety concerns regarding dynamic memory, as well.

No. That would be problematic, since a change to the function body could accidentally break code. There is, however, work being done to add "anonymous" or "abstract" return types. Something like fn make_me_a_closure() -> impl Fn(usize) -> bool.

I understand and I agree. Still, I think purely numerical (or purely char) cases are sufficiently common in low-level code that improving match{} on them, so it doesn't require a useless catch-all arm, is a no-brainer: it helps avoid some bugs (I am sure the catch-all arm sometimes catches more numbers/chars than you want it to!), it's easy for the compiler to enforce, it makes the code tidier and shorter, and so on.

As I said in my complete post, I don't disagree at all. :smile:

Hi there. I'm a former D programmer, so I thought I'd just chip in a little. Just to be clear, I don't work directly on Rust, so what follows is merely the words of an interested user who has been here a little while.

Quite a few of your questions are simply "hasn't happened yet", so I won't touch on those. Rust 1.0 was really just a starting point for backward compatibility; it doesn't in any way mean the language is finished or even close to it.

That said, % annoys me, too. :stuck_out_tongue:

One thing Rust does not seem very interested in is being "scripty". Rust cares a lot about making code maintainable and readable by default, even if it makes code harder or slower to write. That's not to say Rust never gets syntax sugar (see if let and while let), but it's definitely a secondary concern.

To put it another way: Rust is not a language you pick when you want to write code quickly. It's a language you pick when you want to come back six months later and immediately pick the codebase back up and understand it all. Features that make the code harder to read would likely be frowned upon.

I have old D codebases where I was "clever" at the time, and now don't quite remember how it all works. I have a project with three magic serialisation libraries involved, which basically embodies the idea of "spooky action at a distance". Very easy to add more code to, but very hard to understand just what the hell is going on and why.

CTFE and return type inference also suffer because of this. CTFE because it necessarily and invisibly depends on implementation details to determine whether or not your codebase compiles; return type inference because it means you can no longer tell what the return type of a function is without looking at the source and running the type system in your head.

Personally, I don't want Rust to turn into D. D wants to be a halfway house between C++ and Python. Rust has the potential to be a better C: the language you turn to when you need to write low level, high performance code, where what's really going on matters.

Rust also doesn't (so far as I've seen) care much about being a jack-of-all-trades language. To pick a random idea: I wouldn't say that there's no chance of Rust ever getting a matrix-multiply operator like Python recently did, but it's not a memory or type safety issue, so I wouldn't expect it to be a high priority.

All that having been said, Rust is definitely not ignorant of its neighbors. Heck, one of the experimental branches for anonymised return types was called "calendar-driven-development" after the D calendar example; Rust's version of the same program is pretty hideous without return type inference (made worse by closure types being unnameable).

Something as simple as 42 takes several seconds to compile and run on my machine... the first time (anything with dependencies is whole orders of magnitude worse). After that, it's pretty much instant (cargo-script caches the executables it builds).


Hi, there. I'm a former D programmer,

I remember your name well :slight_smile:

Quite a few of your questions are simply "hasn't happened yet", so I won't touch on those. Rust 1.0 was really just a starting point for backward compatibility; it doesn't in any way mean the language is finished or even close to it.

I am aware of this, and I respect it. I ask these things for various reasons: to learn how Rust works, how the Rust community thinks, whether those ideas are Rustacean enough, whether there's an open RFC on them or whether it's a good idea to open one, how much interest there is in those topics, and so on. So even those answers help me.

One thing Rust does not seem very interested in is being "scripty".
Rust also doesn't (so far as I've seen) care much about being a jack-of-all-trades language.

Being a jack-of-all-trades is a matter of degree, not a binary thing. I don't yet know how much of a generalist Rust wants to be. I don't expect Rust to become script-like or numerical-computing-oriented, but making those kinds of usage "acceptable" in Rust could be a good idea. It's a situation where small improvements can give a lot of return on investment. But the details are better discussed in more specific threads.

Rust cares a lot about making code maintainable and readable by default, even if it makes code harder or slower to write. That's not to say Rust never gets syntax sugar (see if let and while let), but it's definitely a secondary concern.

Having maintainable and readable code that contains a minimal number of bugs is one of my reasons for learning Rust.

Rust has the potential to be a better C: the language you turn to when you need to write low level, high performance code, where what's really going on matters.

In my opinion Rust is clearly trying to enter the C++ field too. Example: you write a browser in C++ or Rust, but probably not in C. And Rust allows you to write high-level code compared to C, when you use iterator chains like D ranges.

All that having been said, Rust is definitely not ignorant of its neighbors. Heck, one of the experimental branches for anonymised return types was called "calendar-driven-development" after the D calendar example; Rust's version of the same program is pretty hideous without return type inference (made worse by closure type being unnameable).

I see. I'll search for that experimental branch. In most cases I don't use the auto return type in D, because an explicit return type is clearer for the person who reads the code later. So I use an auto return type only when writing down a very complex type doesn't improve readability (sometimes a huge type isn't better for the reader than just "auto"), in some golf-like demo code (like some of the code on RosettaCode), in throwaway code, or when I can't spell out the type (Voldemort types and similar cases). So I agree with the Rust desire to have explicit top-level function signatures; I write down top-level signatures even in Haskell. So far I agree with nearly all Rust design decisions.

Thank you for your comments :slight_smile:

Oh shit I've been found out! *flees*

Not to respond to anything in particular, but I just wanted to clarify that I'm not at all saying Rust can't have scripty or numerical features. It's just that one thing I noticed about Rust is that the core devs develop the language with a cautious, measured approach that feels fairly unusual to me. Especially after D [1].

I know my first thought after I got to grips with Rust was "gee, it'd be great if Rust had $FAVOURITE_FEATURE_X!". It's a major reason why I'm so glad the core devs are as cautious and measured as they are. I mean, sure, I want generic value parameters and type reflection and CTFE and variadic templates and anonymised return types... but I'm very happy there are people in charge who will oppose those ideas purely from a "this has to prove its worth the tradeoffs" stance.

I'd say that Rust is trying to enter some areas that C++ is used for, yes. But then again, C++ is sort of the Perl of the compiled world: it's not so much designed as accreted. It also gets used for a lot of things, in some part due simply to inertia and lack of options.

Also, years of D taught me to be very leery of the phrase "a better C++"; these days, that usually seems to boil down to "we stole a slightly different set of features from other languages and haphazardly mashed them together lol".

I can save you some trouble: eddyb's calendar-driven-development branch and the thread that started it.

Just to quickly summarise, since it seems like you might be interested: one of the possible solutions to this problem (and the one in that branch, more or less) is to allow this:

fn evens() -> impl Iterator<Item=u64> {
    (0..).filter(|e| e & 1 == 0)
}
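
The caller then sees only the promised interface; a usage sketch (assuming the syntax above lands as described):

fn main() {
    // `evens()` can only be used as an Iterator<Item=u64>; its concrete type stays hidden.
    for e in evens().take(5) {
        println!("{}", e); // 0 2 4 6 8
    }
}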

Which is to say: the actual return type isn't specified, but you do specify what you're allowed to assume about it. In this case, the return type (whatever it is) will implement Iterator<Item=u64>, and that's the only interface you're allowed to use.

This would allow you to return instances of "Voldemort" types whilst still specifying what you can and can't do with them. Most of the benefits of auto return type inference, without the interface documentation issues!

Again, this whole approach to the evolution of the language is one of the big reasons I'm a big fan of Rust. :slight_smile:


[1]: I was, in a very small way, to blame for tuples in D. I was trying to write a library for exposing functions to Python, and had to make a script to generate pages of templates to extract function argument types. Walter saw and said "you know, I could make tuples a thing" and that was pretty much that.

It was quite a while later, when I realised you could literally "unparse" a type tuple to reconstruct the original names of function arguments, and even their storage classes (which aren't even part of the damn type system), that I began to think that such a cavalier approach to language design was maybe not such a great idea. :stuck_out_tongue:


"this has to prove its worth the tradeoffs" stance.

The GHC compiler and Haskell handle this problem by keeping experimental features disabled by default for a long time, usable only with an annotation or a compiler switch. Rust seems willing to do the same, and I think it's a good idea.

fn evens() -> impl Iterator<Item=u64> {
Most of the benefits of auto return type inference, without the interface documentation issues!

Nice, it's an idea similar to the D code I've written above in this thread:

out(result) {
alias R = Unqual!(typeof(result));
static assert (isInputRange!R && is(ForeachType!R == int));
}

Regarding 9, this was pointed out a long time ago but it just hasn't been fixed yet.