Actually, the performance would be greater with my proposal.
We should still have enumerables for the immutable case of algorithms that fit the enumerable model, since, as I already explained, they offer the collection implementer more information to optimize with.
You do not understand what I proposed. I proposed that we expose traits for the algorithms programmers need that would normally require iterators because they can't be modeled with an enumerable.
These traits could be implemented by experts using iterators, or in some cases implemented natively by the collection implementer, so we get the best of both worlds, and greater performance all around.
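To make this concrete, here is a minimal Rust sketch of what I mean. The `ContainsItem` trait and its method are illustrative names of mine, not an existing API: one collection satisfies it with the iterator-based linear scan, and another satisfies it natively.

```rust
use std::collections::HashSet;
use std::hash::Hash;

// Hypothetical trait: the algorithm becomes a single call site, and
// each collection chooses how to implement it.
trait ContainsItem<T> {
    fn contains_item(&self, needle: &T) -> bool;
}

// Iterator-backed implementation: the worst case is the same linear
// scan a hand-rolled iterator loop would do.
impl<T: PartialEq> ContainsItem<T> for Vec<T> {
    fn contains_item(&self, needle: &T) -> bool {
        self.iter().any(|item| item == needle)
    }
}

// Native implementation: the collection implementer can beat iteration
// when the data structure allows it (an O(1) hash lookup here).
impl<T: Eq + Hash> ContainsItem<T> for HashSet<T> {
    fn contains_item(&self, needle: &T) -> bool {
        self.contains(needle)
    }
}
```

Callers just write `collection.contains_item(&x)` and automatically get whichever implementation is best for that collection.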
No need for benchmarks, since the worst-case implementation in my proposal is still iterators. So we are sure to get at least the same performance, and in some cases better performance, when there is a native implementation or when the algorithm fits the enumerable model.
The groupthink is extrapolating that correct sentence into arguing that the guy who doesn't want imperative spaghetti has nothing important to say and that if he doesn't show code and a benchmark, then he isn't worthy.
And also extrapolating that correct sentence into a myopia that blinds us to "win-win" designs that weren't considered because of the obsessive focus on that correct sentence.
You're making claims about performance in the abstract, and I simply don't understand them. That's all. It's hard for me to grok your proposal without some real code demonstrating its usefulness and benchmarks showing its performance characteristics.
My apologies if I implied that, as it certainly was not my intent. It seems reasonable to me to request benchmarks for claims about performance. It's certainly also reasonable to respond with, "not yet." And it also seems reasonable for me to then respond, "OK, I won't be able to understand it until then."
I realized through our group discussion that an enumerable trait is preferred when the consuming side of the algorithm (i.e. the callback function that is iterated) does not need an iterator (or other) model, because, as Oleg explained, some collection enumerations can potentially be better optimized with the enumerable model, e.g. a remote database with latency overhead. Also, afaics the enumerable model can encapsulate resource lifetimes entirely when the callback is a pure function, without any need to expose low-level lifetime checking in user-land code.
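For illustration, a minimal sketch of what I mean by the enumerable model, i.e. internal iteration where the collection drives the loop (the `Enumerable` trait and its shape are assumptions of mine, not an existing API):

```rust
// Internal iteration: the collection drives the loop, so it can batch,
// prefetch, or parallelize without exposing per-element lifetimes to
// the caller.
trait Enumerable {
    type Item;
    fn each<F: FnMut(&Self::Item)>(&self, f: F);
}

impl<T> Enumerable for Vec<T> {
    type Item = T;
    fn each<F: FnMut(&T)>(&self, mut f: F) {
        // A remote collection could fetch elements in large batches
        // here, amortizing the latency invisibly to the callback.
        for item in self {
            f(item);
        }
    }
}
```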
But @keean is correct that certain algorithms can't be achieved with an enumerable model, so we also need iterators. But iterators expose imperative-spaghetti resource-lifetime gridlock tsuris to the programmer. So I offered the "win-win" suggestion that we promote the concept of creating traits for the popular algorithms we would normally implement with an iterator. These would simply be function calls, the same as the functions we would implement with iterators. These trait functions can then be implemented either with iterators, or in some cases the collection implementers might optimize and provide implementations that don't use iterators because they are faster without them. This provides more flexibility, and obviously it can't be any lower performance than iterators alone. There is nothing I've stated which is abstract.
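A hedged sketch of the pattern in Rust, with illustrative names of my own: the trait ships a default implementation written against iterators, so the floor is exactly iterator performance, and a collection overrides it only when it can do strictly better:

```rust
trait Summable {
    fn iter_values(&self) -> Box<dyn Iterator<Item = i64> + '_>;

    // Default implementation: plain iterator traversal, so any
    // implementer gets at least iterator performance for free.
    fn sum_values(&self) -> i64 {
        self.iter_values().sum()
    }
}

// A collection that maintains a running total can skip iteration entirely.
struct RunningTotal {
    values: Vec<i64>,
    total: i64,
}

impl Summable for RunningTotal {
    fn iter_values(&self) -> Box<dyn Iterator<Item = i64> + '_> {
        Box::new(self.values.iter().copied())
    }

    fn sum_values(&self) -> i64 {
        self.total // O(1) native implementation, no iterator involved
    }
}
```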
In many cases, naive programmers could simply call the trait function instead of implementing their own algorithm.
Additionally, iterators can't model every interface between a collection (which is a data-provider algorithm) and the consuming-side algorithm. So we will need traits for algorithms anyway.
Yes, programmers will still need to use iterators when they roll their own algorithm and iterators are the appropriate interface for their consuming-side algorithm. Hopefully they will be encouraged to put that implementation behind a trait, so collection implementers can decide to optimize their collection's implementation without iterators if that provides a performance benefit.
In all cases, collection implementers would normally choose to offer iterator and enumerable implementations as well.
I am not arguing that imperative spaghetti can be replaced entirely. Rather that we can hide it from many or most users, so why shouldn't we?
I fundamentally disagree with this characterization, so it's hard for me to follow your motivation. My suspicion is that this puts us at an impasse, and that's OK. It happens. (I note that "disagree" does not mean "Rust is a panacea.")
I think I understand your proposal as much as I can at this level.
Are you claiming there is no tsuris at all compared to simply calling a function? Are you claiming that if we don't encourage encapsulation of resource lifetime management, we are not likely to infect all the libraries and user-land code with this increased tsuris?
Did you see the following edit:
Also, why not provide the opportunity for increased performance? Seems like a no-brainer decision to me.
To the first question: I can't ever recall experiencing tsuris while using iterators in Rust.
To the second question: I'm not claiming that, but that doesn't mean I think your claim is true. Where's the evidence? Is this happening today? What code is impacted? Any crates in particular?
I have no counter-argument except the woeful one that I expect we will regret it in retrospect if we don't encapsulate the resource lifetime management as much as possible. And that is my experience versus your experience. We will see...
Btw, I offer the evidence of all the problematic questions on this forum about lifetime management. And that is from teaching the early-adopter super-experts. Just wait until the junior programmers come in.
And not encapsulating means lower performance in some cases. I thought Rust's claim was maximum performance and safety, not just safety.
In the section "The Basics of Explicit Lifetimes and Lifetime Parameters", if we don't define a lifetime annotation for `fn get_x`, then we get a compile-time error, because `x` references a resource that has gone out of scope and thus is no longer guaranteed to be allocated. No code is marked `unsafe`.
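For readers without the article at hand, here is a minimal reconstruction of that kind of example (the `Thing` struct and the `main` body are my own, assumed for illustration):

```rust
struct Thing {
    x: i32,
}

// The explicit lifetime ties the returned reference to the borrow of `t`.
fn get_x<'a>(t: &'a Thing) -> &'a i32 {
    &t.x
}

fn main() {
    let x;
    {
        let t = Thing { x: 42 };
        x = get_x(&t);
    } // `t` is dropped here, so `x` must not be used afterwards ...

    // ... and uncommenting this line is a compile-time error ("`t` does
    // not live long enough"). No `unsafe` is involved anywhere.
    // println!("{}", x);
}
```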
My point is that either Rust catches all such possibilities (which would mean there is no unsafe code in the entire program, which is unlikely), at the cost of several forms of coding gridlock, such as chasing down and fixing the source of an error when it is nested or non-local, and the gridlock you admit to, e.g. getting caught needing two mutable references in a circular-reference situation.
Or alternatively, we can't entirely trust Rust's resource lifetime checking, because unsafe code will forever be a reality of programming; thus my point that encapsulation is best where possible, so this problem doesn't infect everything.
If by "remember to enforce" you mean "deal with", then sure, having to think about lifetimes is a fundamental negative consequence of Rust's borrow checking system. I interpreted the phrase as meaning that you could forget to enforce it and accidentally write unsound code.
Can you elaborate what specific cases of superior performance with enumerables you think apply to Rust? As I said upthread, the particular claim of his I read regarding remote databases made little sense at all, and most of the rest of the paper was fairly specific to immutable GCed languages or Scheme in particular.
Afaics, you are oversimplifying and/or maybe misunderstanding the issue, which I understand to be that the imperative algorithm may need to know in advance whether there is another iteration in the queue before actually consuming it. For example, let's say the collection is an unbuffered stream (for performance reasons) over a remote high-latency connection.
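A hedged Rust sketch of the scenario, with a hypothetical `RemoteStream`: under the pull model, merely asking whether another element exists already forces the blocking fetch, because the only way to know is to try to read:

```rust
struct RemoteStream;

impl RemoteStream {
    fn fetch_next_blocking(&mut self) -> Option<u8> {
        // imagine a high-latency, unbuffered read from the wire here
        None
    }
}

impl Iterator for RemoteStream {
    type Item = u8;
    fn next(&mut self) -> Option<u8> {
        self.fetch_next_blocking()
    }
}

// A consumer that must know whether more data exists *before* consuming
// it (e.g. to decide whether to emit a separator) pays the latency early:
fn print_separated(stream: RemoteStream) {
    let mut iter = stream.peekable();
    while let Some(byte) = iter.next() {
        print!("{byte}");
        if iter.peek().is_some() {
            // peek() has already performed the next blocking fetch
            print!(", ");
        }
    }
}
```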
We often err when we prematurely specialize our analysis.
I'm pretty sure Oleg wasn't referring to anything of the sort - at least in the paragraph I quoted. Also, while what you said may be a concern in some cases, it seems to me that it broadly argues for iterators providing a 'pull' API (i.e. cursors) over 'push', not the other way around.
I've heard that Oleg is a very smart guy. I'd be careful about assuming that Oleg's mental model isn't more complex than what he writes down in an informal document that isn't a published research paper. There are numerous examples of people misinterpreting my posts because the mental model backing my comments carries more information than I can lay down in one bang on the keyboard.
Anyway, what Oleg intended is irrelevant. We only need to analyze the subsequent elucidations that have been stated.
I am claiming that even enumerable and iterator together won't model every interface needed.
It is impossible for LLVM to optimize away all cases of bounds checks. LLVM can't decipher imperative spaghetti to determine that a bounds check is redundant. You are fantasizing about super-optimizers, which can never exist.
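A small illustration of why elision depends on what the optimizer can prove (ordinary Rust, nothing assumed beyond the standard library):

```rust
// Indices arriving from elsewhere keep their per-access check: nothing
// proves `i < data.len()`, so `data[i]` must stay bounds-checked.
fn sum_by_index(data: &[u64], indices: &[usize]) -> u64 {
    indices.iter().map(|&i| data[i]).sum()
}

// Iterating the slice directly gives the compiler the proof it needs,
// so no per-element check is emitted.
fn sum_direct(data: &[u64]) -> u64 {
    data.iter().sum()
}
```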
I believe you entirely misrepresented his point. I believe he is pointing out that a suitably sophisticated DSL could convert that functional expression into the appropriate SQL query, entirely eliminating the enumeration. It is a special case, but I think he is making a general abstract point: enumerables give the data provider's (aka the collection's) algorithm more information than iterators do.
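A hedged sketch of that general point, with entirely hypothetical names: when the provider receives the whole operation rather than being driven element by element, it can translate the operation instead of enumerating:

```rust
trait Queryable {
    fn count_where(&self, column: &str, equals: &str) -> usize;
}

struct RemoteTable {
    name: String,
}

impl Queryable for RemoteTable {
    fn count_where(&self, column: &str, equals: &str) -> usize {
        // Compile the whole operation into one query the database
        // executes remotely, rather than pulling rows one at a time:
        let sql = format!(
            "SELECT COUNT(*) FROM {} WHERE {} = '{}'",
            self.name, column, equals
        );
        // ... send `sql` to the server and return the single result.
        // Stubbed here, since this is only a sketch:
        let _ = sql;
        0
    }
}
```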