Any possible improvements to this functional program?

I do appreciate your point.

When I speak of "obfuscation and added complexity" I do not mean "different to what I'm used to". Admittedly I might have an issue with that because of my background and experience.

However, I think it is actually measurable, quantifiable. All we have to do is count the number of concepts required to understand each of the various solutions presented here.

I'm not sure of a good way to count those concepts, but anything outside of simple arithmetic expressions, sequence, selection and iteration is excessive baggage. Lambdas, closures, really?

Functional programming is certainly more abstract in some measurable, quantifiable sense. As a mathematician I am indeed very familiar with the fact that there are increasingly abstract ways of looking at many things, but I will allow myself to claim that sometimes more concepts come with increased clarity, especially if those additional concepts are understood deeply enough.

2 Likes

My background is in math and physics, and from there I'm not sure you'd want to optimize conceptual complexity that way: the world is a complex place, and minimizing complexity in your language necessarily inhibits your ability to accurately model highly complex systems. Software is no different: if you only want to write trivial programs, then sure, use a simple language. But if you want to model problems that have never been solved, you should expect the language used to solve them to entail a large number of complex concepts. Good abstractions make this more manageable, and that's why most of math and physics is done using functional language -- because it is good at revealing important structure.

2 Likes

… while suppressing somewhat-irrelevant implementation details. Like @ZiCog I have little practical experience in functional programming, but like you and @alice my formal background is mathematics and physics (albeit with subsequent decades of computer architecture/software/hardware/chip-design experience). Thus I appreciate the elegance and concise expression of functional programming, even though I am often not yet able to arrive at that form of code myself.

Ah ha, a mathematician. Excellent.

I am hopeless at mathematics but endlessly curious about it and totally love what little I understand of it.

Over the decades I have come to believe that there are a lot of beautiful and pretty simple concepts in mathematics that are totally hidden from most people by the wall of symbology, equations, and abstractions between them and the concept in question. A wall they cannot penetrate.

I'm not sure I can express what I mean very well.

A simple example might be the PID control algorithm (Proportional, Integral, Derivative). You can pretty much explain that to a smart ten-year-old and have them understand why you are taking differences and sums of the error signal to produce a control signal. There is an intuitively obvious idea there. But said ten-year-old would never see that from stumbling into a book on control theory with its integral and differential calculus.
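To make that concrete, a minimal discrete PID step might look something like this in Rust (the names and gains are purely illustrative, not from any particular controller):

struct Pid {
    kp: f64,
    ki: f64,
    kd: f64,
    integral: f64,   // running sum of the error signal
    prev_error: f64, // previous error, for the difference term
}

impl Pid {
    fn update(&mut self, setpoint: f64, measured: f64, dt: f64) -> f64 {
        let error = setpoint - measured;
        self.integral += error * dt;                     // the "sum" part
        let derivative = (error - self.prev_error) / dt; // the "difference" part
        self.prev_error = error;
        self.kp * error + self.ki * self.integral + self.kd * derivative
    }
}

No calculus in sight: just sums and differences of the error signal.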

I'm not against all this abstraction. But I do feel one's code should be readable by as many as possible without putting that wall of symbology and abstraction between them and what your code actually does.

On the bright side, being of a practical bent, I have run all the solutions suggested in this thread under "cargo bench".

Interestingly they all come in at about the same speed on my x86 PC:

test tests::burjui_b   ... bench:          91 ns/iter (+/- 2)
test tests::exphp_b    ... bench:          92 ns/iter (+/- 5)
test tests::jethrogb_b ... bench:          88 ns/iter (+/- 2)
test tests::marcianx_b ... bench:          93 ns/iter (+/- 4)
test tests::thor314_b  ... bench:         112 ns/iter (+/- 5)
test tests::zicog1_b   ... bench:         114 ns/iter (+/- 4)
test tests::zicog2_b   ... bench:          93 ns/iter (+/- 4)

Seems jethrogb comes in the winner here. With that damn mem::replace thing :slight_smile:

There are a couple of functional solutions that lose badly. But otherwise they beat my imperative loop.

1 Like

One thing to consider (which I feel is often misunderstood) is that abstractions are not there to prevent you from having to learn the underlying details. Variables have to be mapped to memory addresses (or registers), but because the address is arbitrary, we prefer to refer to them by name instead, which abstracts over the address. But it is still to your advantage to know that the underlying model is that of a memory address. So the purpose of the abstraction is to allow you to go farther with less knowledge, by allowing you to focus on the most relevant information. It is not to allow you to go on with no knowledge.
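A trivial Rust snippet shows both layers at once (the printed address is arbitrary and will differ from run to run):

fn main() {
    let x: u32 = 42;
    // We refer to the value by name; the name abstracts over whatever
    // address the compiler happened to pick for it.
    println!("x = {} at {:p}", x, &x);
}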

I do feel one's code should be readable by as many as possible without putting that wall of symbology and abstraction between them and what your code actually does.

Code becomes less readable when you include noise. By learning good abstractions, you learn what is noise and what is not. Of course if you lack the background to tell the difference, and everything looks like signal to you, then the abstraction will simply look like it is hiding information.

The symbology of calculus hides those sums and differences because those things are not necessary for calculus to work. They're simply one (arbitrary) way to apply that structure, just like a specific set of memory addresses are just one arbitrary way to store variables.

2 Likes

You have possibly said something profound there that I have not understood or not seen before. Despite years of studying maths in school and university.

The calculus that I know of uses symbols like dy/dx and that big integral sign, ∫. It very clearly speaks of differences and sums. It's there in the very names, Integral and Differential Calculus.

The wonder of differential calculus is that, in the limit, it reduces those differences to zero and produces a meaningful result for what would otherwise be 0/0. Magic!

Is there another way of thinking about calculus we should be aware of?

Yes, exactly what I have been saying all along.

For the problem as stated in this thread I can almost be sure there would be less "noise" in the solution if I wrote it in assembly language. Let me work on that...

There are plenty of ways to think of calculus without ever using limits. The idea is to create an abstract language that obeys the foundational rules (exponent, quotient, product, and chain rules, etc.). (Incidentally, this technique is also how we define exponentiation on real numbers, which you're probably more familiar with.) Analogously, you can define a programming language's semantics without ever referencing memory addresses, by highlighting the relationships between the language's structures.
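A standard one-line example: the power rule falls straight out of the product rule, with no limit anywhere in sight:

d(x·x) = x·dx + dx·x = 2x·dx,   i.e.   d(x²)/dx = 2x

Nothing was used there except the product rule as an algebraic rewrite rule.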

Functional languages tend to define primitives that allow abstracting over variables (pipelines) or function arguments (point-free style), and those languages can be given clear semantics without referencing the underlying machinery of addresses, indexes, or even statement order. This is very useful if you're modelling problems that look like pipelines (such as in this thread) or that express declarative relationships, such as parsers.
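For instance (a toy example of my own), a Rust iterator pipeline states the relationships directly, with no indexes, addresses, or statement ordering to reason about:

// Sum of the squares of the even numbers in a slice.
fn sum_even_squares(xs: &[i64]) -> i64 {
    xs.iter()
        .filter(|&&x| x % 2 == 0)
        .map(|&x| x * x)
        .sum()
}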

2 Likes

I've been learning from this thread and playing with the code. Here's a version that abstracts out an "iterize" function that makes such Iterators for you.

fn main() {
    dbg!(fib().take(6).collect::<Vec<i32>>());
}

// Returns an Iterator for the Fibonacci sequence: 1 1 2 3 5 8 ...
fn fib() -> impl Iterator<Item = i32> {
    iterize((1,1), |(a,b)| (b, a+b))
}

// Produces an Iterator by induction.
// Given an initial state of type (R,S) and a function that produces
// the next state from an existing state, return an Iterator for the Rs.
// So in (R,S), R is the part that gets (R)eturned by the Iterator,
// and S is any additional (S)tate used internally.
fn iterize<R: Copy, S: Copy>(s0: (R,S), f: impl Fn((R,S)) -> (R,S)) -> impl Iterator<Item = R> {
    let mut state = s0;
    std::iter::repeat_with(
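        // Step the state forward to f(state), yielding the R from the old state.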
        move || { state.swap(f(state)).0 }
    )
}

// a.swap(b) sets a to b and returns the old value of a.
pub trait Swap: Sized {
    fn swap(&mut self, value: Self) -> Self;
}
impl<T> Swap for T {
    fn swap(&mut self, new: Self) -> Self {
        std::mem::replace(self, new)
    }
}

I don't know how well it benches.

This is long, of course, but in addition to fib() it offers two abstractions (a.swap(b) and iterize()) which are hopefully useful in other situations.

I think R and S need to be Copy so that f(state) can take a copy of the state while the closure still owns it (std::mem::replace itself doesn't require Copy).

Credit to KrishnaSannasi for the Swap trait, which cleans up the replace:

1 Like

Mind blowing.

I really don't want to bash on your code or Rust or anything.

I'm just wondering if I will ever live long enough for that not to look like line noise. Seems more and more unlikely...

I know what you mean. It's so easy to understand the use of it:

iterize((1,1), |(a,b)| (b, a+b))

but the function signature is just awful!

I'm not sure about that. I have no idea what the intention is there.

I'm wondering why there are tuples in there. Why not an array?

Oh, sorry -- I've been looking at it for so long...

It's basically induction. You provide an initial state s0 and a rule that produces s(n+1) from s(n).

The state, (R,S), is in two parts -- the part you want the iterator to (R)eturn, and any additional (S)tate. In this case (1,1) is just the first two numbers in the Fibonacci sequence.

What is returned by iterize() is an iterator over the R values.

So as you iterate, the state is getting modified like this:

(1,1) [the initial state]
(1,2) and the iterator produces 1
(2,3) and the iterator produces 1
(3,5) and the iterator produces 2
(5,8) and the iterator produces 3

R and S are both i32 here, but you could have an iterator that needed to keep around additional state.
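For instance (my example, reusing the same iterize), here the extra state is the next number to add to a running total:

// Triangular numbers 0 1 3 6 10 ...
// R is the running sum; S is the next addend.
fn triangular() -> impl Iterator<Item = i32> {
    iterize((0, 1), |(sum, n)| (sum + n, n + 1))
}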

As Alice pointed out (in another thread), the fact that R and S must be Copy is limiting, but it still covers a lot of simple cases. You could make a similar facility using Clone, but because std::mem::replace seemed to perform especially well in your benchmarks I wanted to stick with that.
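An untested sketch of what that Clone-based variant might look like, trading the Copy bound for an explicit clone of the state on each step:

fn iterize_clone<R: Clone, S: Clone>(
    s0: (R, S),
    f: impl Fn((R, S)) -> (R, S),
) -> impl Iterator<Item = R> {
    let mut state = s0;
    std::iter::repeat_with(move || {
        // Clone the state for f, swap the new state in,
        // and yield the R from the old one.
        let next = f(state.clone());
        std::mem::replace(&mut state, next).0
    })
}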

iterize would be more intelligible if the above three comments were included as a prefix to its definition.

Without that explanation, R and S were no more intelligible to me than left and right would have been.

Thanks for the feedback. I modified the comments in the code of that earlier post.

1 Like

Would you mind sharing your benchmark setup?

Who me?

OK, I put all the code we have in this thread into a repo here: https://github.com/ZiCog/rust_functional_fibo.git

The various functions are named after the forum user that posted them.

Test methods have a "_t" appended. The benches have an "_b".

However, I did take the liberty of changing the code to use u64, to stretch out the execution time a little.
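For anyone who hasn't used the (nightly-only) bench harness: each bench is roughly of this shape. The function under test and the depth of 40 are stand-ins here, not the repo's actual code:

#![feature(test)]
extern crate test;

#[bench]
fn zicog1_b(b: &mut test::Bencher) {
    // fib_zicog1 is a placeholder name for one of the thread's solutions.
    b.iter(|| fib_zicog1().take(40).last());
}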

2 Likes

Much appreciated.

Oddly, the iterize() version comes out slightly more efficient than all the rest every time I run it. I don't know why; I was expecting it to be exactly the same as jethrogb's version, but it is consistently 1 or 2 ns/iter faster than the next-best contender.

For example:

test tests::amigonico_b ... bench:          76 ns/iter (+/- 3)
test tests::burjui_b    ... bench:          78 ns/iter (+/- 11)
test tests::exphp_b     ... bench:          78 ns/iter (+/- 5)
test tests::jethrogb_b  ... bench:          78 ns/iter (+/- 3)
test tests::marcianx_b  ... bench:          78 ns/iter (+/- 4)
test tests::thor314_b   ... bench:          90 ns/iter (+/- 4)
test tests::zicog1_b    ... bench:          85 ns/iter (+/- 5)
test tests::zicog2_b    ... bench:          78 ns/iter (+/- 4)