Éole - A Lévy-optimal lambda calculus evaluator without an oracle, written in Rust

Hi there!

Hope you are all doing well. First, some warnings:

  • Éole is experimental and not proven (yet...)
  • "Lévy-optimal" does not means "efficient"

For the impatient:

  • Untyped Lévy-optimal lambda calculus evaluator
  • Can create a representation of the internal structure with graphviz
  • Two reduction strategies (normal form [default] and weak head normal form)
  • An optional GC [on by default]
  • An optional memory compactor [off by default]
  • Optional limits on reduction steps and on the "readback" depth of the result (no limits by default)

A bit of context/theory:

I have been working on a Lévy-optimal lambda calculus evaluator. "Lévy-optimal" means that a beta reduction is never duplicated if it can be shared. The theory comes from Lévy in the 1970s, and the first algorithm was given by Lamping in 1990. So being "Lévy-optimal" can roughly be summarized as "doing the least possible number of beta-reductions".

The main idea is the following: lambda expressions can be represented with interaction nets, a computational model working by graph-rewriting. There are nodes for the application and the abstraction, and an occurrence of a variable is represented by a link to the corresponding abstraction. This "part" is called the "abstract algorithm".

In order to share parts of lambda expressions rather than duplicating them, there are special nodes called "fans". The big problem is how to pair fans: when two such nodes have to interact, we must decide whether they are "related" (working together for a specific share) or not (distinct, overlapping shares). The mechanism deciding this is called the "oracle".
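
To make the graph picture a bit more concrete, here is a minimal Rust sketch of how such nodes could be represented (purely illustrative; these are not Éole's actual types, and all the names are made up):

// Purely illustrative sketch -- not Éole's real data structures.
// Nodes live in an arena and point at each other by index (a "port").

type Port = usize; // index of another node in the arena

enum Node {
    // Application node: one port for the function, one for the argument.
    App { func: Port, arg: Port },
    // Abstraction node: a port to its body. Variable occurrences do not
    // store a name; they link back to their binding Lambda node.
    Lambda { body: Port },
    // An occurrence of a variable: a link to the corresponding abstraction.
    Var { binder: Port },
    // Fan node used to share a sub-graph between two contexts.
    // Deciding how two interacting fans pair up is the job of the "oracle".
    Fan { shared: Port, left: Port, right: Port },
}

struct Net {
    nodes: Vec<Node>, // graph rewriting mutates this arena
}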

So, Lamping's algorithm can be seen as "abstract algorithm + oracle". The first part is not really a problem (other than not being well adapted to the usual hardware). The second one, however, is problematic. The original solution needs to add extra nodes to the graph, which triggers more graph rewrites, which induces an overhead that is sometimes exponential even when the number of beta reductions is linear (the paper discussing this was awarded the ACM's "most influential paper of 1996": see the Most Influential ICFP Paper Award, bottom of the page).

I tried a new idea which relies on lazy labeling of fans. The theory behind it is not proven (WIP), but so far no counter-example has been found. Give it a try (and if you can find a counter-example, that would settle the question of the proof)!

The Rust part

The project is done in Rust (obviously :wink: ) and uses LALRPOP to parse the lambda expressions and clap for the command line. It is quite modular, so the garbage collector (GC) and the compactor can be swapped out. The GC being a critical component, it is designed to work by static specialization (rather than through trait objects).
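
To illustrate what I mean by static specialization (hypothetical trait and type names here, nothing from Éole's actual API), the evaluator can be made generic over the GC so the compiler monomorphizes the hot path instead of going through dynamic dispatch:

// Illustrative sketch of static dispatch vs. trait objects.
// All names are made up for the example.

trait Gc {
    fn collect(&mut self);
}

struct NoGc;
impl Gc for NoGc {
    fn collect(&mut self) {} // GC disabled: do nothing
}

struct MarkSweep;
impl Gc for MarkSweep {
    fn collect(&mut self) { /* mark & sweep the net */ }
}

// Evaluator<NoGc> and Evaluator<MarkSweep> are distinct, fully
// monomorphized types: calls to `collect` can be inlined, and there is
// no virtual dispatch in the reduction loop.
struct Evaluator<G: Gc> {
    gc: G,
}

impl<G: Gc> Evaluator<G> {
    fn step(&mut self) {
        // ... perform one graph rewrite ...
        self.gc.collect();
    }
}

// The alternative would be a `Box<dyn Gc>`, which is more flexible at
// runtime but pays for dynamic dispatch on every call.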

I have been on and off with Rust since 2012, never really doing/finishing anything, but I always liked the idea of it. So this is my first "real" Rust project. Comments and critiques are welcome (but let's avoid bikeshedding).

Take this project for what it is: an experiment that should not be used for anything other than curiosity and fun! Hope you enjoy it!

Cheers,
Matt


See also: OPTIMAL REDUCTIONS IN THE LAMBDA-CALCULUS - Jean-Jacques Lévy

I don't begin to understand a word of it.

Is this something to do with programming :slight_smile:

That paper is indeed complicated, and I'm not fully qualified to explain it. I based my work on the book on the subject by Asperti & Guerrini, "The Optimal Implementation of Functional Programming Languages". The gist of it is that you can consider families of beta-reductions rather than the beta-reductions themselves, and reducing a family reduces all its members in one step. Here is an example:

I = \i.i             // Identity function
Delta =  \d.d d      // Delta function
M = (\x. x I)(\y. Delta(y z))
  // Reduction of M:
  = (\x. x I)(\y. (y z)(y z))
  = (\y.(y z)(y z)) I
  = (I z)(I z)   // a)
  = (I z) z      // b)
  = z z

You can see that, at step 'a)', we have to do the same job twice. But actually, the two 'I z' come from the same family. The idea is that reducing one should also reduce the other, so that you don't have to do step 'b)' (the only way to do that is, of course, through sharing).
Step 'a)' could be rewritten like this, with the label ¹

(I z)¹(I z)¹

to indicate the sharing.

In essence, that is what Éole is implementing, based on the work of Lévy with some tweaks to make it more efficient (less work done managing the shares).

Hope that helps give an idea!

Cheers,
Matt

Thank you for the explanation HerrmannM.

Sadly it's a bit like trying to explain calculus to a cat. An MRI would show that nothing in my brain lights up with even the faintest hint of recognition.

Is it possible to write functional programs in a syntax that looks like, well, functions?

I know, silly question.

Just a note that Aaron Stump gave a couple of screencasts on this subject last year.


@ZiCog
You have:

Id = \i.i      <=>   fn id<T>(x: T) -> T { x }
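
And if you want it to look even more like the lambda, a Rust closure gives roughly the same shape (small illustrative example, unrelated to Éole's code):

fn main() {
    // The identity function as a regular generic function...
    fn id<T>(x: T) -> T {
        x
    }

    // ...and as a closure, the closest Rust analogue of \i.i
    let id_closure = |x: i32| x;

    println!("{} {}", id(42), id_closure(42));
}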

@ratmice
Thank you for the links!
