Hi there!

Hope you are all doing well. First, some warnings:

## Warnings:

**Éole is experimental and not proven (yet...)**

**"Lévy-optimal" does not mean "efficient"**

## For the impatient:

- Untyped Lévy-optimal lambda calculus evaluator
- Can create a representation of the internal structure with graphviz
- 2 reduction strategies (normal form [default], weak head normal form)
- An optional GC [on by default]
- An optional memory compactor [off by default]
- Some limits (reduction steps, "readback" depth of the result; no limit by default)

## A bit of context/theory:

I have been working on a Lévy-optimal lambda calculus evaluator. "Lévy-optimal" means that a beta reduction is never duplicated if it can be shared. The theory comes from Lévy in the 1970s, and the first algorithm was given by Lamping in 1990. So being "Lévy-optimal" can roughly be summarized as "doing the least possible number of beta reductions". For example, in `(λx. x x) ((λy. y) z)`, a naive strategy first duplicates the inner redex `(λy. y) z` and then reduces it twice; a shared evaluation reduces it only once.

The main idea is the following: lambda expressions can be represented as interaction nets, a computational model that works by graph rewriting. There are nodes for application and abstraction, and an occurrence of a variable is represented by a link back to the corresponding abstraction. This part is called the "abstract algorithm".
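To make the graph structure concrete, here is a minimal sketch of such nodes in Rust. The names (`Node`, `App`, `Abs`, `Var`) and the flat-arena encoding are purely illustrative assumptions, not Éole's actual internal representation:

```rust
// Illustrative sketch of interaction-net node kinds; these names are
// hypothetical and do not reflect Éole's internals.
#[derive(Debug, Clone, PartialEq)]
enum Node {
    App { func: usize, arg: usize }, // application: links to function and argument
    Abs { body: usize },             // abstraction: links to its body
    Var { binder: usize },           // variable occurrence: link back to its abstraction
}

// Encode (λx. x) (λy. y) as a flat arena of nodes.
fn identity_applied_to_identity() -> Vec<Node> {
    vec![
        Node::App { func: 1, arg: 3 }, // 0: root application
        Node::Abs { body: 2 },         // 1: λx
        Node::Var { binder: 1 },       // 2: x, linked back to λx
        Node::Abs { body: 4 },         // 3: λy
        Node::Var { binder: 3 },       // 4: y, linked back to λy
    ]
}

fn main() {
    let net = identity_applied_to_identity();
    println!("{} nodes", net.len()); // prints "5 nodes"
}
```

Note how the `Var` node carries no name at all: the link to its binder replaces the variable, which is exactly what makes the graph form convenient for rewriting.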

In order to share parts of lambda expressions rather than duplicating them, there are special sharing nodes called "fans". The big problem is how to pair fans: when two such nodes interact, we must decide whether they are "related" (working together on the same share) or not (two distinct, overlapping shares). The mechanism that decides this is called the "oracle".
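In the classic presentations of the algorithm, this decision boils down to comparing labels: fans carrying the same label annihilate (they were a matched pair), while fans with different labels duplicate each other. A tiny sketch of that rule, assuming plain integer labels (Éole's lazy labels are more subtle than this):

```rust
// Sketch of the fan/fan interaction rule from the optimal-reduction
// literature; plain integer labels are an assumption for illustration.
#[derive(Debug, PartialEq)]
enum FanRule {
    Annihilate, // same label: the two fans were paired and cancel out
    Duplicate,  // different labels: each fan copies the other
}

fn fan_meets_fan(label_a: u32, label_b: u32) -> FanRule {
    if label_a == label_b {
        FanRule::Annihilate
    } else {
        FanRule::Duplicate
    }
}

fn main() {
    println!("{:?}", fan_meets_fan(0, 0)); // prints "Annihilate"
    println!("{:?}", fan_meets_fan(0, 1)); // prints "Duplicate"
}
```

The hard part, of course, is not this comparison but maintaining correct labels as the graph is rewritten, and that is where the oracle's overhead comes from.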

So, Lamping's algorithm can be seen as "abstract algorithm + oracle". The first part is not really a problem (other than being poorly adapted to the usual hardware). The second one, however, is problematic: the original solution adds bookkeeping nodes to the graph, which triggers more graph rewrites, inducing an overhead that can be exponential even when the number of beta reductions is linear (the paper discussing this was awarded the title of "most influential paper of 1996" by the ACM: https://www.sigplan.org/Awards/ICFP/, bottom of the page).

I tried a new idea that relies on lazy labeling of fans. The theory behind it is not proven yet (WIP), but so far no counter-example has been found. Give it a try (and if you can find a counter-example, that would settle the question of the proof)!

## The Rust part

The project is written in Rust (obviously) and uses LALRPOP to parse the lambda expressions and clap for the command line. It is quite modular, so the garbage collector (GC) and the compactor can be swapped out. The GC being a critical component, it is designed to work by static specialization (rather than through trait objects).
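The static-specialization idea can be sketched like this: make the evaluator generic over its collector, so each instantiation is monomorphized and `collect` calls can be inlined, instead of going through dynamic dispatch on a `dyn` trait object. All names here (`GarbageCollector`, `NoGc`, `Evaluator`) are hypothetical, not Éole's API:

```rust
// Illustrative sketch of GC selection by static specialization;
// the trait and type names are invented for this example.
trait GarbageCollector {
    fn collect(&mut self) -> usize; // e.g. number of nodes reclaimed
}

// A collector that does nothing (like running with GC disabled).
struct NoGc;
impl GarbageCollector for NoGc {
    fn collect(&mut self) -> usize {
        0
    }
}

// The evaluator carries its collector as a type parameter: each
// instantiation is compiled separately, with no vtable indirection.
struct Evaluator<G: GarbageCollector> {
    gc: G,
}

impl<G: GarbageCollector> Evaluator<G> {
    fn new(gc: G) -> Self {
        Evaluator { gc }
    }
    fn run_gc(&mut self) -> usize {
        self.gc.collect()
    }
}

fn main() {
    let mut eval = Evaluator::new(NoGc);
    println!("{}", eval.run_gc()); // prints "0"
}
```

The trade-off is the usual one: generics give the optimizer a chance to inline the hot GC path, at the cost of one compiled copy of the evaluator per collector.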

I have been on and off with Rust since 2012, never really doing/finishing anything, but I always liked the idea of it. So this is my first "real" Rust project. Comments and critiques are welcome (but let's avoid bikeshedding).

Take this project for what it is: an experiment that should not be used for anything other than curiosity and fun. Hope you enjoy it!

Cheers,

Matt