For those who have manually transpiled Haskell code to Rust (lots of monads, but little worry about laziness-dependent code that would infinite-loop if evaluated strictly), what lines/day were you able to achieve?
Is the set of people who have done so successfully nonempty? If so, that's quite a nontrivial feat.
And I'd especially be interested in how to compile monads to native, non-lazy, non-garbage-collected code.
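For the monad part specifically, one common (if partial) mapping is Haskell's `Maybe`/`Either` onto Rust's `Option`/`Result`, with do-notation becoming the `?` operator. This is a hedged sketch of that idea on a made-up `safeDiv` example, not a general transpilation scheme; it only covers failure-style monads, not `State`, `IO`, etc.

```rust
// Haskell:
//   safeDiv :: Int -> Int -> Maybe Int
//   safeDiv _ 0 = Nothing
//   safeDiv x y = Just (x `div` y)
fn safe_div(x: i64, y: i64) -> Option<i64> {
    if y == 0 { None } else { Some(x / y) }
}

// Haskell do-block:
//   calc a b c = do
//     q1 <- safeDiv a b
//     q2 <- safeDiv q1 c
//     pure (q1 + q2)
fn calc(a: i64, b: i64, c: i64) -> Option<i64> {
    let q1 = safe_div(a, b)?; // `?` plays the role of monadic bind on the failure path
    let q2 = safe_div(q1, c)?;
    Some(q1 + q2)
}

fn main() {
    println!("{:?}", calc(100, 5, 4)); // Some(25): q1 = 20, q2 = 5
    println!("{:?}", calc(100, 0, 4)); // None: first division short-circuits
}
```

The payoff is that this style is native, strict, and needs no GC; the cost is that monads without a built-in Rust analogue need hand-rolled plumbing.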
I genuinely do not know.
Here's the thing: if someone can hand me Rust code that (1) has approximately the same structs / enums / traits, (2) infinite-loops due to being strict instead of lazy, and (3) eats unbounded memory due to the lack of GC -- I'd consider that extremely valuable... like maybe 80% of the work.
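On the strictness point: the classic trap is a Haskell idiom like `take 5 (map (*2) [1..])`, which terminates only because the list is lazy; a naive strict translation that materializes the list first never finishes. Rust's iterator adapters are themselves lazy, so they can often stand in for lazy lists directly. A minimal sketch (hypothetical example, not from the thread):

```rust
fn main() {
    // Haskell: take 5 (map (*2) [1..])
    // `1u64..` is an unbounded range, but nothing is computed until the
    // iterator is consumed, so `take(5)` bounds the work.
    let firsts: Vec<u64> = (1u64..)
        .map(|x| x * 2) // map (*2)
        .take(5)        // take 5
        .collect();
    println!("{:?}", firsts); // [2, 4, 6, 8, 10]
}
```

Code that relies on laziness in data structures (rather than in a pipeline) is harder; there you end up reaching for closures or `std::cell::LazyCell`-style thunks.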
Then I can just refactor away the infinite loops and the memory leaks caused by Arc/Rc cycles.
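For the Rc-cycle leaks mentioned above, the standard fix is to make back-edges `Weak` so the reference counts can actually reach zero. A sketch with illustrative names (a parent/child graph, not code from any real translation):

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Node {
    value: i64,
    parent: RefCell<Weak<Node>>,      // weak back-edge: does not keep the parent alive
    children: RefCell<Vec<Rc<Node>>>, // strong forward edges
}

fn main() {
    let parent = Rc::new(Node {
        value: 1,
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(vec![]),
    });
    let child = Rc::new(Node {
        value: 2,
        parent: RefCell::new(Rc::downgrade(&parent)),
        children: RefCell::new(vec![]),
    });
    parent.children.borrow_mut().push(Rc::clone(&child));

    // The weak edge can still be followed while the parent is alive:
    let p = child.parent.borrow().upgrade().unwrap();
    println!("child {} -> parent {}", child.value, p.value);
}
```

Had both edges been strong `Rc`s, the pair would keep each other alive forever; with the `Weak` back-edge, dropping `parent` frees the whole structure.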
Sounds to me like you need to treat the Haskell sources as a kind of requirement specification and develop the Rust code from its requirements as you would any other project. (That is to say, projects created via the waterfall method, where we used to have such requirement/high-level design specs. Not agile.)
Back in the day IBM reckoned a software engineer creates 10 lines of code per working day, averaged over the lifetime of a project (including testing, debugging, reworking, etc.). Sounds low at first sight, but every time I have checked it has not been far out. I see no reason why your case should be different.
Of course that is lines of delivered Rust code not lines of Haskell "specification". Makes me wonder how much bigger, or smaller, the code might become.
Also, treating the Haskell as a design specification and writing Rust to satisfy it might save you from churning out tons of horrible Rust in an attempt to "transpile" it line by line.
I can tell you that manual translation of even C code is quite difficult.
Getting not-awful Rust out of existing code is practically its own skill set, independent of just writing good Rust code in the first place.
I would suggest taking roughly the approach of:
- If you don't have one already, get an independent integration test harness set up, so you can validate both implementations with the same tests.
- Again, if you don't already have them, cut test entry points into your source (i.e. Haskell) code, to target low-level sections you can run and translate independently.
- Work your way up the code base, keeping all the tests passing as you go.
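The "same tests against both implementations" step above can be sketched as a differential harness: treat the original as the oracle and check the translation against it on shared inputs. Both functions here are hypothetical stand-ins; in practice `reference_impl` would shell out to (or FFI into) the Haskell binary.

```rust
// Stand-in for the original Haskell entry point (oracle).
fn reference_impl(xs: &[i64]) -> i64 {
    xs.iter().filter(|&&x| x % 2 == 0).sum()
}

// Stand-in for the in-progress Rust translation.
fn translated_impl(xs: &[i64]) -> i64 {
    xs.iter().copied().filter(|x| x % 2 == 0).sum()
}

fn main() {
    // One shared set of cases drives both implementations.
    let cases: Vec<Vec<i64>> = vec![vec![], vec![1, 2, 3, 4], vec![-2, 7, 10]];
    for case in &cases {
        assert_eq!(reference_impl(case), translated_impl(case), "mismatch on {:?}", case);
    }
    println!("all {} shared cases agree", cases.len());
}
```

The same shape extends naturally to property-based testing (e.g. `proptest`-style random inputs) once the fixed cases pass.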
I would avoid writing unit tests at first if you didn't write the original code, even if there are original ones to translate: you can all too easily translate the incorrect understanding twice, and then you've wasted twice the time. Once you have it passing integration tests, feel free to do some green-green refactors if you're into that.
But your actual question, what progress to expect, is very tricky to guess even with original source. I've found it to be largely dependent on how interdependent the source code is (and therefore generally really slow, because most code bases are mudballs).
Reminds me of an interesting old project. Back in '96 I was hired for nearly two years to recreate the functionality of thousands of lines of assembler in C so that it could be run on processors other than Intel. No documentation, no tests available, unintelligible short variable names, very few comments in the actual code and they were in Swedish!
So I stuck sheets of printout together into yards-long scrolls, drew lines on it to trace out the control and data flows (there is a reason they speak of "spaghetti code"), turned that into some kind of structured flow charts, then wrote the C code.
Sorting out the data model/flows was another story...
The only way to know it did what was expected was to have the last remaining member of the team that wrote the assembler review and test it for himself. Amazingly that code is still in use today having migrated through various chip architectures.
End of rambling. Good luck!