[PROPOSAL]: RLLVM (Rust implementation of LLVM) - aka Just use Cretonne


#1

TL;DR - Based on the feedback of @sunfishcode below (a core developer on Cretonne), his clarification of Cretonne’s goals with respect to being the back-end for the Rust compiler, and the general community feedback that a reimplementation of LLVM in Rust would likely NEVER gain sufficient traction, it seems that RLLVM would be neither useful to the community nor the best investment of time for someone looking to work on compilers and Rust. At least that’s the conclusion I come to based on the overall feedback. Thanks to the community for your opinions and insight!


I’ve been considering the idea of working on an implementation/port of LLVM to pure, idiomatic Rust (RLLVM). I made a post on r/rust (https://www.reddit.com/r/rust/comments/81iwxg/proposal_rllvm_rust_implementation_of_llvm/) where I attempted to solicit some input from the community regarding both the viability and applicability of such a thing.

So far, the feedback has been as follows:

Reasons NOT to create an RLLVM

  • Too Big
  • Too Complicated
  • Companies (like Google and Apple) are building LLVM and individuals/small team cannot possibly compete with that
  • NOT USEFUL TO THE RUST COMMUNITY OR THE COMMUNITY AT LARGE (is this true? opinions?)
  • Cretonne is a thing and seems to be a good start on a replacement for what LLVM accomplishes (with different trade-offs), written in Rust (links are on the Reddit post; as a new user I can’t include more than two here)

Reasons to create an RLLVM

  • Help evolve the Rust language by using it for something as low-level as LLVM thereby helping to identify any weaknesses or possible improvements to the language
  • Create a more robust LLVM (due to Rust’s safety guarantees, ergonomics, and verb(algorithm)-focused programming style)
  • Make Rust truly “self-hosting”, with nothing between it and the metal other than assembly (as opposed to having a hard dependency on C/C++ for compilation to machine code)
  • SAFELY parallelize parsing/compilation, AST Optimization (by function/module), etc. - for example: use Rayon to implement work-stealing for all of this
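
As a sketch of the last point (all types and the pass here are invented for illustration): Rayon’s `par_iter_mut()` would add work-stealing on top of this, but even plain `std` scoped threads show that Rust lets distinct functions of a module be mutated in parallel with no unsafe code.

```rust
// Hypothetical IR type; stand-in for a real per-function pass.
struct Function {
    instr_count: u32,
}

fn optimize(f: &mut Function) {
    // Pretend we eliminated one instruction.
    f.instr_count = f.instr_count.saturating_sub(1);
}

fn optimize_module(functions: &mut [Function]) {
    std::thread::scope(|s| {
        for f in functions.iter_mut() {
            // Each closure owns a disjoint &mut Function, so the
            // compiler proves there are no data races.
            s.spawn(move || optimize(f));
        }
    });
}

fn main() {
    let mut funcs = vec![Function { instr_count: 3 }, Function { instr_count: 5 }];
    optimize_module(&mut funcs);
    assert_eq!(funcs[0].instr_count, 2);
    assert_eq!(funcs[1].instr_count, 4);
}
```

Swapping the loop for Rayon’s `functions.par_iter_mut().for_each(optimize)` would keep the same safety guarantee while adding work-stealing scheduling.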

Preliminary Plan

  • Create a crate that defines the structs and fundamental trait impls for the IR objects (Modules, Functions, Instructions, etc.)
  • Create a crate to parse LLIR into those structs
  • Create a crate to serialize those structs to LLVM Bit-Code and/or back to LLIR
  • Ensure that any valid LLIR can pass through the parse-serialize cycle and then on to the current LLVM and work correctly
  • Create a crate that defines transformation functions on the in-memory IR structs that can be leveraged by specific optimization steps/pipelines
  • Implement some trivial transformations/optimizations in a crate leveraging that
  • Ensure the transformed result serializes to LLIR/LLBitCode correctly and can be correctly consumed by the current LLVM
  • At this point, hopefully the community will step up and begin implementing/porting transformations/optimizations for the LLIR
  • Begin work on LLIR to actual assembly for HW architectures in Rust
  • Use cbindgen to create a C/C++ interface to the library (for clients); bindgen goes the other direction (C/C++ headers into Rust)
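
The first crate in the plan might start as nothing more than plain data types. This is a sketch only; all names are invented for illustration, and the real LLVM IR has far more structure (types, attributes, metadata, basic blocks, and so on).

```rust
// Invented, minimal shapes for the IR-object crate.
pub enum Instruction {
    Add { dest: String, lhs: String, rhs: String },
    Ret { value: Option<String> },
}

pub struct Function {
    pub name: String,
    pub instructions: Vec<Instruction>,
}

pub struct Module {
    pub name: String,
    pub functions: Vec<Function>,
}

fn main() {
    let m = Module {
        name: "demo".into(),
        functions: vec![Function {
            name: "main".into(),
            instructions: vec![Instruction::Ret { value: None }],
        }],
    };
    assert_eq!(m.functions.len(), 1);
}
```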

In this plan, it would be a requirement that ALL code be implemented in safe Rust. Anything requiring unsafe should be wrapped in an appropriate safe abstraction that is ideally pushed into libstd or libcore (or at the least into a special low-level unsafe-abstractions crate). The goal would be to have no direct unsafe code anywhere in the RLLVM library.
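
One possible shape for such a safe abstraction (entirely hypothetical): an append-only arena for IR nodes, where handles are only ever produced by the arena itself, so the internal lookup can skip the bounds check while the public API stays safe. A fully sound version would also “brand” the `Id` type so a handle from one arena cannot index another; that is omitted here for brevity.

```rust
pub struct Arena<T> {
    items: Vec<T>,
}

#[derive(Copy, Clone, Debug)]
pub struct Id(usize);

impl<T> Arena<T> {
    pub fn new() -> Self {
        Arena { items: Vec::new() }
    }

    pub fn alloc(&mut self, value: T) -> Id {
        self.items.push(value);
        Id(self.items.len() - 1)
    }

    pub fn get(&self, id: Id) -> &T {
        // SAFETY: Ids are only handed out by alloc() for this arena's
        // indices, and the arena is append-only, so id.0 < items.len().
        unsafe { self.items.get_unchecked(id.0) }
    }
}

fn main() {
    let mut arena = Arena::new();
    let a = arena.alloc("add");
    let b = arena.alloc("ret");
    assert_eq!(*arena.get(a), "add");
    assert_eq!(*arena.get(b), "ret");
}
```

The point of the pattern is that the `unsafe` block is small, has a written safety argument, and can be exhaustively unit tested, while every caller sees only safe functions.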

Ideally, the code-base would avoid boilerplate through extensive use of “derive”, procedural macros, and/or Macros 2.0. It would also seek to leverage the latest innovations in Rust, like const generics (when they become available).
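
For the standard traits, derives already remove that boilerplate today; custom derives would extend the same idea to IR-specific code (visitors, printers, verifiers). A minimal illustration with invented types:

```rust
// Debug, Clone, and PartialEq are generated by the compiler; no
// hand-written boilerplate. A custom derive could generate
// IR-specific code (e.g. an AST visitor) the same way.
#[derive(Debug, Clone, PartialEq)]
enum Opcode {
    Add,
    Ret,
}

#[derive(Debug, Clone, PartialEq)]
struct Inst {
    op: Opcode,
    operands: Vec<u32>,
}

fn main() {
    let a = Inst { op: Opcode::Add, operands: vec![1, 2] };
    let b = a.clone();
    assert_eq!(a, b); // PartialEq came for free
    assert_eq!(format!("{:?}", a.op), "Add"); // so did Debug
}
```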

The code-base would NOT be a gradual in-place replacement of the C++ code, but rather an entire rewrite in idiomatic, safe, modern Rust that interfaces with the existing LLVM stages through serialized LLIR or LLBitCode. After examining the C++ code of LLVM (which is really nice and very well organized), it doesn’t seem like trying to gradually replace pieces with Rust (as is being done for Emacs, for example, which is C code) would be at all useful or productive. There is just too much friction between how things are properly done in Rust and how things are done in C++ for that to work.

Benefits of the Plan

  • If all the “unsafe” bits are in well-defined safe abstractions that can be fully unit tested and manually verified for soundness, then perhaps the whole RLLVM could be “proven” sound using the techniques pioneered here: https://www.reddit.com/r/rust/comments/6m46is/rustbelt_securing_the_foundations_of_the_rust/
  • All the LLVM optimizations could be in Rust without first implementing all of the LLVM stack (optimized IR -> Machine Level ASM). This would mean that optimization could be more provably sound (lack of UB)
  • Through the use of Macros 2.0, Traits, and ultimately things like Const Generics, could potentially significantly reduce the line count of the code necessary to implement IR optimization passes making it all more maintainable, auditable, and robust.
  • Would not require any changes to the front-end (rustc), which could continue to generate LLIR and rely on the optimizations and downstream infrastructure of RLLVM as if it were LLVM. RLLVM would pipeline to LLVM for stages that weren’t yet implemented in RLLVM until those stages could be implemented (obviously, this could make overall compilation slower, but that isn’t necessarily the case and would be irrelevant to the ultimate goal of having the whole chain in RLLVM)
  • Things like code-coverage instrumentation could be implemented in Rust (more easily) using the RLLVM crates
  • Adding new optimization passes to RLLVM would use Rust instead of C++ potentially allowing a larger pool of optimization implementers due to the safety guarantees and ergonomics of Rust (think Ruby/Python/Perl programmers who are not C++ experts who have now learned Rust)
  • Other benefits???

What about Cretonne?

There was some feedback on the above referenced Reddit post saying that investing time in Cretonne would be time better spent than time spent on something like RLLVM. After I looked into Cretonne more, I’m not immediately convinced of that fact for the following reasons:

  • Cretonne is explicitly NOT intended to be the new back-end for Rust static compilation
  • Many of its design and architecture choices seem motivated by JIT-compiling JS and statically/JIT-compiling WebAssembly; it doesn’t seem to want to be a general-purpose IR -> machine-assembly static optimizer and compiler (usable by any language, like LLVM)
  • Significant investment in Rust (things like code coverage) currently relies on LLVM being the back-end for Rust. Changing to Cretonne (which is not 100% compatible from an IR or goals perspective) would undermine that investment, whereas reimplementing LLVM as RLLVM would get all the benefits of Rust while maintaining all the existing investments in LLVM tooling, hooks, extensions, etc.

My understanding of Cretonne, its goals, and the degree to which it can and wants to become the ultimate replacement for LLVM could be completely off the mark. If that is the case, I’d be interested in any feedback, particularly from the Rust compiler team, the Cretonne team, those involved with Rust code coverage, etc. that helps me understand why investing time in RLLVM would be better spent on Cretonne or otherwise.

Thank you in advance for any input or feedback you can provide. If there is anything you feel I’m completely ignoring, please let me know.

NOTE: Apologies for the HTML links that aren’t links, but, as a new user I’m not permitted more than 2


Compile Toy language to LLVM IR, then JIT the LLVM IR
#2

LLVM is a 17-year-old project with 2M lines of code, 170,000 commits, and took an estimated 617 years of effort.

It would be absolutely awesome to have pure Rust LLVM, but that is a massive undertaking. Even with help of Rust and drawing experience from the existing LLVM project, it’s still going to be many years of effort.

Your plan sounds like “hey, let’s just jump in and start coding”, which to be fair is how many large projects have started, but do you fully realize the scale of the project?

You’re proposing a full rewrite, not a gradual replacement, so it will take years before the project is a worthy replacement for LLVM, and you’ll not only have to catch up with the LLVM of today, but also develop faster than hundreds of active LLVM contributors to keep up with its continued improvement.


#3

Agreed. Is it an even bigger project than Servo?
Even Apple hasn’t planned to reimplement LLVM in Swift.
Better to spend the time on other, smaller but still useful projects, because our community is not big enough.


#4

“The code-base would NOT be a gradual in-place replacement of the C++ code, but, rather an entire re-write in idiomatic, safe, modern Rust that interfaced with existing LLVM stages through serialized LLIR or LLBitCode because, after having examined the C++ code of LLVM” - OP

While he is talking about a complete rewrite, his plan is to pass the bytecode he generates off to LLVM to then be processed. Then, once that’s implemented, pass the next thing on, and so on until it’s completely ported.


#5

Why? I understand how developing an OS in Rust, or running Rust programs on bare metal, can help Rust evolve. But what is the principal difference between LLVM’s algorithms and the algorithms in, for example, rustc?


#6

I understand how developing an OS in Rust, or running Rust programs on bare metal, can help Rust evolve. But what is the principal difference between LLVM’s algorithms and the algorithms in, for example, rustc?

In that case, why is it so out of left field to try a reimplementation in Rust, if LLVM is just a bucket of fairly standard algorithms that aren’t really doing anything special? Now, I truly believe that is primarily the case: a HUGE bucket of pattern-matching and replacement algorithms that need to be extremely efficient (to keep compile times good). That last bit is what might help to evolve the language.


#7

LLVM is a 17-year-old project with 2M lines of code, 170,000 commits, and took an estimated 617 years of effort

OK, let’s try breaking this down. What is the core of LLVM?

  • Serializing/Deserializing LLIR
    • Read LLIR (machine-independent assembler) and create in-memory representations of structs/objects that correctly reflect the structure of the given LLIR file (the AST)
    • Serialize In-Memory Representation of LLIR back to LLIR and/or a binary bit-code format
  • LLIR AST Transformation
    • Algorithms to perform selection/matching and fundamental valid transformations on the AST (not talking about specific optimizations, talking about basic building blocks that optimization passes can be built from)
    • A BIG set of largely independent optimizations that use the selection/matching primitives and fundamental transformation building blocks to transform the AST to an equivalent more optimized AST
  • Transformation of LLIR AST to MSA AST
    • A largish set of fundamental transformation building blocks to transform the machine independent AST nodes into Machine Dependent (CPU/Architecture Specific) AST nodes
    • A mapping for each back-end architecture from MIIR (Machine Independent IR) to MSA (Machine Specific Assembler)
    • Serialization of MSA AST to Machine Specific Assembly for consumption by the Target Assembler
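
The serializing/deserializing bullet above comes with a natural correctness check: for any valid input, parse followed by serialize must reproduce it. A toy stand-in “IR” (whitespace-separated integers) shows the shape of that property; the real test would push LLVM’s own `.ll` test suite through the cycle.

```rust
// Toy stand-in for the LLIR round-trip property. Only the shape of
// the check matters; the "IR" here is just a list of integers.
fn parse(src: &str) -> Vec<i64> {
    src.split_whitespace()
        .map(|tok| tok.parse().expect("invalid token"))
        .collect()
}

fn serialize(ir: &[i64]) -> String {
    ir.iter()
        .map(|n| n.to_string())
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    let src = "1 2 3";
    // The round-trip property: serialize(parse(x)) == x.
    assert_eq!(serialize(&parse(src)), src);
}
```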

Now, about those 617 man-years. Let’s assume for the sake of argument that we can reimplement it in Rust in two-thirds of the time (which I don’t think is an unreasonable assumption). How much of the ~400 man-years is the core LLIR -> AST -> LLIR/bit-code? How much is the fundamental AST matching and basic transformation? How much is the optimization passes? How much is the MIIR AST -> MSA AST fundamental building blocks? How much is the machine-specific mappings? How much is the MSA AST -> machine assembler serialization?

My thesis is that the bulk of the man-hours is in the optimizations and machine-specific mappings. Let’s say 80%. That leaves about 80 man-years for the LLIR -> AST parsing, the LLIR AST -> serialized LLIR/bit-code, and the fundamental LLIR AST matching and transformation algorithms, as well as the fundamental MIIR -> MSA AST matching and transformation. Say half of those 80 man-years is the machine-independent portion. That’s 40 man-years.

First of all, that is absurd. There is no way it would take a single motivated, intelligent person 40 years to implement that, especially considering that the correct algorithms are already available in C++, so one doesn’t need to spend much time devising approaches (only deciding how best to implement each algorithm in idiomatic Rust). My gut feeling is that it is no more than 2 or 3 man-years of work (which would benefit greatly from multiple people working on it, so 12 people could likely do it in 6 months or so).

Now, once that part is done, creating optimizations can begin. This is the BULK of the first-stage work and is an inherently parallel development process: the more people implementing/porting optimizations, the faster the work gets done. Given that Rust should make it easier to code optimizations reliably and safely (or at least that’s the promise of Rust as I see it, and is in fact Rust’s claim to fame), there should be an extremely large pool of contributors able and willing to port optimizations. So even if the optimizations were 400 man-years of work, 1,200 developers giving 4 months each would get it done in 4 months; 600 developers, 8 months; 300 developers, 16 months. Not unreasonable given the swarm ability of the community and the highly parallel, well-defined nature of porting optimizations. Similar arguments can be made for the MIIR -> MSA stage of LLVM as well.

When put in terms such as these, the 617 man-years doesn’t seem insurmountable.

EDIT: So, 4 to 6 years doesn’t seem unreasonable to have LLVM fully reimplemented in Rust. Possibly less. The question is: what value would having an RLLVM in a ~5-year time-frame have for the Rust ecosystem and for the long-term viability, applicability, and market penetration of Rust?


#8

I think just go for it and let us know how you get on. You’ll surely learn something.


#9

I’m a Cretonne developer. I don’t have time at this moment to give a detailed response, but a lot of your assumptions appear incorrect. For example, we are actually planning a static rustc backend. So I’m interested in learning what we’re saying that’s leading to such misunderstandings so that we can correct it.


#10

@sunfishcode - Thanks for chiming in.

The main thing that led me to believe that Cretonne is not intended to be a full replacement for LLVM is, https://github.com/Cretonne/cretonne/blob/master/rustc.rst where it says:

The Cretonne project does not intend to compete with LLVM when it comes to optimizing release builds, but for debug builds where compilation speed is paramount, it makes sense to use Cretonne instead of LLVM.

Also, from the linked discussion:

Naturally the existing LLVM targets (both debug/release) would continue to be supported in the meantime (and probably forever; certainly for the foreseeable future).

And from the comparison of LLVM to Cretonne:

Since Cretonne instructions are used all the way until the binary machine code is emitted, there are opcodes for every native instruction that can be generated. There is a lot of overlap between different ISAs, so for example the iadd_imm instruction is used by every ISA that can add an immediate integer to a register. A simple RISC ISA like RISC-V can be defined with only shared instructions, while an Intel ISA needs a number of specific instructions to model addressing modes.

The above seems to indicate that representing the full gamut of op-codes that a specific HW architecture supports is not (and will not be) a goal of Cretonne, and would in fact be counter-productive to its goals. I read this to mean that Cretonne defines an abstract machine with a well-defined set of instructions that can be mapped to equivalent instructions (or sequences of instructions) on each particular HW back-end, but that any functionality not representable on all back-ends would not be representable in Cretonne at all. So if you were compiling for some architecture that had strange op-codes that did something really powerful, and those weren’t representable in some meaningful/useful fashion on all back-ends, then support for them would never be included in Cretonne.

From my reading of the goals and architecture as explained, it seems like Cretonne would have a common set of instructions that could be mapped to all supported back-ends, and compilers would emit only those instructions. Cretonne might then re-map some instructions to architecture-specific instructions before machine-code emission, but ultimately only the semantics of instructions supportable on all back-ends could be usefully leveraged by front-ends.

Am I misunderstanding?


#11

@gbutler69, I’m curious - what’s your experience/background with Rust and compiler engineering?


#12

As a Rust enthusiast and someone who frequently works with and on many parts of LLVM, this topic obviously catches my attention. I was and remain very unconvinced, though. I’ve not been able to synthesize a coherent thesis, but here’s a scattering of relevant aspects:

  • One of the biggest boons of LLVM is that it is a de facto standard in industry and research. It’s hard to overstate how many people are agreeing on using LLVM and how much this consensus helps all involved: there’s mountains of experience, shared code, interoperability, cooperation, etc. in and around LLVM and its community. Any rewrite that does not have the full backing of the LLVM community automatically loses this.

  • LLVM’s design is not bad, don’t get me wrong, but it’s far from perfect. Without even deviating from its “core design philosophy”, there are many things in all parts of LLVM that would have changed for the better if not for inertia. Even the improvements that were decided on sometimes stall (e.g., removal of pointee types). A from-scratch rewrite that unquestioningly inherits LLVM’s design would waste a great opportunity to learn from hindsight (just as LLVM benefited from hindsight with respect to earlier compiler projects).

    • And this is without even going into the many ways one could overthrow fundamental assumptions of LLVM (e.g., the “linear” IR vs a sea-of-nodes IR, or three address IR vs RTL).
  • While LLVM IR and the target-independent passes operating on it are obviously a big and important part of LLVM, the target-dependent parts are usually under-estimated. Code passing through LLVM spends a huge portion of its time in IRs other than LLVM IR, and a lot of optimization happens there. Even if one could perfectly copy all the LLVM IR related parts, the result would at best be a third of a production-quality compiler back-end.

  • LLVM has many problems, but none of the biggest ones are at all related to its implementation language. That is not to say I believe the LLVM code base to be memory safe or very parallelizable or anything, in fact I don’t. But the most frequent and the most serious issues are unrelated to that. Rewriting the same algorithms in a different language does nothing to fix miscompiles, improve compile times, categorically prevent certain missed optimizations, make backend work less manual and error prone, or help with any of the other issues that keep LLVM developers and users up at night.

  • LLVM is a moving target. It continually receives improvements, bug fixes, new features, refactorings, etc. so if one takes a snapshot of LLVM today and toils to rewrite that 1:1, the result will be a lot worse (on many axes) than LLVM is by the time the rewrite is finished.

And this is all without even going into estimates of who would have to work for how long, or how such a process could be structured. These are just fundamental issues that any project trying to replace LLVM must face.


#13

Virtually none. Hence, why I’m asking questions.


#14

That’s wrong. The very first sentence says “there are opcodes for every native instruction that can be generated,” and the last says “an Intel ISA needs a number of specific instructions.” The rest of the paragraph is saying that, for example, there’s only one iadd_imm instruction rather than one per architecture that supports it, not that these common instructions are the only ones Cretonne supports.

Here is where those x86-specific instructions are defined: https://github.com/Cretonne/cretonne/blob/master/lib/cretonne/meta/isa/intel/instructions.py


#15

@rkruppe - Thank you for the detailed response. This is exactly the kind of information I’m looking for (the lay of the land as seen by those more knowledgeable). That being said, I have a few counter-points (or perhaps caveats) to your points. Please don’t take this as me arguing with you and definitely don’t take it as me thinking I know better than you (as I’m 100% sure that is not the case), but a few things that come to mind are:

One of the biggest boons of LLVM is that it is a de facto standard in industry and research. It’s hard to overstate how many people are agreeing on using LLVM and how much this consensus helps all involved: there’s mountains of experience, shared code, interoperability, cooperation, etc. in and around LLVM and its community. Any rewrite that does not have the full backing of the LLVM community automatically loses this.

I definitely agree that this is a HUGE issue. Probably the biggest issue. And insight regarding this is what I hoped to solicit before I put significant time into something that is unlikely to ever garner sufficient community involvement. That being said, I’m wondering how much, “If you build it, they will come!” plays into this calculus.

Without even deviating from its “core design philosophy”, there are many things in all parts of LLVM that would have changed for the better if not for inertia. Even the improvements that were decided on sometimes stall (e.g., removal of pointee types).

It may not have been (and probably wasn’t) clear from my furtive attempt at a “Preliminary Plan” that the idea would be to initially follow the implementation of LLVM pretty closely, while remaining free to make different internal choices about how the IR is represented, how the transformation algorithms interact with the AST, etc. Compatibility, and the ability to gradually replace LLVM, would be preserved by treating the serialized LLIR and LLVM’s command-line arguments as the fixed interface; the internals could change wildly. Only once the existing functionality of LLVM was complete would there be discussion of changing the interface, but the implementation could be optimized for Rust however was befitting.

the target-dependent parts are usually under-estimated. Code passing through LLVM spends a huge portion of its time in IRs other than LLVM IR

This I was not completely aware of. My thinking was that machine-specific optimization and transformation happened only as a final, fairly limited pass over the machine-level ASM AST, after all LLIR optimization had completed and been translated. I wonder how important this ultimately is. If RLLVM didn’t have the same level of machine-specific AST optimization as LLVM, would that be terrible? Could a slightly modified design of how optimization is performed and represented internally in the Rust LLIR obviate most of the need for machine-specific optimizations?

My thought here is that MSA -> optimized MSA (for pipelining, cache coherency, CPU bugs, etc.) would happen in the assembler. Now, I understand that optimizations made in LLIR before translation to MSA may result in code that is sub-optimal for the CPU in question after instruction reordering and the like, and that had the LLIR optimization stage made different choices, the final ASM stage could do better. That then requires some sort of feedback loop to try different combinations of LLIR optimizations and end up with something the ASM stage can handle best.

To what degree could RLLVM pursue a genetic algorithm that tries candidates in the search space, invokes the ASM stage, computes the resulting final costs (expected/estimated clock cycles and/or dead cycles due to cache misses, etc.), and feeds back into the next generation, with a limit on the number of generations, to arrive at something “optimal/most fit”? Could this be made deterministic? Would non-deterministic be OK? These would be some interesting things to explore.

NOTE: This last paragraph, now that I’ve looked into the concept a little, seems to be me re-discovering the idea of a “stochastic super-optimizer”, so nothing of merit; it is already solidly in the literature. :disappointed:
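
For concreteness, the search loop described above could be sketched like this. Everything here is a toy: candidates are plain integers, and the cost function stands in for the assembler’s cycle estimate; a real superoptimizer would mutate instruction sequences and score them with a back-end cost model.

```rust
// Toy cost model: distance from an arbitrary "optimal" candidate.
fn cost(candidate: i64) -> i64 {
    (candidate - 42).abs()
}

// Deterministic neighborhood search with a generation limit (one of
// the questions above was whether such a search could be made
// deterministic; a fixed neighborhood and iteration order makes it so).
fn search(mut best: i64, generations: u32) -> i64 {
    for _ in 0..generations {
        for delta in [-3, -1, 1, 3] {
            let candidate = best + delta;
            if cost(candidate) < cost(best) {
                best = candidate; // keep the fitter candidate
            }
        }
    }
    best
}

fn main() {
    assert_eq!(search(0, 50), 42);
}
```

A genetic or stochastic variant would replace the fixed deltas with random mutations over a population; the overall structure (generate, score via the ASM stage, keep the fittest, bound the generations) stays the same.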

That is not to say I believe the LLVM code base to be memory safe or very parallelizable or anything, in fact I don’t. But the most frequent and the most serious issues are unrelated to that. Rewriting the same algorithms in a different language does nothing to fix miscompiles, improve compile times, categorically prevent certain missed optimizations, make back-end work less manual and error prone, or help with any of the other issues that keep LLVM developers and users up at night.

This is one of the areas where I imagined room for improvement beyond simply porting the algorithms from C++ to Rust (which is what motivated me to say it wouldn’t be an in-place, method-by-method replacement of LLVM, but would instead rely on the abstraction of LLIR to provide staged replacement of functionality), so that the opportunity would exist to optimize the internal AST representation for Rust and permit more parallel processing (using things like work-stealing through Rayon, for example).

LLVM is a moving target. It continually receives improvements, bug fixes, new features, refactorings, etc. so if one takes a snapshot of LLVM today and toils to rewrite that 1:1, the result will be a lot worse (on many axes) than LLVM is by the time the rewrite is finished.

This would definitely be a problem that would need to be addressed in the on-going development. The way I would hope this plays out is:

  • First, RLLVM is able to accept all the same LLIR (for the most part) even if all the optimizations aren’t yet implemented
  • then, as optimizations are ported, choices are made to optimize differently in RLLVM as opposed to LLVM (not having the exact same internal representation for example)
  • new features in LLVM are evaluated while RLLVM is under development, and ported or not based on their applicability and usefulness to Rust first, then, to their overall applicability and usefulness second (with perhaps different trade-offs made in the implementation)
  • at some point, hopefully, sufficient progress is made to “tip the balance”, with more contributions coming in to RLLVM: some from previous LLVM contributors, but hopefully also from Rustaceans who now feel empowered to contribute to something as Low-Level (pun intended) as ®LLVM.

Now, all of what I’ve just said is probably naively optimistic, and I am (through feedback such as yours) coming to believe that is the case. I’d still like to hear from anyone “in the know” who has a different take on the viability and usefulness, with some, if not concrete, then at least hard-dirt, reasons that the idea of RLLVM would make sense given the overall LLVM community situation.


#16

I read it that way because it seemed like Cretonne wants to JIT JS and JIT/compile WASM to CIR at run-time, and to be fast. I read that to mean that all machine-specific work would happen in the CIR (non-machine-specific) -> CIR (machine-optimized) -> machine-code-emission pipeline within Cretonne, with any machine-specific CIR instructions reserved for the latter two steps; compilers would emit only non-machine-specific CIR to Cretonne (so they wouldn’t need to worry about machine-specific details).


#17

That does look likely to happen, but that hardly makes arch-specific instructions “not representable in Cretonne at all.”


#18

To make a humorous example of what I’m trying to say: imagine some future CPU implements the DWIN (Do What I Need) instruction. No other CPU implements DWIN. Since DWIN is oh-so-powerful, any current instruction or sequence of instructions in CIR can be safely condensed to DWIN on that target CPU, because, what the hell, it’ll just DO WHAT I NEED. However, the DWIN instruction would never be allowed as input to Cretonne, because only this one omniscient CPU architecture supports it, so it wouldn’t be useful across the full set of back-ends (indeed, there could be no possible mapping of DWIN to the other architectures), and so no front-end like JS or WASM could rely on DWIN existing on the back-end or emit it to Cretonne. In other words, the input to Cretonne would be architecturally and purposefully limited to instructions that can be meaningfully represented on all back-ends.

That would be different from what LLVM seeks to do (as I understand it), which is to permit use of even the most esoteric back-end optimizations if the front-end has some knowledge of the back-end’s capabilities.

If Cretonne is NOT going to be intentionally limited in this way (perhaps limited only for run-time JS/WASM, but not otherwise), that would be a strong argument that time spent on RLLVM would be better spent on Cretonne. That’s what I’m looking for clarification on.


#19

Thanks for the feedback. I’ve now written up a new draft for rustc.rst, here. I’d be happy for any input.

To briefly address another question here: Cretonne’s IR will not be limited to platform-independent instructions. Supporting Rust well is a goal, and since Rust is a systems language, Rust programs often need extensive access to platform-specific features. That includes specialized instructions, as you describe, and also specialized ABI features. Cretonne doesn’t have a lot of support for these kinds of things yet, but we know that this is where we need to go.


#20

@sunfishcode - Thanks for this. This is what I was looking for: clarity regarding the goals of Cretonne and how they would play into any decision to pursue something like RLLVM. Given your clarifications of the goals, I truly believe I’d prefer to invest my time in trying to help out with Cretonne rather than jumping off this cliff. I really appreciate the feedback and guidance from everyone in the community, including yourself.