Good strategy for porting embedded software to different architectures?

So, for a while now I've had a bare-metal kernel targeting x86-64. However, I'm curious about RISC-V, so I thought I might try my hand at it and see how it goes. But I don't want to throw away all the work I've put into my x86-64-only kernel, so I'm trying to port it to RISC-V. It uses cargo-make, which currently targets x86 only. I could make the target an environment variable, but the difficulty comes when I need to pass in linker scripts. I saw that cargo build has a --config parameter, but is there a way I can make the build section target-specific? (This is particularly important since my x86 build needs to explicitly specify the relocation model, because it only works properly with PIE code.)

One thing you could do is have a "core" kernel crate with most of your main logic and then create an architecture-specific executable which depends on your core and has its own .cargo/config file and/or linker scripts.
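As a sketch of what that split could look like (the crate names, target triple, and linker-script path here are all illustrative, not from the original project), each architecture-specific binary carries its own `.cargo/config.toml`, so each one can pass its own linker script and flags:

```toml
# Illustrative workspace layout (names hypothetical):
#   kernel-core/    : architecture-agnostic library (no_std)
#   kernel-x86_64/  : x86 binary with its own linker script + .cargo/config.toml
#   kernel-riscv/   : RISC-V binary with its own linker script + .cargo/config.toml

# kernel-riscv/.cargo/config.toml
[build]
target = "riscv64gc-unknown-none-elf"

[target.riscv64gc-unknown-none-elf]
rustflags = ["-C", "link-arg=-Tlink-riscv.ld"]
```

The x86 crate's config would look the same, with its own target triple and its rustflags additionally including `"-C", "relocation-model=pie"`, so the PIE requirement stays local to the x86 build.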

Depending on how things are designed, that might also give you a nice place to put architecture-specific setup or functionality without polluting the main kernel with lots of conditional compilation directives.


I like that idea! A lot! But it's going to require some major restructuring and re-architecting. The architecture-specific code is mixed in with everything else, so it'll take a lot of time to pull all the components out into architecture-specific crates/binaries. It's possible, but it'll take a while. Perhaps it's time to publish this project and welcome contributors... I do have everything pretty much ready to go, and that would make the entire process a lot easier.

Yeah, that's what I had in mind when I said "depending on how things are designed". Unless you know from the start that you will be supporting multiple architectures and are quite disciplined, it's really easy to think "I'll just wrap this in a #[cfg(target_arch = "x86")]" and let architecture-specific code creep into the shared logic organically.

The way I would do it is to pull the architecture-specific code out into its own trait(s) and pass implementations into the places that need them. The extra generics will probably be annoying to carry around, but I would interpret that as my code telling me the business logic is too tightly coupled to architecture-specific details. That attitude can be taken too far, though: some code is just naturally very specific to one architecture.
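A minimal sketch of that trait approach (all names here are made up for illustration): the core logic is generic over a Platform trait, and each architecture crate supplies its own implementation.

```rust
// Architecture-specific operations the core kernel needs.
trait Platform {
    fn initialize_mmu(&mut self);
}

// Architecture-agnostic boot logic, parameterized by the platform.
fn kernel_init<P: Platform>(platform: &mut P) {
    platform.initialize_mmu();
    // ...rest of the boot sequence goes here...
}

// What a RISC-V crate might provide.
struct RiscV {
    mmu_ready: bool,
}

impl Platform for RiscV {
    fn initialize_mmu(&mut self) {
        // Real code would program the page tables; this is a stand-in.
        self.mmu_ready = true;
    }
}

fn main() {
    let mut platform = RiscV { mmu_ready: false };
    kernel_init(&mut platform);
    assert!(platform.mmu_ready);
}
```

The nice property is that the x86 and RISC-V binaries each instantiate `kernel_init` with their own type, and the compiler checks the whole contract up front.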

Another dependency injection technique is to avoid traits and let the linker wire up your dependencies. For example, you might have a core crate with a platform.rs module declaring that certain functions exist.

// kernel-core/src/platform.rs

// Provided by the architecture-specific binary at link time.
extern "C" {
  fn initialize_mmu();
}

Then in the architecture-specific executable you provide the implementation.

// kernel-riscv/src/main.rs

fn main() -> ! {
  kernel_core::init(); // transitively calls initialize_mmu() at the right time
  ...
}

#[no_mangle]
pub unsafe extern "C" fn initialize_mmu() {
  // bit-bashing code for setting up the MMU on a RISC-V machine
  ...
}

I haven't had to do this on any Rust projects, but it's a convenient way to mock things in C.

Of course, the downside is that failing to provide all the required functions will result in linker errors, and messing up the function signatures can result in UB. Both issues can be avoided using a simple macro that defines the platform-specific functions and enforces the correct signature.

It might turn an invocation like this:

declare_platform_functions! {
  initialize_mmu: my_initialize_mmu_impl,
  initialize_frobnicator: |_| { /* unused */ },
}

Into this:

#[no_mangle]
pub unsafe extern "C" fn initialize_mmu() { 
  my_initialize_mmu_impl();
}

#[no_mangle]
pub unsafe extern "C" fn initialize_frobnicator(f: *mut Frobnicator) {
  (|_| { /* unused */ })(f);
}
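For concreteness, here's a minimal macro_rules! sketch that could produce that expansion. It has one arm with the known platform functions, so each generated wrapper pins the expected extern "C" signature; an impl with the wrong arity or argument types fails to compile. (The Frobnicator type and the MMU_READY flag are stand-ins for illustration.)

```rust
use std::sync::atomic::{AtomicBool, Ordering};

pub struct Frobnicator;

// Stand-in for real MMU setup so we can observe the call.
static MMU_READY: AtomicBool = AtomicBool::new(false);

fn my_initialize_mmu_impl() {
    MMU_READY.store(true, Ordering::SeqCst);
}

// Each wrapper fixes the extern "C" signature; the supplied expression
// is called with exactly those arguments, so mismatches are compile errors.
macro_rules! declare_platform_functions {
    (initialize_mmu: $mmu:expr, initialize_frobnicator: $frob:expr $(,)?) => {
        #[no_mangle]
        pub unsafe extern "C" fn initialize_mmu() {
            ($mmu)();
        }

        #[no_mangle]
        pub unsafe extern "C" fn initialize_frobnicator(f: *mut Frobnicator) {
            ($frob)(f);
        }
    };
}

declare_platform_functions! {
    initialize_mmu: my_initialize_mmu_impl,
    initialize_frobnicator: |_| { /* unused */ },
}

fn main() {
    unsafe { initialize_mmu() };
    assert!(MMU_READY.load(Ordering::SeqCst));
}
```

Because the macro lists every required function, forgetting one shows up as a missing-argument error at the macro call site instead of a cryptic linker error.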

I've got no idea if it'll work for you, but it's a useful trick for your toolbox anyway.