Bindgen and enum types

Hello,

I'm currently working on a CEF wrapper and ran into a big problem when going cross-platform.

When I use bindgen on the CEF headers, it generates enums whose underlying type varies by platform. On Windows, every enum is i32, while on macOS and Linux the type depends on whether the enum contains negative values: u32 if all values are non-negative, i32 otherwise (which is clang's expected behavior, as far as I can tell). However, I haven't found a way yet to unify this so it's the same on all platforms (I actually don't care which type it picks, as long as it's consistent).

I'm using LLVM 9 on all three platforms. I have to generate the bindings separately for each platform because the headers differ (for example, the native window handle type that CEF renders into is different).
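For context, the generation step in build.rs looks roughly like this (a sketch, not my exact setup: the header path is a placeholder, and the two enum-style calls are guesses that would reproduce the module-consts and newtype outputs pasted below):

// build.rs (sketch): per-platform binding generation with bindgen.
fn main() {
    let bindings = bindgen::Builder::default()
        .header("cef/include/capi/cef_base_capi.h") // placeholder header path
        .default_enum_style(bindgen::EnumVariation::ModuleConsts)
        .newtype_enum("cef_transition_type_t") // bit-flag enums as newtypes
        .generate()
        .expect("failed to generate CEF bindings");

    let out = std::path::PathBuf::from(std::env::var("OUT_DIR").unwrap());
    bindings
        .write_to_file(out.join("bindings.rs"))
        .expect("failed to write bindings");
}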

Does somebody have an idea how to force LLVM to behave the same on all platforms?

Is it i32 or u32 on every platform?

Here's the Windows output:

pub mod cef_termination_status_t {
    pub type Type = i32;
    pub const TS_ABNORMAL_TERMINATION: Type = 0;
    pub const TS_PROCESS_WAS_KILLED: Type = 1;
    pub const TS_PROCESS_CRASHED: Type = 2;
    pub const TS_PROCESS_OOM: Type = 3;
}
pub mod cef_errorcode_t {
    pub type Type = i32;
    pub const ERR_NONE: Type = 0;
    pub const ERR_IO_PENDING: Type = -1;
    pub const ERR_FAILED: Type = -2;
    pub const ERR_ABORTED: Type = -3;
…

Here's the Linux and macOS output:

pub mod cef_termination_status_t {
    pub type Type = u32;
    pub const TS_ABNORMAL_TERMINATION: Type = 0;
    pub const TS_PROCESS_WAS_KILLED: Type = 1;
    pub const TS_PROCESS_CRASHED: Type = 2;
    pub const TS_PROCESS_OOM: Type = 3;
}
pub mod cef_errorcode_t {
    pub type Type = i32;
    pub const ERR_NONE: Type = 0;
    pub const ERR_IO_PENDING: Type = -1;
    pub const ERR_FAILED: Type = -2;
    pub const ERR_ABORTED: Type = -3;
…

For the enums with only non-negative variants, I would just use u32 even where bindgen generates i32. Hopefully that would clear up the discrepancies.
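Something like this post-processing pass over the generated file might do it (a sketch; it assumes the flat module-consts layout shown above, where each enum module closes with a lone `}`):

use std::{fs, path::Path};

// Sketch: widen `Type` to u32 in every generated enum module whose
// constants are all non-negative, mirroring clang's Linux/macOS choice.
fn normalize_enum_types(path: &Path) {
    let src = fs::read_to_string(path).expect("read bindings");
    let mut out: Vec<String> = Vec::new();
    let mut block: Vec<String> = Vec::new();
    let mut in_mod = false;

    for line in src.lines() {
        if line.starts_with("pub mod ") {
            in_mod = true;
        }
        if !in_mod {
            out.push(line.to_string());
            continue;
        }
        block.push(line.to_string());
        if line == "}" {
            // Only widen when no constant in this module is negative.
            let non_negative = !block.iter().any(|l| l.contains("= -"));
            for l in block.drain(..) {
                if non_negative && l.trim() == "pub type Type = i32;" {
                    out.push("    pub type Type = u32;".to_string());
                } else {
                    out.push(l);
                }
            }
            in_mod = false;
        }
    }

    fs::write(path, out.join("\n")).expect("write bindings");
}

It would run right after write_to_file in build.rs, so every platform ends up with u32 for the all-non-negative enums and i32 for the rest.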

I have constants like this:

#[repr(transparent)]
#[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
pub struct cef_transition_type_t(pub i32);
…
impl cef_transition_type_t {
    pub const TT_CHAIN_END_FLAG: cef_transition_type_t = cef_transition_type_t(536870912);
}
impl cef_transition_type_t {
    pub const TT_CLIENT_REDIRECT_FLAG: cef_transition_type_t = cef_transition_type_t(1073741824);
}
impl cef_transition_type_t {
    pub const TT_SERVER_REDIRECT_FLAG: cef_transition_type_t = cef_transition_type_t(-2147483648);
}
impl cef_transition_type_t {
    pub const TT_IS_REDIRECT_MASK: cef_transition_type_t = cef_transition_type_t(-1073741824);
}

In the original C header, those are:

typedef enum {
…
  TT_CHAIN_END_FLAG = 0x20000000,
  TT_CLIENT_REDIRECT_FLAG = 0x40000000,
  TT_SERVER_REDIRECT_FLAG = 0x80000000,
  TT_IS_REDIRECT_MASK = 0xC0000000,
  TT_QUALIFIER_MASK = 0xFFFFFF00,
} cef_transition_type_t;

Since values like 0x80000000 don't fit into an i32, clang helpfully wraps them to negative constants with the same bit pattern as the original u32 values. However, I'm not sure what happens when I cast these back to u32.

Here is what happens: in Rust, numeric casts with `as` between integer types of the same width are a plain bit reinterpretation, so nothing is lost. Casting the i32 constant back to u32 gives you exactly the original value.
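A quick check with one of the flags from the header above:

fn main() {
    // `as` between integer types of the same width reinterprets the bits,
    // so the i32 constant bindgen emits round-trips to the original u32.
    let original: u32 = 0x80000000; // TT_SERVER_REDIRECT_FLAG in the C header
    let generated = original as i32; // what bindgen emits on Windows
    assert_eq!(generated, i32::MIN); // same bits, printed as -2147483648
    assert_eq!(generated as u32, original); // casting back is lossless
}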
