Borrow static HashMap resources

I have this:

use std::borrow::Borrow;
use std::collections::HashMap;

static mut GLOBAL_DATA: Option<HashMap<String, BasicLanguageInfo>> = None;

pub fn global_basic_locale_data() -> &'static HashMap<String, BasicLanguageInfo> {
    unsafe {
        if GLOBAL_DATA.is_none() {
            GLOBAL_DATA = Some(serde_json::from_str::<HashMap<String, BasicLanguageInfo>>(
                &String::from_utf8_lossy(include_bytes!("basic_language_info_data.json")),
            ).unwrap());
        }
        GLOBAL_DATA.unwrap().borrow() // ERROR!
    }
}

The compiler errors:

  • cannot move out of static item GLOBAL_DATA
  • cannot return value referencing temporary value

Previously, instead of storing static data this way, I kept it in a structure managed by the caller, which carried the static resources and provided parsing methods on its instances (Intl::new(), intl.parse_locale(), intl.parse_country()). But it seems easier to be able to call parse_locale()/parse_country() without an Intl instance.

Another option I'm considering is translating the JSON resources into match blocks. Still, it would be nice to be able to keep lazily parsing the JSON.
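For what it's worth, that generated-code idea might look something like this (a hypothetical language_name function with made-up entries, standing in for whatever a build step would emit from the JSON):

```rust
// Hypothetical output of a build step that turns the JSON data into code.
fn language_name(tag: &str) -> Option<&'static str> {
    match tag {
        "en" => Some("English"),
        "pt" => Some("Portuguese"),
        _ => None,
    }
}

fn main() {
    assert_eq!(language_name("en"), Some("English"));
    assert_eq!(language_name("xx"), None);
}
```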

Help appreciated.

It's because unwrap consumes the thing it is called on. Consider calling as_ref().unwrap().
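To illustrate the difference, here is a standalone sketch (using HashMap<String, String> in place of your real value type): unwrap() takes the Option by value and moves the contents out, while as_ref() converts &Option<T> into Option<&T>, so unwrap() then yields only a reference.

```rust
use std::collections::HashMap;

fn main() {
    let mut map = HashMap::new();
    map.insert("en".to_string(), "English".to_string());
    let data: Option<HashMap<String, String>> = Some(map);

    // data.unwrap() would move the HashMap out of `data`;
    // data.as_ref().unwrap() only borrows it.
    let borrowed: &HashMap<String, String> = data.as_ref().unwrap();
    assert_eq!(borrowed.get("en").map(String::as_str), Some("English"));

    // `data` is still intact because it was only borrowed.
    assert!(data.is_some());
}
```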

Of course, be aware that a static mut is dangerous.


Thanks. Everywhere I use this function, I'm calling hashmap.get() directly on the borrow.

I'd recommend you use one of the crates that puts a safe abstraction over this pattern, like lazy_static or once_cell.
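As a sketch of that safe-abstraction pattern: the same API that once_cell provides was later adopted into std as std::sync::OnceLock (assuming Rust 1.70+; with the once_cell crate, once_cell::sync::Lazy reads almost identically). The placeholder String values here stand in for BasicLanguageInfo:

```rust
use std::collections::HashMap;
use std::sync::OnceLock;

fn global_basic_locale_data() -> &'static HashMap<String, String> {
    static DATA: OnceLock<HashMap<String, String>> = OnceLock::new();
    DATA.get_or_init(|| {
        // In the real code this closure would hold the serde_json::from_str call.
        let mut m = HashMap::new();
        m.insert("en".to_string(), "English".to_string());
        m
    })
}

fn main() {
    let data = global_basic_locale_data();
    assert_eq!(data.get("en").map(String::as_str), Some("English"));
    // Repeated calls return a reference to the same initialized map.
    assert!(std::ptr::eq(data, global_basic_locale_data()));
}
```

No unsafe anywhere, and initialization is thread-safe by construction.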

If you want to do it with just std, though, you can do a couple of things to protect against accidental misuse:

  • Move the static mut declaration into the function body so that it can't be accessed anywhere else
  • Use std::sync::Once to handle initialization, so that multiple threads don't try to initialize the data simultaneously
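A minimal sketch combining both suggestions (placeholder String values standing in for BasicLanguageInfo; note this still relies on unsafe, and newer compilers warn about taking references to a static mut):

```rust
use std::collections::HashMap;
use std::sync::Once;

fn global_basic_locale_data() -> &'static HashMap<String, String> {
    // The static mut lives inside the function, so nothing else can touch it.
    static mut DATA: Option<HashMap<String, String>> = None;
    static INIT: Once = Once::new();
    INIT.call_once(|| unsafe {
        // Placeholder for the real serde_json parsing step.
        let mut m = HashMap::new();
        m.insert("pt".to_string(), "Portuguese".to_string());
        DATA = Some(m);
    });
    // Reading is sound here: call_once guarantees initialization finished
    // and nothing ever mutates DATA again.
    unsafe { DATA.as_ref().unwrap() }
}

fn main() {
    let data = global_basic_locale_data();
    assert_eq!(data.get("pt").map(String::as_str), Some("Portuguese"));
}
```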

Nice, I didn't know you could run code once for all threads... No, wait: I thought it would run by default in a library crate (outside the user's program's main function), but it'll help anyway.

This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.