Hello.
I have been looking at rendering mustache templates, and the results were fast with small text samples. When I increase the test data to a much bigger template with more parameters, performance drops noticeably. I have looked at other implementations and they stay consistent at around one minute to render one million times with either test sample. The Rust version went from 2 seconds for the small sample to 90 seconds for the larger one, which is a big spread. Some increase with a larger data sample is to be expected, but I wasn't expecting it to be that much.
I was wondering if there are any suggestions for this code, such as buffering the output or taking a different approach, to get the timings more consistent.
I am iterating over the items first and converting each value to a string, since some may be nil/null, which breaks the rendering; a nil should just become an empty string. Maybe there is a way to convert only the nils, as most items are strings already (there is a rough sketch of what I mean after lib.rs below).
lib.rs
mod renderer;

use std::collections::HashMap;

use magnus::{exception::runtime_error, r_hash::ForEach, Error, RHash, Symbol, Value};

// Copy the Ruby hash into a HashMap<String, String>, stringifying every value
// so that nils end up as empty strings instead of breaking the render.
pub fn wrapper(template: String, params: RHash) -> Result<String, Error> {
    let mut data: HashMap<String, String> = HashMap::new();
    params.foreach(|key: Symbol, value: Value| {
        data.insert(key.to_string(), value.to_string());
        Ok(ForEach::Continue)
    })?;
    renderer::render(template, data).map_err(|e| Error::new(runtime_error(), e.to_string()))
}
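To show what I mean about only converting the nils, this is roughly the direction I had in mind. I have not benchmarked it, and I am assuming magnus's is_nil and RString::from_value behave the way I think they do:

use magnus::{r_hash::ForEach, value::ReprValue, Error, RHash, RString, Symbol, Value};
use std::collections::HashMap;

// Sketch: nil becomes an empty string, Ruby strings are taken as-is,
// and only the remaining values go through to_string().
fn to_params(params: RHash) -> Result<HashMap<String, String>, Error> {
    let mut data = HashMap::new();
    params.foreach(|key: Symbol, value: Value| {
        let text = if value.is_nil() {
            String::new()
        } else if let Some(s) = RString::from_value(value) {
            s.to_string()? // checked UTF-8 conversion of an actual Ruby String
        } else {
            value.to_string()
        };
        data.insert(key.to_string(), text);
        Ok(ForEach::Continue)
    })?;
    Ok(data)
}

I don't know whether skipping the extra conversion would actually matter for the timings, which is partly what I am asking.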
renderer.rs
extern crate mustache;

use std::collections::HashMap;

use mustache::Error;

// Compile the template and render it into an in-memory byte buffer,
// then convert the buffer back into a String.
pub fn render(template: String, params: HashMap<String, String>) -> Result<String, Error> {
    let template = mustache::compile_str(&template)?;
    let mut bytes = vec![];
    template.render(&mut bytes, &params)?;
    Ok(String::from_utf8(bytes).expect("Failed to encode string"))
}
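On the buffering / different-approach side, the only idea I have come up with so far is to keep the compiled template around instead of calling compile_str on every render, and to reuse one output buffer. I have not measured whether the compile step is actually what dominates on the big template, and the Renderer struct and the capacity guess below are only an illustration:

use std::collections::HashMap;

use mustache::{Error, Template};

// Sketch: compile once, then render repeatedly into a reused buffer.
pub struct Renderer {
    template: Template,
    buf: Vec<u8>,
}

impl Renderer {
    pub fn new(template: &str) -> Result<Self, Error> {
        Ok(Self {
            template: mustache::compile_str(template)?, // compiled a single time
            buf: Vec::with_capacity(64 * 1024),         // rough guess at the rendered size
        })
    }

    pub fn render(&mut self, params: &HashMap<String, String>) -> Result<String, Error> {
        self.buf.clear(); // keep the allocation between calls
        self.template.render(&mut self.buf, params)?;
        Ok(String::from_utf8(self.buf.clone()).expect("Failed to encode string"))
    }
}

If holding state like that between calls is a bad fit for how the extension gets called from Ruby, I would be glad to hear other options.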
Any suggestions on this are much appreciated.