How to optimally store data with hierarchical relations for comparison

I have a dataset, parsed from JSON, that looks like this:

[
  {
    "type": "server",
    "id": "yb.tabletserver",
    "attributes",
    "metrics": [
      {
        "name": "mem_tracker",
        "value": 8005600
      },
      {
        "name": "mem_tracker_Call",
        "value": 0
      },
      {
        "name": "handler_latency_outbound_transfer",
        "total_count": 0,
        "min": 0,
        "mean": 0,
        "percentile_99": 0,
        "max": 0,
        "total_sum": 0
      },
      ....

There are 4 types; some types occur more than once and are then distinguished by id.
I parse these using serde into a Vec of structs, each of which contains a Vec of metric structs, with the two metric shapes parsed dynamically via an enum of the two metric types.

What I want to do is obtain the same dataset at a later point in time, and then produce a Vec containing the differences between the two. For the struct with total_count and total_sum, those two fields should be shown as differences; for the others, only the latest figures make sense, because a difference of those values is meaningless.
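The two comparison rules can be captured in a single function over a simplified enum (names are illustrative, not from the actual code): counter-style metrics get old-to-new deltas, gauge-style metrics just report the latest reading.

```rust
#[derive(Debug, Clone, PartialEq)]
enum Reading {
    // counter style: total_count/total_sum only increase, so deltas are useful
    Counter { total_count: i64, total_sum: i64 },
    // gauge style: a point-in-time value, a delta would be meaningless
    Gauge { value: i64 },
}

// Produce the comparison result for one metric; None if the metric
// changed kind between the two snapshots.
fn compare(old: &Reading, new: &Reading) -> Option<Reading> {
    match (old, new) {
        (
            Reading::Counter { total_count: oc, total_sum: os },
            Reading::Counter { total_count: nc, total_sum: ns },
        ) => Some(Reading::Counter {
            total_count: nc - oc,
            total_sum: ns - os,
        }),
        // gauges: keep only the latest value
        (Reading::Gauge { .. }, Reading::Gauge { value }) => {
            Some(Reading::Gauge { value: *value })
        }
        _ => None,
    }
}
```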

My current idea is a series of nested HashMaps, so values can be looked up directly and looping over one snapshot lets me find the corresponding figure in the other: a HashMap keyed by type name, whose value is a HashMap keyed by id, which in turn holds a HashMap of metrics keyed by name, with a struct of figures as the value.

Looping over the innermost map, it should be reasonably fast to find the exact same metric in the other snapshot and produce the difference.
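That nested-map lookup could look something like this sketch, with the metric value simplified to an `i64` instead of the full struct of figures. Each lookup is an O(1) hash probe, so the whole diff is linear in the number of metrics:

```rust
use std::collections::HashMap;

// One snapshot keyed as: type -> id -> metric name -> value,
// the three-level HashMap layout described above.
type Snapshot = HashMap<String, HashMap<String, HashMap<String, i64>>>;

// Walk the later snapshot and look each metric up in the earlier one;
// metrics absent from the earlier snapshot are skipped here.
fn diff(earlier: &Snapshot, later: &Snapshot) -> Vec<(String, String, String, i64)> {
    let mut out = Vec::new();
    for (ty, ids) in later {
        for (id, metrics) in ids {
            for (name, new_val) in metrics {
                if let Some(old_val) = earlier
                    .get(ty)
                    .and_then(|ids| ids.get(id))
                    .and_then(|m| m.get(name))
                {
                    out.push((ty.clone(), id.clone(), name.clone(), new_val - old_val));
                }
            }
        }
    }
    out
}
```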

Is this the best way of doing this? I can't be the first who wants to do such a thing?

I "solved" the issue by going a different way, which is creating a vector of structs that can hold all the data.
