Tokio_postgres: How to deserialize array of custom types?

This is an interesting issue that I am now encountering for the second time.

Basically, I have a struct with a bunch of sub-structs in a vector. Storing this in Postgres works out of the box with a simple insert generator and then a simple execute query via tokio_postgres.
The struct is simple:

#[derive(Debug, Default, Clone, Eq, PartialEq)]
pub struct ServiceConfig {
    svc_id: ServiceID,
    name: String,
    version: u32,
    online: bool,
    description: String,
    health_check_uri: String,
    base_uri: String,
    dependencies: Vec<ServiceID>,
    endpoints: Vec<Endpoint>, 
}

#[derive(ToSql, FromSql)]
#[derive(Debug, Default, Clone, Eq, PartialEq)]
pub struct Endpoint {
    name: String,
    version: u32,
    uri: String,
    port: u32,
    protocol: ProtocolType,
}

#[derive(ToSql, FromSql)]
#[derive(Debug, Copy, Clone, Default, Eq, PartialEq)]
#[postgres(name = "protocoltype")]
pub enum ProtocolType {
    #[default]
    UnknownProtocol = 0,
    GRPC = 1,
    HTTP = 2,
    UDP = 3,
}

The DB Schema is pretty straightforward as well:

CREATE TYPE public.ProtocolType AS ENUM (
            'UnknownProtocol',
            'GRPC',
            'HTTP',
            'UDP'
            );

CREATE TYPE public.Endpoint AS(
            name       VARCHAR,
            version    INT4,
            base_uri   VARCHAR,
            port       INT4,
            protocol   ProtocolType
            );

CREATE TABLE IF NOT EXISTS public.service(
            id               INT4 PRIMARY KEY,
            name             VARCHAR  UNIQUE NOT NULL,
            version          INT2 NOT NULL,
            online           boolean NOT NULL,
            description      VARCHAR NOT NULL,
            health_check_uri VARCHAR NOT NULL,
            base_uri         VARCHAR NOT NULL,
            dependencies     INT2[],
            endpoints        Endpoint[] NOT NULL
            );
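One Postgres detail worth flagging for this schema: unquoted SQL identifiers are folded to lowercase, so these types end up in `pg_type` as `protocoltype` and `endpoint`, not `ProtocolType` and `Endpoint`. That is why the `#[postgres(name = "protocoltype")]` attribute above uses lowercase. A minimal sketch of the folding rule, using a hypothetical helper written only for illustration:

```rust
/// Mimics Postgres identifier folding: unquoted identifiers are
/// lowercased, while double-quoted identifiers keep their case.
/// (Hypothetical helper for illustration -- not part of any library.)
fn fold_identifier(ident: &str) -> String {
    if ident.len() >= 2 && ident.starts_with('"') && ident.ends_with('"') {
        ident[1..ident.len() - 1].to_string() // quoted: case preserved
    } else {
        ident.to_lowercase() // unquoted: folded to lowercase
    }
}

fn main() {
    // CREATE TYPE public.ProtocolType ... is stored as "protocoltype".
    assert_eq!(fold_identifier("ProtocolType"), "protocoltype");
    // CREATE TYPE public.Endpoint ...     is stored as "endpoint".
    assert_eq!(fold_identifier("Endpoint"), "endpoint");
    // Only CREATE TYPE public."Endpoint" would preserve the capital E.
    assert_eq!(fold_identifier("\"Endpoint\""), "Endpoint");
}
```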

Here is the interesting thing. When I "flatten" the vector, i.e. turn it into something like endpoint_health_fields.., endpoint_metric_fields.., etc., I can deserialize the record, although with a custom from_sql. This is really verbose, but it works, since you read the result row literally column by column.

However, if I keep the vector with the custom type, and try to deserialize the array of endpoints returned from the query, I get the following error:

"error retrieving column 8: error deserializing column 8: cannot convert 
between the Rust type `alloc::vec::Vec<common_config::config_types::endpoint::Endpoint>` 
and the Postgres type `_endpoint`"

The custom from_sql method that triggers this error:

impl ServiceConfig {
    /// Converts a SQL row into a ServiceConfig object.
    pub fn from_sql_row(row: &Row) -> Self {
        let db_id = row.get::<usize, i32>(0);
        let db_name = row.get::<usize, String>(1);
        let db_version = row.get::<usize, i16>(2);
        let db_online = row.get::<usize, bool>(3);
        let db_description = row.get::<usize, String>(4);
        let db_health_check_uri = row.get::<usize, String>(5);
        let db_base_uri = row.get::<usize, String>(6);
        let db_dependencies = row.get::<usize, Vec<i16>>(7);
        let dependencies: Vec<ServiceID> = db_dependencies
            .iter()
            .map(|id| ServiceID::from(*id)) // Converts integer back to Enum
            .collect();

        let db_endpoints = row.get::<usize, Vec<Endpoint>>(8);

        ServiceConfig::new(
            ServiceID::from(db_id),
            db_name,
            db_version as u32,
            db_online,
            db_description,
            db_health_check_uri,
            db_base_uri,
            dependencies,
            db_endpoints,
        )
    }
}

The tricky thing, though, is that row.get must be typed, which means I can't really parse the underlying result string and reconstruct my Rust Endpoint type. But the macro-generated ToSql and FromSql implementations for the Endpoint type don't work either, which leaves me wondering whether there is something I am missing.

Also, this issue does not occur with arrays of primitive types, which have a default deserialization implementation, so there must be something off in my custom implementation.

How do I deserialize my array of custom types correctly?

Any thoughts, ideas or links to share?

The setup you have should work - the issue is that the ToSql and FromSql derives expect the type to be called "Endpoint" (with a capital E) in Postgres. See here for how to adjust that: postgres_types - Rust

Thank you @sfackler

When I adjust the naming, as you said, I get a slightly different error.

Specifically, when I add the annotation from the documentation:

#[derive(Debug, ToSql, FromSql)]
#[derive(Default, Clone, Eq, PartialEq)]
#[postgres(name = "Endpoint")]
pub struct Endpoint {
    name: String,
    version: u32,
    uri: String,
    port: u32,
    protocol: ProtocolType,
}

Then I get a similar error:

error retrieving column 8: error deserializing column 8: 
cannot convert between the Rust type `alloc::vec::Vec<common_config::config_types::endpoint::Endpoint>` 
and the Postgres type `_endpoint`

I am not sure which field the problem is coming from, but I will try to deserialize it without the protocol enum to see if that is the cause.

@sfackler

Is there any possible way to retrieve the raw return string and bypass the macros to deserialize this stuff manually?

You can cast the column type to a string. The client uses the binary protocol - there is no raw return string.
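For the record, the cast route would mean selecting something like `endpoints::text` and then parsing the Postgres row-literal syntax by hand. That is doable but fiddly. A minimal sketch of such a parser, greatly simplified: it assumes no embedded commas, quotes, backslashes, or NULLs inside field values, and `parse_endpoint_array` is a hypothetical helper, not a library function:

```rust
/// Very simplified parser for the text representation of a Postgres
/// array of composites, e.g. `{"(health,1,/health,8080,HTTP)"}`.
/// Assumes no embedded commas, quotes, backslashes, or NULL fields --
/// a real parser must handle the full quoting rules.
fn parse_endpoint_array(text: &str) -> Vec<Vec<String>> {
    text.trim_start_matches('{')
        .trim_end_matches('}')
        .split("\",\"") // boundary between two quoted row literals
        .map(|item| {
            item.trim_matches('"') // strip the element's outer quotes
                .trim_start_matches('(')
                .trim_end_matches(')')
                .split(',')
                .map(str::to_string)
                .collect()
        })
        .collect()
}

fn main() {
    let raw = r#"{"(health,1,/health,8080,HTTP)","(metrics,1,/metrics,8080,HTTP)"}"#;
    let endpoints = parse_endpoint_array(raw);
    assert_eq!(endpoints.len(), 2);
    assert_eq!(endpoints[0][0], "health");
    assert_eq!(endpoints[1][4], "HTTP");
}
```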

When I do this, I get only the metadata from the column, but not the actual data of the row;
this is not working. I had this issue before, and I have never been able to get the deserialization of custom arrays to work.

I think the only way forward really is to remove the array of custom types, normalize the type into a separate table, and then resolve the relation and deserialize the new table type back into a proper Rust type.
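That normalized design would store each endpoint as its own row keyed by a service id and regroup the rows after the query. A sketch of the regrouping step; the struct and the assumed table layout are illustrative, not the exact schema from above:

```rust
use std::collections::HashMap;

// Illustrative endpoint shape; mirrors the struct from the question,
// minus the protocol enum.
#[derive(Debug, Clone, PartialEq)]
struct Endpoint {
    name: String,
    version: u32,
    uri: String,
    port: u32,
}

/// Groups flat (service_id, endpoint) rows -- as a JOIN against a
/// normalized endpoint table would return them -- back into one
/// Vec<Endpoint> per service.
fn group_endpoints(rows: Vec<(i32, Endpoint)>) -> HashMap<i32, Vec<Endpoint>> {
    let mut by_service: HashMap<i32, Vec<Endpoint>> = HashMap::new();
    for (service_id, endpoint) in rows {
        by_service.entry(service_id).or_default().push(endpoint);
    }
    by_service
}

fn main() {
    let health = Endpoint { name: "health".into(), version: 1, uri: "/health".into(), port: 8080 };
    let metrics = Endpoint { name: "metrics".into(), version: 1, uri: "/metrics".into(), port: 8080 };
    let grouped = group_endpoints(vec![(1, health), (1, metrics.clone()), (2, metrics)]);
    assert_eq!(grouped[&1].len(), 2);
    assert_eq!(grouped[&2].len(), 1);
}
```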

Update:

I ended up re-implementing the Postgres layer with Diesel and, along the way, contributed a new tutorial to the Diesel project:

I should have used Diesel from day one, but as so often in life, you only know after the fact.
