I've been working on adding prepared statement caching to Diesel, and had an interesting idea. The initial implementation was similar to how we do it in Rails: construct the SQL string, then hash that string to determine a unique prepared statement name. However, the structure of Diesel can likely eliminate this cost entirely, as our queries tend to have unique types.
Our AST is primarily composed of zero-sized types. Every column and table gets a unique type to represent it, such as `users::id`. Most of our AST nodes are entirely generic, and sized based on their fields, such as `And<Lhs, Rhs>`. As such, a query like `users.left_outer_joins(posts).filter(users::name.eq(posts::author_name))` would continue to have a size of 0, but be uniquely identifiable as a type.
I had originally thought that we could do this with `TypeId::of`, but that function requires its type parameter to be `'static`. We have only one node where that isn't true, which is `Bound<T, U>`. For `Bound`, `T` represents the SQL type (always zero-sized), and `U` is the data being serialized. This is able to work with references, so I can't guarantee `'static`. Even if we removed the `'static` bound from `TypeId::of`, presumably `&'a i32` and `&'b i32` would be considered different types (if this is incorrect, please let me know, as `TypeId::of` probably would work).
What's especially interesting about this case is that for `Bound<T, U>` I would actually prefer to eliminate `U` entirely. I don't care whether it's `Bound<VarChar, &str>` or `Bound<VarChar, String>`, as it doesn't affect the query as a whole. That said, I think having two different statements for those two types is an acceptable cost, as long as lifetimes don't result in unbounded growth in the number of prepared statements.
I've been trying to think of ways to solve this, but without the ability to effectively control the return value of `TypeId::of` for that specific type, I'm at a bit of a loss. So I thought I'd reach out to see if there were any ideas.
Thanks for taking the time to read through this.