Another Elasticsearch Client (UPDATE)


#1

Hi everyone!

I thought I’d share a project I’ve been working on for the last few months: an Elasticsearch client for Rust.

The API is generated as simple functions in the elastic_hyper crate for use on top of hyper, and an asynchronous rotor client is also in the works. Because the source is all generated by libsyntax, there’s no extra burden in maintaining both a synchronous hyper client and an async rotor client.

My focus is on providing strong typing over the core parts of the Elasticsearch API that are used everywhere and don’t change much: the core types, and responses/errors. So the Query DSL is just provided as a compiler plugin that serialises Rust-like structures into inline JSON:

#![feature(plugin)]
#![plugin(elastic_macros)]
extern crate hyper;
extern crate elastic_hyper as elastic;

// Requests take a standard hyper http client
let mut client = hyper::Client::new();

// Optional headers and url query parameters can be added
// Execute a querystring request on a local Elasticsearch instance
let mut res = elastic::search::post(
    &mut client, elastic::RequestParams::default(),
    "http://localhost:9200",
    json_str!({
        query: {
            query_string: {
                query: "*"
            }
        }
    })
).unwrap();

The core types are provided by the elastic_types crate; currently I only have support for dates, though. I’m thinking an attribute to generate a JSON mapping for structs would work nicely.
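
To give a rough idea of what I mean (everything below is a placeholder, nothing like this exists in elastic_types yet), the attribute would take a plain struct and produce a standard put-mapping body along these lines, written out by hand for now:

// A placeholder document type; field names and types are illustrative only.
struct Article {
    title: String,
    timestamp: String, // would become the elastic_types date type once mapped
}

// The kind of mapping body an attribute on `Article` might generate.
// Hand-written here; the shape is just the standard Elasticsearch
// put-mapping format, not output from any existing code.
fn article_mapping() -> &'static str {
    r#"{
        "properties": {
            "title": { "type": "string" },
            "timestamp": { "type": "date", "format": "basic_date_time" }
        }
    }"#
}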

It’s not up on crates.io yet (that’s my goal for this week), but I’d love some feedback on how things are looking. There are plenty of issues and milestones to give a sense of where I’m trying to go.


#2

Is there an interface that doesn’t require a compiler plugin? (And a way to use elastic_hyper without linking to that library?)


#3

The body is just a String, so however you get that is your business. The json macro just parses a token tree into a String. I’m thinking of changing the hyper functions to take the body as a Read instead, so a caller has more options.
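
So, for example, you can skip the plugin entirely and pass the body as a plain string; a minimal sketch, using the same search::post call as the example in the first post:

extern crate hyper;
extern crate elastic_hyper as elastic;

let mut client = hyper::Client::new();

// The body is just a String here; no json_str! or compiler plugin involved.
let body = r#"{ "query": { "query_string": { "query": "*" } } }"#.to_owned();

let mut res = elastic::search::post(
    &mut client, elastic::RequestParams::default(),
    "http://localhost:9200",
    body
).unwrap();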

The macros library is only included behind a feature gate for integration testing (which depends on getting serde-yaml working); it’s not actually used by elastic_hyper itself, so you can use elastic_hyper without requiring the compiler plugin. That should be clearer once I get the Cargo files cleaned up and have the crates on crates.io.

There’s lots of tidying up to be done!


#4

So I’ve been doing heaps of work since initially uploading to crates.io: lots of cleanup, and ongoing work on the type design.

elastic_hyper now has support for custom headers and URL query parameters. The function signatures are starting to get a bit messy, so I’ll contemplate ways to tidy that up.

elastic_macros has been cleaned up, and the json_str macro (formerly just json) no longer supports replacement tokens. For request bodies that can’t be fully known at compile time, use json_macros instead.
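
To illustrate the difference: json_str! only handles a body that’s fully spelled out in the source, so anything computed at runtime has to be spliced in some other way (format! works in a pinch; json_macros is the nicer option):

// Fully literal body: fine for json_str!, which just turns the token tree into a string.
let fixed = json_str!({
    query: {
        query_string: {
            query: "*"
        }
    }
});

// A body with a runtime value: json_str! can't splice `term` in any more,
// so build the string at runtime instead (or reach for json_macros).
let term = "rust";
let dynamic = format!(
    r#"{{ "query": {{ "query_string": {{ "query": "{}" }} }} }}"#,
    term
);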

Compatibility with Ben Ashford’s rs-es is also a top priority for type mapping.

Feedback from the community would be awesome to help drive in a useful direction :slight_smile:


#5

I’ve pushed out the first useful release of elastic_types that supports mapping structs with all the core field datatypes in Elasticsearch.

I’ve got a quick sample that uses elastic_hyper and elastic_types on nightly to build an index, map and index some data, and then query it back out again. The docs should also be in decent shape.
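
The full sample is too long to paste here, but the round trip looks roughly like this; for brevity the mapping and document bodies are written out by hand against plain hyper calls (the actual sample generates the mapping from elastic_types), and the index and type names are placeholders:

extern crate hyper;
extern crate elastic_hyper as elastic;

use std::io::Read;

let mut client = hyper::Client::new();

// Create the index with a hand-written mapping (the sample derives this
// from a mapped struct instead of spelling it out).
client.put("http://localhost:9200/articles")
    .body(r#"{ "mappings": { "article": { "properties": {
        "title": { "type": "string" },
        "timestamp": { "type": "date" }
    } } } }"#)
    .send()
    .unwrap();

// Index a document into the mapped type.
client.post("http://localhost:9200/articles/article")
    .body(r#"{ "title": "Hello", "timestamp": "2016-05-01T00:00:00Z" }"#)
    .send()
    .unwrap();

// Query it back out again with elastic_hyper, as in the first post;
// the body is just a plain string.
let mut res = elastic::search::post(
    &mut client, elastic::RequestParams::default(),
    "http://localhost:9200",
    r#"{ "query": { "match_all": {} } }"#.to_owned()
).unwrap();

let mut found = String::new();
res.read_to_string(&mut found).unwrap();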

There’s still lots to do, like proper compatibility with rs-es, but so far I’m happy with the way elastic_types generates mapping definitions.


#6

I’ve pushed out a new batch of releases for elastic_types.

There’s been a lot of cleaning up: better docs, samples, etc., as well as more default implementations of ElasticType for HashMap, BTreeMap and serde_json::Value.

We now also have compatibility with the mapping calls for rs-es, so you can use the strongly typed elastic_types mapping API in the strongly typed rs-es Query DSL. This implementation is going to change in the near future, though, since the approach rs-es takes to mapping is being refactored.

The current state of the API is a bit unclear right now; I’m loosely tracking the 5.x alpha branch of Elasticsearch, but not doing a particularly good job of it. The goal is to have proper compatibility with the new Elasticsearch release shortly after it hits GA.

Next in line is to get a proof-of-concept REST API implementation in rotor_http working, then clean up the CI process: split the crates into separate repos, etc. :slight_smile:


#7

My company uses ES. :clap: Keep it up! We’ll use it later.


#8

I’d love to hear how you go if you decide to check it out! :slight_smile: