Performance, Async Concurrency, and Data Modeling Challenges in a Rust Backend for My Texas Roadhouse Menu Website

I’m currently building the backend for my Texas Roadhouse menu website using Rust, and I’ve run into several challenges around performance, async handling, and data modeling that I’d appreciate some guidance on. The backend is responsible for serving menu data, handling search queries, and aggregating nutrition and pricing information from multiple sources. While the system works functionally, I’m noticing increasing response times as the dataset grows. Requests that used to respond in under 50ms are now regularly taking 300–500ms, even though CPU usage remains low. This makes me wonder if my async architecture or data access patterns are fundamentally flawed.

One issue I suspect involves how I’m handling async concurrency with Tokio. I’m using an async web framework and spawning tasks for menu lookups, filtering, and recommendation scoring. However, under moderate load, it seems like tasks are queuing up instead of running concurrently. I’ve checked that I’m not blocking on obvious synchronous calls, but I still see symptoms that resemble thread starvation. I’m unsure whether I should be using a different runtime configuration, more granular task spawning, or avoiding spawn altogether in some parts of the request lifecycle.
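Tasks queuing up despite low CPU is a classic symptom of hidden blocking inside async tasks: one synchronous call pins a runtime worker, and every future scheduled on that worker waits behind it. A minimal sketch of the fix, assuming the `tokio` crate (you mention Tokio, but none of your actual handlers are shown, so `score_menu_items` and its 5 ms of work are hypothetical stand-ins):

```rust
use std::time::Duration;

// CPU-heavy or otherwise blocking work. If awaited inline in a handler,
// it occupies a runtime worker thread for its full duration.
fn score_menu_items(n: usize) -> usize {
    std::thread::sleep(Duration::from_millis(5)); // simulated blocking work
    n * 2
}

#[tokio::main]
async fn main() {
    // Bad: calling score_menu_items(10) directly here would block a
    // worker thread, and other request futures queue behind it.

    // Better: move blocking/CPU-bound work onto Tokio's dedicated
    // blocking thread pool so the async workers keep polling futures.
    let score = tokio::task::spawn_blocking(|| score_menu_items(10))
        .await
        .expect("blocking task panicked");
    println!("score = {score}");
}
```

Before changing runtime configuration, it's worth confirming the diagnosis with a tool like `tokio-console`, which shows per-task poll durations and makes starvation visible.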

Another challenge is data modeling and memory usage. Menu items are stored in memory using nested structs with owned String fields for names, descriptions, ingredients, and categories. As the menu expanded, memory usage grew more than expected, and cloning these structures for request handling seems expensive. I’ve considered switching to Arc<str>, string interning, or borrowing with lifetimes, but I’m struggling to design a clean model that doesn’t become overly complex. Balancing Rust’s ownership model with performance is proving harder than anticipated for this use case.
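For immutable menu data, `Arc<str>` is usually the lowest-friction of the options you list: clones become refcount bumps instead of byte copies, and you avoid threading lifetimes through your handler types. A sketch, with the field set and item text assumed from your description:

```rust
use std::sync::Arc;

// Illustrative model: shared, immutable strings. Cloning a MenuItem
// copies three pointers and bumps refcounts; the text is never duplicated.
#[derive(Clone)]
struct MenuItem {
    name: Arc<str>,
    description: Arc<str>,
    category: Arc<str>,
}

fn make_item() -> MenuItem {
    MenuItem {
        name: Arc::from("6 oz. Sirloin"),
        description: Arc::from("Hand-cut sirloin, grilled to order."),
        category: Arc::from("Hand-Cut Steaks"),
    }
}

fn main() {
    let steak = make_item();
    let for_request = steak.clone(); // cheap: no string data copied
    assert!(Arc::ptr_eq(&steak.name, &for_request.name));
    assert_eq!(Arc::strong_count(&steak.name), 2);
    println!("shared name: {}", for_request.name);
}
```

Interning falls out of this almost for free: keep one `Arc<str>` per distinct category in a `HashMap` and hand out `Arc::clone`s, so hundreds of items share a single allocation.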

Search and filtering logic is another area causing concern. I implemented a custom in-memory index for menu items to support fast category filtering and keyword search. While it works correctly, the code has become increasingly hard to reason about, especially around lifetimes and shared references. I’m worried that my current approach might be fighting the borrow checker instead of working with it. I’d love advice on idiomatic Rust patterns for building read-heavy, low-latency data structures like this.
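One idiomatic way to stop fighting the borrow checker in a read-heavy index is to store plain `usize` item IDs in the index instead of references back into the item list, then share the whole built structure behind an `Arc`. A standalone sketch of that shape (the item names are made up; your real index presumably stores full items, not just names):

```rust
use std::collections::HashMap;
use std::sync::Arc;

// Build once, share immutably. Indexing by usize ID avoids storing
// &str references into the item list, which is where lifetime tangles
// in self-referential index structs usually come from.
struct MenuIndex {
    items: Vec<Arc<str>>,                    // item names (illustrative)
    by_keyword: HashMap<String, Vec<usize>>, // lowercased keyword -> item IDs
}

impl MenuIndex {
    fn build(names: &[&str]) -> Arc<Self> {
        let mut items = Vec::new();
        let mut by_keyword: HashMap<String, Vec<usize>> = HashMap::new();
        for (id, name) in names.iter().enumerate() {
            items.push(Arc::from(*name));
            for word in name.split_whitespace() {
                by_keyword.entry(word.to_lowercase()).or_default().push(id);
            }
        }
        Arc::new(MenuIndex { items, by_keyword })
    }

    fn search(&self, keyword: &str) -> Vec<&str> {
        self.by_keyword
            .get(&keyword.to_lowercase())
            .map(|ids| ids.iter().map(|&id| &*self.items[id]).collect())
            .unwrap_or_default()
    }
}

fn main() {
    let index = MenuIndex::build(&["Cactus Blossom", "Grilled Shrimp", "Grilled Chicken Salad"]);
    println!("{:?}", index.search("grilled"));
}
```

Because the index is immutable after `build`, handlers can hold an `Arc<MenuIndex>` and read concurrently with no locks at all; rebuilding on menu changes just swaps in a new `Arc`.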

I’m also dealing with serialization overhead. The site serves JSON responses, and profiling shows a noticeable amount of time spent in serialization, especially for endpoints that return full menu sections. I’m using serde, but I’m not sure if there are better ways to structure my response types to reduce overhead—such as using flattened structs, pre-serialized buffers, or streaming responses. Since the Texas Roadhouse menu pages are hit frequently, even small inefficiencies add up.
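Since menu sections change far less often than they're read, one option is to serialize each section once and serve cached bytes, so serde never runs on the hot path. A std-only sketch of that cache, with the JSON hand-written for illustration (in the real backend it would come from something like `serde_json::to_vec`, and the section name and price are made up):

```rust
use std::collections::HashMap;
use std::sync::Arc;

// Pre-serialized response cache: serialize when the menu changes,
// then each request clones a cheap Arc instead of re-encoding JSON.
struct ResponseCache {
    sections: HashMap<String, Arc<[u8]>>,
}

impl ResponseCache {
    fn new() -> Self {
        ResponseCache { sections: HashMap::new() }
    }

    fn store(&mut self, section: &str, json: &str) {
        self.sections.insert(section.to_string(), Arc::from(json.as_bytes()));
    }

    // Handlers call this: O(1) lookup, zero serialization per request.
    fn get(&self, section: &str) -> Option<Arc<[u8]>> {
        self.sections.get(section).cloned()
    }
}

fn main() {
    let mut cache = ResponseCache::new();
    cache.store("steaks", r#"[{"name":"6 oz. Sirloin","price":12.99}]"#);
    let body = cache.get("steaks").expect("section cached");
    println!("{} bytes cached", body.len());
}
```

Most Rust web frameworks accept a body built from `Arc`-backed bytes, so the clone handed to the response is refcounted rather than copied.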

Overall, I’m trying to determine whether these issues stem from poor async design, inefficient data structures, or simply a lack of familiarity with Rust best practices for web backends. If anyone has experience building high-performance Rust services with large in-memory datasets, I’d really appreciate advice on structuring async workloads, managing shared data safely, and keeping memory usage under control. This project is a learning experience, but it’s also a production system, and I want to make sure I’m building it in a way that scales cleanly as the Texas Roadhouse menu continues to grow. Sorry for the long post.

Sorry to say, but unfortunately you haven't given us much information to work with. It would help to share something closer to your specific problem: the code where you're seeing the bad performance, or better yet, a way to reproduce it.

While the system works functionally, I’m noticing increasing response times as the dataset grows. Requests that used to respond in under 50ms are now regularly taking 300–500ms, even though CPU usage remains low.

Another challenge is data modeling and memory usage. Menu items are stored in memory using nested structs with owned String fields for names, descriptions, ingredients, and categories. As the menu expanded, memory usage grew more than expected, and cloning these structures for request handling seems expensive.

This would presumably explain some of the slowness you're experiencing. It sounds like you are retrieving a large dataset and then performing your computations in memory, which would explain why response times grow with the dataset and why memory consumption keeps increasing.

Unfortunately, without more information about your system, all I can suggest is to rethink your approach in the endpoints where you are experiencing performance issues. The goal should always be to perform the minimum work needed at every layer of your system. A few examples:

  • Leverage your data layer as much as you can, and prefilter the dataset you'll operate on
  • Use data structures that are better suited for the operations you need
  • Simplify and reuse data and logic as much as you can
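As one concrete instance of "data structures better suited for the operation": if you filter by a numeric attribute like price, a `BTreeMap` keyed on that attribute turns the filter into a range scan instead of a pass over every item. A small sketch (item names and prices are illustrative, not real menu data):

```rust
use std::collections::BTreeMap;

fn main() {
    // Price in cents -> item names. BTreeMap keeps keys sorted, so a
    // "between $5 and $10" filter is a range scan, not a full scan.
    let mut by_price: BTreeMap<u32, Vec<&str>> = BTreeMap::new();
    let data = [(599, "Cactus Blossom"), (1299, "6 oz. Sirloin"), (899, "Grilled Shrimp")];
    for (price, name) in data {
        by_price.entry(price).or_default().push(name);
    }

    // Only the keys inside the range are visited.
    let five_to_ten: Vec<&str> = by_price
        .range(500..=1000)
        .flat_map(|(_, names)| names.iter().copied())
        .collect();
    println!("{five_to_ten:?}");
}
```

The same shape works for any sortable attribute (calories, prep time), and you can keep several such maps of IDs alongside one canonical item list.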

Can you give us an idea of the number of items on this menu, the types of searches you do, and an example of the info you want to display? It's pretty likely this could be solved with a very simple approach.