An AI assistant (LLM) for

Dear Rust Community,

I'm writing to propose integrating an AI assistant or a large language model (LLM) like GPT into the website. This AI assistant could significantly enhance the user experience by offering a more intuitive and efficient way to navigate the vast amount of information available.

Here are some potential functionalities the AI assistant could provide:

  • Enhanced Documentation Search: Many users struggle to find specific information within the extensive Rust documentation. The AI assistant could understand natural language queries and pinpoint relevant sections, saving users valuable time.
  • Intelligent Blog Exploration: The assistant could analyze the blog content and suggest articles or references related to a user's search query. This would be particularly helpful for users seeking solutions to specific problems or wanting to delve deeper into a particular topic.
  • Context-Aware Code Examples: Imagine the AI assistant suggesting relevant code examples based on the user's current code snippet or the problem they're trying to solve. This could be a game-changer for learning and development.
  • Interactive Tutorials: An AI-powered interactive tutorial system could tailor learning experiences based on the user's skill level and progress. This would create a more personalized and engaging learning environment.
  • Community Integration: The assistant could potentially connect users with relevant discussions on forums or Stack Overflow based on their current task or question. This would foster a more collaborative learning environment.

Benefits for the Rust Community:

  • Increased User Engagement: Easier access to information would keep users engaged and productive within the Rust ecosystem.
  • Reduced Learning Curve: AI-powered tutorials and suggestions could significantly reduce the learning curve for new Rust programmers.
  • Improved Documentation Reach: The AI assistant could help users discover hidden gems within the documentation, leading to a better understanding of Rust's capabilities.
  • Strengthened Community Interaction: By facilitating connections to relevant discussions, the AI assistant could foster a more vibrant and supportive community.

Next Steps:

This is just a starting point for discussion. I encourage the community to consider the potential benefits and challenges of integrating AI into the site. Perhaps a pilot program or user survey could be conducted to gauge interest and identify the most valuable functionalities.

Thank you for your time and consideration. I believe this integration has the potential to significantly enhance the Rust learning and development experience for everyone.



Your proposal is probably inevitable, though I largely disagree with your benefits. Honestly, sounds to me like all you want is @quinedot to be always online :sweat_smile:


You… did use ChatGPT to write this post, didn't you? It has such an obvious style that I can't even imagine the amount of eye-rolling going on amongst teachers and TAs grading student work these days.


These things could be helpful, but LLMs are controversial due to the risk of hallucinating wrong answers, as well as on ideological grounds: they are trained on copyrighted material without compensation or even attribution (even though it's probably legal), and they are seen as a threat to creative labor.

MDN added such an assistant, and it wasn't well received by the community.


What I wouldn’t be totally opposed to, actually, is integrating some Copilot-like thing into play.r-l., assuming it would in fact make it more convenient to write quick code snippets.


To be clear, I also believe there are potential benefits to semantic search and LLMs. But not the ones listed in the OP, except "improved documentation reach."


Not advocating anything like co-pilot but rather LLM driven semantic search.
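To make "semantic search" concrete, here is a toy sketch of ranked document search. It uses bag-of-words cosine similarity as a stand-in for real learned embeddings, so it only captures word overlap, not meaning; all the names and the tiny document set are illustrative, not any actual rustdoc machinery.

```rust
use std::collections::HashMap;

// Toy vectorizer: word counts. A real semantic search would replace this
// with embeddings from a trained model.
fn vectorize(text: &str) -> HashMap<String, f64> {
    let mut v = HashMap::new();
    for word in text.to_lowercase().split_whitespace() {
        *v.entry(word.to_string()).or_insert(0.0) += 1.0;
    }
    v
}

// Cosine similarity between two sparse vectors.
fn cosine(a: &HashMap<String, f64>, b: &HashMap<String, f64>) -> f64 {
    let dot: f64 = a.iter().filter_map(|(k, x)| b.get(k).map(|y| x * y)).sum();
    let norm = |v: &HashMap<String, f64>| v.values().map(|x| x * x).sum::<f64>().sqrt();
    let denom = norm(a) * norm(b);
    if denom == 0.0 { 0.0 } else { dot / denom }
}

fn main() {
    let docs = [
        "open and read a file with std fs",
        "spawn threads and share state",
        "write bytes to a file using std io",
    ];
    let query = vectorize("how do I read a file");
    // Rank all documents by similarity to the query, best first.
    let mut ranked: Vec<_> = docs
        .iter()
        .map(|d| (*d, cosine(&query, &vectorize(d))))
        .collect();
    ranked.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    println!("best match: {}", ranked[0].0);
}
```

The point of the sketch is only the retrieval shape (embed query, embed documents, rank by similarity); an LLM-driven version would differ mainly in how the vectors are produced.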


Making it completely opt in -- so it doesn't appear in the UI at all unless enabled by a preference -- would avoid some resistance, for me at least. I wouldn't use it, for now at least, but I can understand why it could be important to start working on it and improving it. People are already using GPT for Rust questions and then coming to the forum with the output and asking for more help. So eventually perhaps a built-in GPT for the forum would be an improvement over that.


I'm very flattered...assuming you don't mean my posts are LLM-quality :sweat_smile: :wink:.


Re. “semantic search” are there good precedents to look at for that? I have a hard time imagining the kinds of capabilities LLMs can offer there.

For context: My personal experience with ChatGPT has always been that it’s almost impossible to get actual search results of actual “true” data / reference material out of it. Though perhaps that’s because it isn’t tailored to a narrower but better-known (to it) data set. It certainly cannot reliably and accurately quote from its original language training data, nor give references. I have not worked with models that are “fine-tuned” (I believe that’s the correct term) to specific data sets yet, though. And as far as the web search capabilities it provides are concerned[1], the Bing queries it produces are most often almost childishly naive, with no hope of finding any good results. (And then it just goes on and summarizes the useless web search results anyway, even if they have barely anything to do with the concrete question I had.)

  1. not that that capability must have a lot to do with the proposed idea of “semantic search”, but at least both involve “search” and “LLM”, and it’s the best in terms of experience that I personally have ↩︎


@jasn-armstrng As a user who registered today with this being your only forum topic, can you share anything about your previous experience learning, using, or teaching Rust yourself?


ChatGPT was an interesting waste of time for me last week.

After spending half a day struggling to use some Rust crate (it does not matter what it is here) I had made no progress. The docs were not helping my simple mind. The examples did not cover what I wanted to do. The error messages coming out of it made no sense to me.

In desperation I turned to ChatGPT. I tried to ask my question as clearly as possible and give as much context as I could. ChatGPT's initial response looked kind of reasonable. This was encouraging.

However, it did not compile. I prompted again with the error message and it politely apologised for the mistake and gave a new version of its suggestion.

That did not work either.

This conversation went on for quite a while, but ChatGPT's responses got worse and worse. It was hallucinating methods on that crate that did not exist, all kinds of nonsense. Starting over from scratch with a different tack did not help. After three hours I gave up and went down the pub.

Three beers later, in a desolate, quiet bar, a solution sprung into my mind. I cracked open the MacBook and tried it. It worked!

a) ChatGPT is worse at Rust than even me after three beers!
b) I'm too stupid to prompt ChatGPT in a productive way.


Just out of curiosity (as that does tend to make a real difference in capabilities in my experience)[1] was that the (paid) ChatGPT 4 model or the ChatGPT 3.5 one?

  1. by which – if it was version 3.5 – I am in no way trying to claim that I have any expectation of version 4 being necessarily any better at your particular task you gave it. I am just curious, and not trying to promote paid ChatGPT subscriptions ↩︎


Only the free ChatGPT 3.5.

I'm not into paying until I see some signs it might actually be useful.

So far, on a few tasks now, not looking good.

All I can imagine is a screenshot someone takes in the future where someone tricks the Rust LLM into giving out instructions on how to enrich uranium, or something ridiculous like that. The technology is cool, but it's unpredictable, like people.

The main issue I see is not the LLM itself, but who is hosting it. Whoever hosts it will carry the liability of its responses. Additionally, whoever hosts it can affect its biases, which I foresee being a mess.

I argue that a Rust LLM is cool, but Rust shouldn't host it.


Thanks for the question. I've just started using Rust, coming from mainly Python.

I want to be able to use the extensive resources already on and not have to go to SO, ChatGPT, Copilot, YouTube, or books to know how to idiomatically open/read/write to a file for example.

If this is possible, and what is highlighted here is a "reading documentation" skill issue, please let me know; I'm open to recommendations.


Three points here:

  • Rust documentation search is pretty bad currently. It seems to match only on types and item names, not the actual documentation text. This led me to asking @BurntSushi where to find replace in regex-automata recently. It turns out it was called interpolate. The module documentation text does contain the word replace, though. This is not the first time something like this has happened. I believe just adding proper full-text search would be a huge step forward.
  • Copilot for smarter tab completion is the one AI thing I actually found useful. Sure, it hallucinates quite a bit, but I generally know what I want, and it saves keystrokes when it gets it right. It is really good for repetitive cases (handle enum variants a, b, c, and d according to a pattern: write one by hand, get the others for free). Given that I have RSI (carpal tunnel, both hands) from too much computer usage, I love this.
  • AI customer service is infuriating. I had to call FedEx the other day. They had something that even pretended to type on a keyboard while looking up a shipping number. The whole experience left a bad taste in my mouth.
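The full-text-search point in the first bullet doesn't even need an LLM; a tiny inverted index over documentation prose already catches the replace/interpolate case. This is a minimal sketch, not rustdoc's actual search code — real doc search would also need markdown tokenizing, stemming, and ranking, and the example documents are made up.

```rust
use std::collections::{HashMap, HashSet};

// A minimal inverted index: maps each lowercased word to the set of
// document ids whose text contains it.
struct Index {
    postings: HashMap<String, HashSet<usize>>,
}

impl Index {
    fn build(docs: &[&str]) -> Self {
        let mut postings: HashMap<String, HashSet<usize>> = HashMap::new();
        for (id, doc) in docs.iter().enumerate() {
            // Split on anything non-alphanumeric to get words.
            for word in doc.to_lowercase().split(|c: char| !c.is_alphanumeric()) {
                if !word.is_empty() {
                    postings.entry(word.to_string()).or_default().insert(id);
                }
            }
        }
        Index { postings }
    }

    // Return sorted ids of documents containing the query word.
    fn search(&self, word: &str) -> Vec<usize> {
        let mut ids: Vec<usize> = self
            .postings
            .get(&word.to_lowercase())
            .map(|s| s.iter().copied().collect())
            .unwrap_or_default();
        ids.sort();
        ids
    }
}

fn main() {
    // Doc 0's *name* might be "interpolate", but its prose mentions
    // "replace", so a full-text query still finds it.
    let idx = Index::build(&[
        "Interpolates capture references, i.e. search and replace",
        "Builds a lazy DFA",
    ]);
    println!("docs matching \"replace\": {:?}", idx.search("replace"));
}
```

Searching the item names alone would miss doc 0 here; indexing the prose is what closes the gap described above.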

Understandable. From the fact that I was asking about your experience, you can probably tell I’ve never used it for Rust development either. I find it super useful for learning more about foreign languages, for example.

At translations and reformulating text it’s perfect, of course; spotting grammar and stylistic issues also works pretty well, etc. When asking grammar questions, the results vary: sometimes good, sometimes lots of hallucination.

Even for simple-ish tasks, one strength of the better model was that it’s just way better at accurately following instructions. For example, if I want it to point out grammar mistakes in a text, and specify that it should list the mistakes rather than rewrite the whole text into a fixed version, that’s the kind of thing the GPT-3 models sometimes just plain did not do.


Note that one of the factors here is that the entire documentation search code and index are distributed along with every set of generated documentation, so that it is private and offline-compatible. So, functionality improvements need to be weighed against increased size of the output.


I believe there could be ways to mitigate this (compress the index file, shard the index based on n-grams, ...). Also, as I understand it, crates currently share CSS and JS resources (that is one of the reasons it uses nightly)?
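The sharding idea could look something like the sketch below: split the index by each word's leading bigram, so a client fetches only the shard(s) a query needs rather than the whole index. The shard-key scheme is purely illustrative, not an actual rustdoc proposal.

```rust
use std::collections::HashMap;

// Shard key: the word's first two lowercased characters (illustrative).
fn shard_key(word: &str) -> String {
    word.to_lowercase().chars().take(2).collect()
}

// Group index words into shards by their key. In a real deployment each
// shard would be a separately downloadable file.
fn shard(words: &[&str]) -> HashMap<String, Vec<String>> {
    let mut shards: HashMap<String, Vec<String>> = HashMap::new();
    for w in words {
        shards.entry(shard_key(w)).or_default().push(w.to_string());
    }
    shards
}

fn main() {
    let shards = shard(&["replace", "regex", "interpolate", "index"]);
    // "replace"/"regex" land in the "re" shard, "interpolate"/"index" in "in".
    for (key, words) in &shards {
        println!("{key}: {words:?}");
    }
}
```

A query for "replace" would then download only the "re" shard, which is how sharding trades a larger total output size for smaller per-query transfers.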

Anyway I opened a thread on internals about this (pre-pre-pre-rfc?), so if you have specific insights, please share over there.