I'm using VS Code and I have a situation where code compiles and runs fine, but Rust Analyzer is reporting errors. This is with rustc 1.68.2 (9eb3afe9e 2023-03-27), rust-analyzer version 0.3.1463-standalone, and VS Code version 1.7.0.
Error output from "OUTPUT" -> "Rust Analyzer Language Server" window:
There are also errors reported in the source code window, where RA reports a "type-mismatch" error wherever sender_map_insert is invoked, such as here:
expected &{unknown}, found &ActorSender
rust-analyzer [type-mismatch](https://rust-analyzer.github.io/manual.html#type-mismatch)
This code actually won't pass its tests, because I need nightly for it; but since this fails with stable, I thought that would be the more interesting bug.
So I'd like some guidance on how I can help find the root cause of the problem. I suspect the first request might be to create a simpler way to reproduce the problem, but I thought I'd ask before going to that extra effort, in case someone says "Yeah, known problem" or "You're doing xyz incorrectly".
The cyclic deps errors are a known issue with r-a: they arise when a crate in your workspace has a dev-dependency on another crate that forms a dependency cycle back to it. This is most likely also the reason why r-a emits that type error, as we don't handle these kinds of cycles yet.
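For illustration, here is a minimal workspace shape that triggers this (the crate names are hypothetical): `foo` depends on `bar`, and `bar` has a dev-dependency back on `foo`. Cargo accepts this, because dev-dependencies only matter when building `bar`'s own tests, but it still puts a cycle in the crate graph that r-a sees.

```toml
# foo/Cargo.toml
[package]
name = "foo"
version = "0.1.0"

[dependencies]
bar = { path = "../bar" }

# bar/Cargo.toml
[package]
name = "bar"
version = "0.1.0"

[dev-dependencies]
foo = { path = "../foo" }   # completes the cycle: foo -> bar -> (dev) foo
```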
In general, IDE plugins and language servers don't use the compiler's own mechanisms for checking the code (the reasons escape me, as there is an obvious opportunity for disconnects and confusion here). When in doubt, invoke (and believe) the compiler. If it says there's an error, then there's a compile error; if it doesn't, there isn't.
Everything else coming from various shiny tools is basically sleight of hand, and disagreements between external tools and the compiler aren't exactly rare. (There are at least 1-2 posts per week on this forum about the exact same kind of problem.)
Because rustc is optimized for one-shot operation, whereas rust-analyzer is optimized for quick iteration and low latency.
from: llogiq
In the past there was RLS, which did exactly that. But the approach of rust-analyzer was found to be more performant.
from: Dreeg_Ocedam
Re-using the whole of rustc isn't possible, as that would be too slow. In terms of big-O, the work to process a change in rustc is proportional to the size of the codebase, while, for interactive IDE usage, it should be proportional to the size of the change itself. We actually did try that approach with RLS a while back, and we were not able to make it fast enough.
Re-using bits of rustc was hard, because rustc wasn't really a modular codebase. For example, the input to the parser was not only the string to be parsed, but the whole compilation state up to that moment (it used to literally be stuffed in a thread local). This is the part which probably could have been better! E.g., the Zig compiler's parser takes a string and returns an AST, without needing any extra context.
But maybe it couldn't! Due to the way Rust's parsing, macro expansion and name resolution are intertwined at the language level, it's hard to separate them out in the implementation. And those early stages are where the difference between an IDE and a compiler is most profound.
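To make that contrast concrete, here is a sketch of the two shapes of parser API. All the types and names are stand-ins I made up for illustration; neither is rustc's or rust-analyzer's real interface:

```rust
/// Stand-in for a lossless syntax tree.
struct SyntaxTree(String);

/// Stand-in for the accumulated compiler session (interner, spans, macro state).
struct CompileSession {
    files_seen: usize,
}

/// Context-free parsing, Zig-style: text in, tree out. Because the result
/// depends only on `source`, it can be cached per file and recomputed in
/// time proportional to the change, which is what an IDE needs.
fn parse_standalone(source: &str) -> SyntaxTree {
    SyntaxTree(source.to_owned())
}

/// Batch-compiler style: parsing also reads and mutates global session state,
/// so it can only run as part of one monolithic pass over the whole codebase.
fn parse_in_session(session: &mut CompileSession, source: &str) -> SyntaxTree {
    session.files_seen += 1;
    SyntaxTree(source.to_owned())
}

fn main() {
    let standalone = parse_standalone("fn main() {}");
    let mut session = CompileSession { files_seen: 0 };
    let in_session = parse_in_session(&mut session, "fn main() {}");
    assert_eq!(standalone.0, in_session.0);
}
```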
Thanks. To be honest, I am extremely dissatisfied with this kind of reasoning: "let's deliver something wrong really fast rather than giving the right answer more slowly". Reminds me of the ancient IT joke from the '90s about Intel chips being faster while getting the result of division totally wrong.
The disconnect between RA and the compiler is a continuous pain point and confuses many users. The point of such a tool would be to make the programmer's life easier, so if it keeps spitting out spurious errors, then it achieves the opposite effect, and it's not helpful.
I furthermore fail to understand why the efforts aren't directed towards making the relevant parts of the compiler faster. We already have incremental compilation and a query system, after all. The really slow parts (codegen, optimization and linking) aren't needed for RA anyway, and I bet people would much rather wait 1-2 seconds for the correct result than being endlessly misled in a fraction of a second. But hey, priorities.
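For what it's worth, both rustc's incremental mode and rust-analyzer are built on that query idea: results are memoized against their inputs and recomputed only when an input actually changed. A toy sketch of the model (my own illustration, not salsa's or rustc's real API):

```rust
use std::collections::HashMap;

/// A toy demand-driven query cache, loosely illustrating the model behind
/// rustc's incremental queries and rust-analyzer's salsa. Not a real API.
struct Db {
    revision: u64,
    files: HashMap<String, (u64, String)>,      // name -> (changed_at, text)
    line_counts: HashMap<String, (u64, usize)>, // name -> (computed_at, result)
}

impl Db {
    fn new() -> Self {
        Db { revision: 0, files: HashMap::new(), line_counts: HashMap::new() }
    }

    /// Setting an input bumps the global revision and stamps the input.
    fn set_file(&mut self, name: &str, text: &str) {
        self.revision += 1;
        self.files.insert(name.to_owned(), (self.revision, text.to_owned()));
    }

    /// A derived query: memoized, recomputed only if its input changed since
    /// the cached result was produced. Work is proportional to the change,
    /// not to the size of the whole "codebase".
    fn line_count(&mut self, name: &str) -> usize {
        let (changed_at, text) = self.files[name].clone();
        if let Some(&(computed_at, n)) = self.line_counts.get(name) {
            if computed_at >= changed_at {
                return n; // still valid: cache hit
            }
        }
        let n = text.lines().count();
        self.line_counts.insert(name.to_owned(), (self.revision, n));
        n
    }
}

fn main() {
    let mut db = Db::new();
    db.set_file("a.rs", "fn main() {}\n");
    db.set_file("b.rs", "struct S;\n");
    assert_eq!(db.line_count("a.rs"), 1);
    db.set_file("b.rs", "struct S;\nstruct T;\n"); // editing b.rs ...
    assert_eq!(db.line_count("a.rs"), 1); // ... doesn't recompute a.rs's query
}
```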
I agree that it's clearly desirable to have one source of truth for this, which should likely take inspiration from Roslyn and from what various Rust teams are trying to accomplish right now.
The reason I think the disconnect happens is something like this:
1. Most languages so far have been developed (at least up to, say, 1.0, the first production-ready version) in a context where the state of the art w.r.t. tooling was downright deplorable. So not much inspiration to gain there.
2. Pre-1.0, the language developers have more pressing things on their minds: getting (a) a language design that actually works, and (b) an implementation that actually implements the design.
3. As a consequence of 1 and 2, accommodating both the language designers and the tooling-community-to-be is impossible at the larval stage.
4. Which leads us to the situation we have now with Rust: a retroactive attempt to design such a system, which is even harder than doing it up front, because it's like swapping out all the parts of the Ship of Theseus, one after another, while the ship is sailing and actually growing in volume and mass.
Well, there are efforts, but they mostly yield single-digit percentage gains per couple of releases, so that's more of an "optimize the existing data structures" kind of thing than a "radical redesign for Fun and Profit".
I do have to say, it feels to me like the librarification project is progressing excruciatingly slowly, but I have to temper that with the knowledge that I am subject to a form of bias: I'm an outsider and don't see all the work behind the scenes (e.g. in Polonius or Salsa) that might be happening as we speak. I have some experience myself with working on "foundational code": from the outside it really does seem like nothing is happening for a good long while, even though from the insider's perspective you're working your ass off; and then (from the outsider's perspective) suddenly a bunch of things seem to happen all at once, as the new foundation becomes actually usable.