> The code of the analysis-stats command might help here.
Thank you so much for the pointer, I will look into this!
> But, I really doubt this will result in a good solution for your problem. It starts with the fact that RA type inference isn't perfect yet. But even if it was, rust-analyzer needs to be able to expand your macro to infer the types of the arguments. I would really suggest going back to the original problem and looking for a simpler solution.
I concur with this actually, but I must say that I was unable to find a decent solution to my client problem and this is where I ended up while trying to approximate a solution.
I will try to explain better what my client wants, within the limits of the details I can share, since people more experienced than me may be able to point me in a better direction. I'm sorry if some points are not particularly detailed, but it seems I am not allowed to expose all of them.
Furthermore, this project is currently in an exploratory phase, where I'm asked to find out whether any solution, at least approximately correct, can be provided and how much effort it might require.
My client wants to provide a library whose aim is to reduce code size and move some non-essential, performance-heavy I/O computations from an embedded system to another computer, with much of the work executed at compile time.
In particular, he wants the user to be able to write a macro call such as the following:
```rust
somemacro!(metadata1, metadata2, data1, data2, ..., datan);
```
This expands to the code required at runtime and produces some intermediate representation, usable outside the code, both to build a "receiver" on the non-embedded system and to provide visualizations of the domain system.
The expansion itself should not be problematic and can be finished before the build step I'm trying to provide completes.
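To make the shape concrete, here is a minimal sketch of such a macro as a declarative macro. All names are invented for illustration; the real expansion would also have to emit the serialization code and the intermediate representation:

```rust
// Hypothetical sketch only: a macro in the requested call shape.
// Here it just bundles the metadata with the data values; the real
// expansion would also emit the runtime serialization code.
macro_rules! somemacro {
    ($meta1:expr, $meta2:expr, $($data:expr),+ $(,)?) => {{
        // keep only what is needed at runtime
        ($meta1, $meta2, vec![$($data),+])
    }};
}

fn main() {
    let (name, format, payload) = somemacro!("sensor-a", "u32-le", 1u32, 2, 3);
    println!("{name} {format} {payload:?}");
}
```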
metadata2 describes a format, through some Rust code, for data1, ..., datan.
Furthermore, data1, ..., datan will need to be serialized as binary data and sent over a wire (this part is what the macro expansion will probably end up providing).
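For the wire part, a hedged sketch of one possible binary encoding (a length-prefixed sequence of little-endian `u32`s; the actual format is an assumption on my side, not something fixed yet):

```rust
// Hedged sketch: frame the data as a length prefix followed by the
// values, all as little-endian u32. The layout is an assumption.
fn serialize_u32s(data: &[u32]) -> Vec<u8> {
    let mut out = Vec::with_capacity(4 + data.len() * 4);
    out.extend_from_slice(&(data.len() as u32).to_le_bytes()); // length prefix
    for v in data {
        out.extend_from_slice(&v.to_le_bytes());
    }
    out
}

fn main() {
    let bytes = serialize_u32s(&[1, 2]);
    println!("{bytes:?}");
}
```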
The input to the macro is all the information I'm allowed to work with, and the macro calls taken together must be seen as describing the complete set of information for that specific compilation.
The receiving system needs to be able to unpack this data and recognize both its memory layout and its "provenance" with regard to the original source code.
Then the data will need to be processed, as the original Rust structures, to produce some output.
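On the receiver side, the unpacking step might look like the following sketch. The frame layout here ([source_id: u32 LE][len: u32 LE][len × u32 LE payload]) is invented for illustration; the source id stands in for the "provenance" tag:

```rust
// Hedged sketch of a receiver: decode a frame whose assumed layout is
// [source_id: u32 LE][len: u32 LE][len * u32 LE payload].
fn parse_frame(bytes: &[u8]) -> Option<(u32, Vec<u32>)> {
    let id = u32::from_le_bytes(bytes.get(0..4)?.try_into().ok()?);
    let len = u32::from_le_bytes(bytes.get(4..8)?.try_into().ok()?) as usize;
    let mut payload = Vec::with_capacity(len);
    for i in 0..len {
        let start = 8 + i * 4;
        payload.push(u32::from_le_bytes(bytes.get(start..start + 4)?.try_into().ok()?));
    }
    Some((id, payload))
}

fn main() {
    // frame from source 7 carrying the values [1, 2]
    let frame = [7, 0, 0, 0, 2, 0, 0, 0, 1, 0, 0, 0, 2, 0, 0, 0];
    let (id, payload) = parse_frame(&frame).unwrap();
    println!("source {id}: {payload:?}");
}
```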
Some of the compile-time computations will need to store some state; for example, some identifiers need to be injected into the calls or into the intermediate representation, and some invariants need to hold (this last part I could probably express with dependent types, which Rust doesn't yet(?) support, but I might not be able to provide all of it, as there is still the requirement of moving some computations to compile time).
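For what it's worth, some invariants can be pushed to compile time without full dependent types, e.g. with const generics. A minimal sketch (the `Frame` type is invented for illustration):

```rust
// Hedged sketch: a fixed-length frame whose length is part of the type,
// so a length mismatch is a compile-time error rather than a runtime one.
struct Frame<const N: usize> {
    data: [u32; N],
}

impl<const N: usize> Frame<N> {
    fn new(data: [u32; N]) -> Self {
        Frame { data }
    }
    // the invariant (length) is carried by the type itself
    const LEN: usize = N;
}

fn main() {
    let f = Frame::new([1, 2, 3]);
    println!("len = {}, data = {:?}", Frame::<3>::LEN, f.data);
}
```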
Some of the first ideas I explored ranged from trait-based architectures with support for derives, to procedural macros which kept state in files, and so on.
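The "state in files" variant looked roughly like the sketch below: each macro invocation appends a record describing the call to a shared manifest, which a later build step reads back. The path and record format are invented for illustration; in a real procedural macro this would run at expansion time:

```rust
// Hedged sketch of "state in a file": append one record per macro
// invocation to a shared manifest for a later build step to consume.
use std::fs::OpenOptions;
use std::io::Write;
use std::path::Path;

fn record_invocation(manifest: &Path, entry: &str) -> std::io::Result<()> {
    let mut f = OpenOptions::new().create(true).append(true).open(manifest)?;
    writeln!(f, "{entry}")
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("somemacro_manifest.txt");
    record_invocation(&path, "call: metadata1, metadata2, 3 items")?;
    let contents = std::fs::read_to_string(&path)?;
    println!("{contents}");
    Ok(())
}
```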
The biggest stumbling block here is that my client has a strong desire for a specific interface which doesn't allow much information to be passed by the user, and he seems immovable on this point.
Furthermore, to provide some of the required invariants and to ease the requested API, at some point I need knowledge of the types being used, both to build a correct "receiver" for them and to provide external visualization of what is being done. Further work may need to be done with those types, from generating implementations to some analysis or visualization.
This step will be tied either to compile time or to an external process (such as a step external to the compilation itself) and shouldn't run on the embedded system or at runtime.
In this sense, I feel that the "compilation" phase is something similar to Protobuf, but with a description format that is integrated into normal Rust code and that is quite sparse in the direct information it provides.
Furthermore, it seems to fit better as an outside tool or as an additional compilation or analysis step.
Given those requirements and restrictions, the major problem has been retrieving type information to explore the required code generation while still being able to provide the requested interface.
I've considered and looked into several different possibilities for retrieving that information: analysis of Rust's intermediate representations, code generation from procedural macros that is optimized away at runtime, compiler plugins, nightly features, and a build-time static analysis and code generation step.
I'm currently trying this rust-analyzer based solution, which I feel might be an approximate middle ground between satisfying my client's requests and still being able to use some of the underlying Rust machinery, specifically with regard to the analysis of source code and type inference.
I'm sorry that some points are lacking in detail, and I cannot claim any certainty about the correctness of my analysis, so I may be passing on already muddied and misleading knowledge, but this is an approximation of where my question comes from.
If you happen to have any pointers, albeit with partial details, that you would feel like sharing, I would love to hear them, as I'm currently dissatisfied with all of the ideas I could come up with and am exploring.