I am a Rust beginner, and I got a little stuck while planning my first project: a tool for word-frequency analysis of a text file.
The basic feature was implemented last week ( Update README.md · freeze-dolphin/rfreq@1680e60 · GitHub )
But there is one problem: it takes too long to run on a big file.
So I decided to use multithreading, but I am really confused about how to do it.
You can check the source code from the link below:
(note that this commit contains an error)
My idea is to read the whole file first, then split the content into several parts and distribute them across threads. Each thread performs its analysis separately, and after all the work is done, the results are merged into one HashMap and printed out in some format.
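In case it helps the discussion, here is a minimal sketch of that plan using scoped threads (`std::thread::scope`, Rust 1.63+). The function names (`count_words`, `parallel_word_count`) are my own placeholders, not from the repo, and I split on line boundaries so no word gets cut in half between chunks:

```rust
use std::collections::HashMap;
use std::thread;

// Count word frequencies in one chunk of text.
fn count_words(chunk: &str) -> HashMap<String, usize> {
    let mut counts = HashMap::new();
    for word in chunk.split_whitespace() {
        *counts.entry(word.to_lowercase()).or_insert(0) += 1;
    }
    counts
}

// Split the text into roughly equal line-based chunks, count each
// chunk on its own thread, then merge the partial maps into one.
fn parallel_word_count(text: &str, num_threads: usize) -> HashMap<String, usize> {
    let lines: Vec<&str> = text.lines().collect();
    let chunk_size = (lines.len() / num_threads).max(1);

    thread::scope(|s| {
        // Spawn one scoped thread per chunk of lines.
        let handles: Vec<_> = lines
            .chunks(chunk_size)
            .map(|chunk| s.spawn(move || count_words(&chunk.join("\n"))))
            .collect();

        // Collect each thread's partial result into one HashMap.
        let mut total = HashMap::new();
        for handle in handles {
            for (word, n) in handle.join().unwrap() {
                *total.entry(word).or_insert(0) += n;
            }
        }
        total
    })
}

fn main() {
    let text = "the quick brown fox\nthe lazy dog\nthe end";
    let counts = parallel_word_count(text, 2);
    println!("{:?}", counts.get("the")); // Some(3)
}
```

Scoped threads avoid cloning the file contents, since the threads can borrow slices of the original `String` directly. Does this match what you had in mind, or is there a more idiomatic way (e.g. rayon)?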
I may not be very active on this forum, so a PR on my repo would be very convenient for me!