Clean code review



I have created my first crate, which is available on GitHub.

I would appreciate any code review to help improve this project.

The goal of the project is to compute all the permutations of the elements in a given Vec.

Thank you




Haven’t looked very closely into it yet, but I think there is one improvement you can add, namely functions that return a permutation via just indices.

So instead of giving back another vector of structs, just give back a vector of integers. This lets the user work with non-Clone types and avoid expensive copying.
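For example, here is a sketch of what such an index-based variant could look like (hypothetical, not the crate’s actual API), using Heap’s algorithm to generate permutations of `0..n` that the caller can then use to index into any slice:

```rust
// Hypothetical sketch: generate all permutations of the indices 0..n
// with Heap's algorithm. Callers index into their own slice, so the
// element type never needs to be Clone.
fn index_permutations(n: usize) -> Vec<Vec<usize>> {
    fn heap(k: usize, idx: &mut Vec<usize>, out: &mut Vec<Vec<usize>>) {
        if k <= 1 {
            out.push(idx.clone());
            return;
        }
        for i in 0..k - 1 {
            heap(k - 1, idx, out);
            // Heap's algorithm: which element gets swapped depends
            // on the parity of k.
            if k % 2 == 0 {
                idx.swap(i, k - 1);
            } else {
                idx.swap(0, k - 1);
            }
        }
        heap(k - 1, idx, out);
    }

    let mut idx: Vec<usize> = (0..n).collect();
    let mut out = Vec::new();
    heap(n, &mut idx, &mut out);
    out
}

fn main() {
    let items = ["a", "b", "c"];
    let perms = index_permutations(items.len());
    assert_eq!(perms.len(), 6); // 3! = 6
    for p in &perms {
        // Only indices are copied; `items` itself is never cloned.
        let view: Vec<&str> = p.iter().map(|&i| items[i]).collect();
        println!("{:?}", view);
    }
}
```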

I’d also personally add a way to reuse vectors (via mutable references, passing ownership of the entire vector, etc.), since creating a new vector for each permutation is quite expensive (in both memory and time), though this may not work well with Iterator without some slight redesigning.
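One way to sketch the buffer-reuse idea (again hypothetical, not the crate’s API) is an in-place step function in the style of C++’s `std::next_permutation`: it advances a slice to its next lexicographic permutation through a mutable reference, so no new vector is allocated per permutation:

```rust
// Hypothetical sketch: advance `arr` to its next lexicographic
// permutation in place. Returns false once `arr` was already the
// last permutation.
fn next_permutation<T: Ord>(arr: &mut [T]) -> bool {
    let n = arr.len();
    if n < 2 {
        return false;
    }
    // Find the start of the longest non-increasing suffix.
    let mut i = n - 1;
    while i > 0 && arr[i - 1] >= arr[i] {
        i -= 1;
    }
    if i == 0 {
        return false; // arr is sorted in descending order: done
    }
    // Find the rightmost element greater than the pivot arr[i - 1].
    let mut j = n - 1;
    while arr[j] <= arr[i - 1] {
        j -= 1;
    }
    arr.swap(i - 1, j);
    // Turn the suffix into the smallest possible order.
    arr[i..].reverse();
    true
}

fn main() {
    let mut v = vec![1, 2, 3];
    loop {
        println!("{:?}", v); // the same buffer is reused every step
        if !next_permutation(&mut v) {
            break;
        }
    }
}
```

A side benefit of this formulation is that duplicate elements are handled for free: only distinct permutations are produced.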

Out of curiosity, what are your anticipated use cases for this library?



I agree with you about using indexes, references, and mutability. I may add new methods to handle those other cases.

I coded it that way because it was easier for me as a beginner.

Meanwhile, do you have any advice about:

  • documentation
  • code coverage measurement
  • file and module organisation
  • logging: I have a println! macro used in the utils module
  • crates publishing

About your curiosity: I like it. One thing I can tell you is that my next crate will compute the shortest path from node A to node B in a directed graph. Then I will use both crates in a dedicated web app.





Documentation

Since this is a library, you normally do this via doc comments in the code, and once it’s published on crates.io, docs.rs will generate the documentation for you.

You can embed examples in the doc comments, and cargo test will run those examples for you as well.
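For instance, a doc comment with an embedded example might look like this (`my_crate` and `factorial` are placeholder names; the example in the doc comment runs as a doc-test under `cargo test` when the function lives in a library crate):

```rust
/// Returns the factorial of `n`.
///
/// # Examples
///
/// ```
/// // This example is compiled and run by `cargo test` as a doc-test.
/// assert_eq!(my_crate::factorial(4), 24);
/// ```
pub fn factorial(n: u64) -> u64 {
    // An empty product (n == 0) correctly yields 1.
    (1..=n).product()
}

fn main() {
    assert_eq!(factorial(4), 24);
}
```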

Add extra code comments if your crate is very critical, and you want to make sure people reviewing your code can check your design and discover errors easily.

Code coverage

Normally people use Travis CI or other continuous integration services for this, run kcov (a code coverage measuring tool) through it, then upload the code coverage report to Codecov or Coveralls.

For libraries that do not require communication (no networking, no file IO, etc.), you should aim for above 90% code coverage. There will be some noise, so don’t freak out if you’re off by a few percent, but aim for roughly that range.

For binaries, code coverage is a bit trickier: you basically need to build custom scripts to run kcov on your binary with specific commands, then upload all the coverage reports to the same services.

Files and modules organisation

There aren’t fixed standards (as expected), but generally the goal is maintainability, so avoiding overly massive files is one thing you can do.

Make the best use of module separation as well, since you can keep things private to each module and limit usage-error propagation at the module boundary this way.
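A minimal sketch of what that privacy boundary looks like (the names here are hypothetical):

```rust
// Hypothetical layout: only the public entry point crosses the
// module boundary; the helper stays private to `permute`.
mod permute {
    /// Public entry point: the only item visible to users.
    pub fn count_permutations(n: u64) -> u64 {
        factorial(n)
    }

    // Private helper: invisible outside `permute`, so internal
    // changes here cannot break downstream callers.
    fn factorial(n: u64) -> u64 {
        (1..=n).product()
    }
}

fn main() {
    assert_eq!(permute::count_permutations(4), 24);
    // permute::factorial(4); // would not compile: private function
}
```

In a real crate each module would typically live in its own file (`src/permute.rs`), with the same visibility rules applying.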


Logging

No experience with it, sorry.

Crates publishing

Publish a 0.X.X version first, to see if things look right on crates.io and whether docs.rs picks it up properly, then move on to publishing a 1.0.0 version.

Follow SemVer as best as you can.

Other advice

Keep a changelog whether your project is a library or a binary. This will help users determine when to update and what new features to expect.

Have some form of service that checks whether your dependencies are up to date; most crates I’ve seen use deps.rs.

Pointers to my stuff

You will find crates with much better practices; I’m pointing to my own stuff purely because I’m most familiar with it and can readily answer any questions you happen to have.


  • the README has badges linking to the services used (docs, deps, code coverage, etc.)
  • Extra stuff I did:
    • Since this is a library for doing error correction, it is extremely important that it is largely bug-free. Things were reviewed very thoroughly and went through a lot of refactoring (basically every line was rewritten a few times and reviewed several times); in particular, I added “audit” comments to facilitate code reviews.

I probably missed a few things, but hopefully I’ve got most of them right.

Nice, looking forward to seeing the final product!


One more thing: look into cross-platform build templates for Travis CI and AppVeyor; this will let you test your library and binary much more thoroughly, in addition to getting high code coverage.

Most projects use trust, I think.